
WO2019098464A1 - Encoding method and associated apparatus, and decoding method and associated apparatus - Google Patents

Encoding method and associated apparatus, and decoding method and associated apparatus

Info

Publication number
WO2019098464A1
WO2019098464A1 (PCT/KR2018/003821)
Authority
WO
WIPO (PCT)
Prior art keywords
mode
encoding
unit
coding
current block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2018/003821
Other languages
English (en)
Korean (ko)
Inventor
이진영
최나래
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to KR1020207003130A priority Critical patent/KR20200074081A/ko
Publication of WO2019098464A1 publication Critical patent/WO2019098464A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • the present invention relates to a video encoding method and a video decoding method, and more particularly, to a method of efficiently encoding encoding mode information.
  • High-quality video requires a large amount of data when encoded.
  • The bandwidth available for delivering video data is limited, so the data rate at which video data can be transmitted may be restricted. Therefore, in order to transmit video data efficiently, a method of encoding and decoding video data at a higher compression ratio while minimizing the deterioration of image quality is needed.
  • Video data can be compressed by eliminating spatial redundancy and temporal redundancy between pixels. Since adjacent pixels tend to share common characteristics, encoding information is transmitted in data units consisting of pixels in order to eliminate the redundancy between adjacent pixels.
  • The pixel values of the pixels included in the data unit are not transmitted directly; instead, the method necessary to obtain the pixel values is transmitted.
  • A prediction method for predicting a pixel value similar to the original value is determined for each data unit, and the encoding information for the prediction method is transmitted from the encoder to the decoder. Also, since the predicted value is not exactly the same as the original value, residual data representing the difference between the original value and the predicted value is transmitted from the encoder to the decoder.
  • the prediction method is determined in consideration of the size of the encoding information and the residual data.
  • Data units obtained by dividing a picture have various sizes. As the size of a data unit increases, the prediction accuracy is more likely to decrease, but the amount of encoding information decreases. Therefore, the size of the block is determined according to the characteristics of the picture.
  • the prediction methods include intra prediction and inter prediction.
  • Intra prediction is a method of predicting pixels of a block from surrounding pixels of the block.
  • Inter prediction is a method of predicting pixels with reference to pixels of another picture referenced by a picture including a block. Therefore, spatial redundancy is removed by intra prediction, and temporal redundancy is eliminated by inter prediction.
  • the encoding information applied to the block can also be predicted from other blocks, thereby reducing the size of the encoded information.
  • the amount of residual data can be reduced by lossy compression of the residual data according to the transformation and quantization process.
  • a video coding method for generating coding information on a combining mode for a current block and a video decoding method for decoding a current block in accordance with coding information on a combining mode are disclosed.
  • Also disclosed is a computer-readable recording medium on which a program for executing, on a computer, a video encoding method and a video decoding method according to an embodiment of the present invention is recorded.
  • According to an embodiment, a video decoding method includes: obtaining, from a bitstream, a combining mode flag of a current block indicating whether a plurality of coding modes included in a combining mode are applied to the current block; and determining, based on the combining mode flag, whether the plurality of coding modes of the combining mode are applied to the current block.
  • According to an embodiment, a video decoding apparatus includes a coding mode determination unit that determines the plurality of coding modes of the combining mode as the coding modes of the current block when the combining mode is applied to the current block, and a decoding unit that decodes the current block according to the determined coding modes.
  • According to an embodiment, a video encoding apparatus includes an encoding unit that determines the encoding modes of a current block and generates a combining mode flag of the current block according to whether the encoding modes of the current block are identical to the plurality of encoding modes of a combining mode, and a bitstream generation unit that generates a bitstream including the combining mode flag.
  • a computer-readable recording medium on which a program for performing the video coding method and the video decoding method is recorded.
  • When the combining mode includes the coding modes of the current block, only the syntax elements related to the combining mode are included in the bitstream, and the plurality of syntax elements corresponding to the coding modes included in the combining mode are omitted, so the amount of signaled encoding information is reduced.
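  • The decoding flow just described can be illustrated with the following sketch. It is a simplified, hypothetical rendering of the idea (the bitstream helpers and mode names are assumptions, not the actual syntax of the disclosure): if the combining mode flag is set, all bundled coding modes are adopted at once and their individual syntax elements are skipped; otherwise each coding mode is parsed separately.

```python
# Illustrative sketch only; helper names and mode labels are hypothetical.
def decode_coding_modes(bitstream, combining_mode):
    """Return the coding modes of the current block.

    combining_mode: a predetermined bundle of coding modes, e.g.
    {"luma_intra": "DC", "chroma_intra": "DM", "conversion": "DCT-2"}.
    """
    combine_mode_flag = bitstream.read_flag()  # 1 bit: is the combining mode applied?
    if combine_mode_flag:
        # All bundled modes are inherited; their individual syntax elements are omitted.
        return dict(combining_mode)
    # Otherwise each coding mode is parsed from its own syntax elements.
    return {
        "luma_intra": bitstream.parse_luma_intra_mode(),
        "chroma_intra": bitstream.parse_chroma_intra_mode(),
        "conversion": bitstream.parse_conversion_mode(),
    }
```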
  • FIG. 1A is a block diagram of an image encoding apparatus based on an encoding unit according to a tree structure according to an embodiment of the present invention.
  • FIG. 1B shows a block diagram of a video decoding apparatus based on a coding unit according to a tree structure according to an embodiment.
  • FIG. 2 illustrates a process in which at least one encoding unit is determined by dividing a current encoding unit according to an embodiment.
  • FIG. 3 illustrates a process in which at least one encoding unit is determined by dividing a non-square encoding unit according to an embodiment.
  • FIG. 4 illustrates a process in which an encoding unit is divided based on at least one of block type information and division type information according to an embodiment.
  • FIG. 5 illustrates a method of determining a predetermined encoding unit among odd number of encoding units according to an exemplary embodiment.
  • FIG. 6 illustrates a sequence in which a plurality of coding units are processed when a current coding unit is divided to determine a plurality of coding units according to an exemplary embodiment.
  • FIG. 7 illustrates a process of determining that a current encoding unit is divided into an odd number of encoding units when the encoding units cannot be processed in a predetermined order, according to an embodiment.
  • FIG. 8 illustrates a process in which a first encoding unit is divided into at least one encoding unit according to an embodiment of the present invention.
  • FIG. 10 illustrates a process in which a square encoding unit is divided when the division type information indicates that it is not to be divided into four square encoding units, according to an embodiment.
  • FIG. 11 illustrates that the processing order among a plurality of coding units may be changed according to a division process of coding units according to an embodiment.
  • FIG. 12 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an exemplary embodiment.
  • FIG. 13 illustrates an index (PID) for distinguishing coding units and a depth, which can be determined according to the type and size of the coding units, according to an exemplary embodiment.
  • FIG. 14 shows that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
  • FIG. 15 illustrates a processing block serving as a reference for determining a determination order of a reference encoding unit included in a picture according to an embodiment.
  • FIG. 16 shows a block diagram of a video decoding apparatus 1600 according to an embodiment for determining a coding mode according to a combining mode.
  • FIGS. 17A and 17B show a method of determining the coding mode of the current block when the combining mode includes one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes.
  • FIGS. 18A and 18B show a method of determining the coding mode of the current block when the combining mode includes one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, and one of a plurality of conversion modes.
  • FIGS. 19A and 19B show a method of determining the coding mode of the current block when the combining mode includes one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, one of a plurality of conversion modes, and, additionally, information about whether coding mode m1 is applied.
  • FIG. 20 shows a method of determining the coding mode of the current block when the combining mode includes information about whether coding modes m1 and m3 are applied and information about coding mode m2.
  • FIG. 21A shows a method of determining an encoding mode according to a combining mode in an inter slice (P slice or B slice), and FIG. 21B shows a method of determining an encoding mode according to a combining mode in an intra slice (I slice).
  • FIG. 22A shows a method of determining an encoding mode according to a combining mode including a DC mode in an inter slice (P slice or B slice).
  • FIG. 22B shows a method of determining an encoding mode according to a combining mode including a DC mode in an intra slice (I slice).
  • FIG. 23 shows a syntax of an encoding unit according to an embodiment of the combining mode.
  • FIG. 24 shows a video decoding method 2400 according to an embodiment for determining a coding mode according to a combining mode.
  • FIG. 25 shows a video encoding apparatus 2500 according to an embodiment for determining whether to use the combining mode according to the encoding mode.
  • FIG. 26 shows a video encoding method 2600 according to an embodiment for determining whether to use a combining mode according to an encoding mode.
  • The terms used herein are, as far as possible, commonly used generic terms selected in consideration of the functions of the present invention, but they may vary depending on the intent of those skilled in the art, precedents, the emergence of new technology, and the like. Also, in certain cases, a term may be arbitrarily selected by the applicant, in which case its meaning is described in detail in the corresponding description of the invention. Therefore, the terms used in the present invention should be defined based on their meaning and the overall content of the present invention, not simply on their names.
  • The term "part" refers to a software or hardware component, such as an FPGA or an ASIC, and a "part" performs certain roles. However, a "part" is not limited to software or hardware. A "part" may be configured to reside on an addressable storage medium and may be configured to run on one or more processors.
  • Thus, as an example, a "part" includes components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • The functions provided in the components and "parts" may be combined into a smaller number of components and "parts", or further separated into additional components and "parts".
  • the " current block” means one of a coding unit, a prediction unit and a conversion unit which are currently encoded or decoded. For convenience of explanation, when it is necessary to distinguish other types of blocks such as a prediction unit, a conversion unit, a "current encoding block”, a “current prediction block”, and a “current conversion block” may be used.
  • “Sub-block” means a data unit divided from “current block”.
  • “upper block” means a data unit including " current block ".
  • A "sample" means data assigned to a sampling position of an image, that is, data to be processed.
  • For example, pixel values in an image of the spatial domain and transform coefficients in the transform domain may be samples.
  • A unit including at least one of these samples may be defined as a block.
  • FIG. 1A is a block diagram of an image encoding apparatus 100 based on an encoding unit according to a tree structure according to an embodiment of the present invention.
  • the image encoding apparatus 100 includes a maximum encoding unit determination unit 110, an encoding unit determination unit 120, and an output unit 130.
  • the maximum coding unit determination unit 110 divides a picture or a slice included in a picture into a plurality of maximum coding units according to the size of the maximum coding unit.
  • The maximum encoding unit may be a data unit of size 32x32, 64x64, 128x128, 256x256, or the like, and may be a square data unit whose width and height are powers of 2.
  • the maximum encoding unit determination unit 110 may provide the output unit 130 with maximum encoding unit size information indicating the size of the maximum encoding unit.
  • the output unit 130 may include the maximum encoding unit size information in the bitstream.
  • the encoding unit determination unit 120 determines the encoding unit by dividing the maximum encoding unit.
  • the encoding unit may be determined by the maximum size and the depth.
  • the depth can be defined as the number of times the encoding unit is spatially divided from the maximum encoding unit. Each time the depth increases by one, the encoding unit is divided into two or more encoding units. Therefore, as the depth increases, the size of the coding unit per depth decreases. Whether or not the coding unit is divided depends on whether or not the coding unit is efficiently divided by Rate-Distortion Optimization. And division information indicating whether or not the encoding unit is divided may be generated. The division information can be expressed in the form of a flag.
  • The encoding unit can be divided in various ways. For example, a square encoding unit can be divided into four square encoding units whose width and height are half of the original. A square encoding unit can be divided into two rectangular encoding units whose width is halved. A square encoding unit can be divided into two rectangular encoding units whose height is halved. A square encoding unit can be divided into three encoding units by dividing the width or height at a ratio of 1:2:1.
  • A rectangular encoding unit whose width is twice the height can be divided into two square encoding units.
  • A rectangular encoding unit whose width is twice the height can be divided into two rectangular encoding units whose width is four times the height.
  • A rectangular encoding unit whose width is twice the height can be divided into two rectangular encoding units and one square encoding unit by dividing the width at a ratio of 1:2:1.
  • A rectangular encoding unit whose height is twice the width can be divided into two square encoding units.
  • A rectangular encoding unit whose height is twice the width can be divided into two rectangular encoding units whose height is four times the width.
  • A rectangular encoding unit whose height is twice the width can be divided into two rectangular encoding units and one square encoding unit by dividing the height at a ratio of 1:2:1.
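  • To make the split geometries above concrete, the following sketch computes the sub-block sizes produced by each split type described (the split type names are illustrative labels, not terms used in the disclosure):

```python
# Sub-block sizes produced by the split types described above (illustrative labels only).
def split_sizes(width, height, split_type):
    if split_type == "QUAD":           # square -> four half-width, half-height squares
        return [(width // 2, height // 2)] * 4
    if split_type == "BINARY_VERT":    # halve the width
        return [(width // 2, height)] * 2
    if split_type == "BINARY_HORZ":    # halve the height
        return [(width, height // 2)] * 2
    if split_type == "TERNARY_VERT":   # divide the width at a 1:2:1 ratio
        return [(width // 4, height), (width // 2, height), (width // 4, height)]
    if split_type == "TERNARY_HORZ":   # divide the height at a 1:2:1 ratio
        return [(width, height // 4), (width, height // 2), (width, height // 4)]
    raise ValueError(split_type)

# Example: a 64x32 encoding unit divided at a 1:2:1 width ratio
# split_sizes(64, 32, "TERNARY_VERT") -> [(16, 32), (32, 32), (16, 32)]
```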
  • When two or more division methods are available in the image coding apparatus 100, information on which of those division methods can be used for the encoding units of each picture can be determined per picture. Therefore, only the division methods specific to each picture may be used. If the image encoding apparatus 100 uses only one division method, information on the usable division method is not separately determined.
  • Division type information indicating the division method of an encoding unit can be generated. If there is only one division method that can be used in the picture to which the encoding unit belongs, the division type information may not be generated. If the division method is adaptively determined from the encoding information around the encoding unit, the division type information may not be generated.
  • the maximum encoding unit may be divided up to the minimum encoding unit according to the minimum encoding unit size information.
  • the depth of the maximum encoding unit is the highest depth and the minimum encoding unit can be defined as the lowest depth. Accordingly, the encoding unit of the higher depth may include a plurality of encoding units of the lower depth.
  • the image data of the current picture is divided into the maximum encoding units.
  • the maximum encoding unit may include encoding units divided by depth. Since the maximum encoding unit is divided by the depth, the image data of the spatial domain included in the maximum encoding unit can be hierarchically classified according to the depth.
  • the maximum depth for limiting the maximum number of times the maximum encoding unit can be hierarchically divided or the minimum size of the encoding unit may be preset.
  • the encoding unit determination unit 120 compares the encoding efficiency when the encoding unit is hierarchically divided and the encoding efficiency when the encoding unit is not divided. The encoding unit determination unit 120 determines whether to divide the encoding unit according to the comparison result. If it is determined that the division of the encoding unit is more efficient, the encoding unit determination unit 120 divides the encoding unit hierarchically. If it is determined that it is efficient to not divide the encoding unit according to the comparison result, the encoding unit is not divided. Whether or not the encoding unit is divided can be determined independently of whether or not the adjacent encoding units are divided.
  • Whether or not an encoding unit is divided may be determined starting from encoding units of large depth in the encoding process. For example, the coding efficiency of the encoding units of the maximum depth is compared with the coding efficiency of the encoding unit whose depth is one less than the maximum depth, and it is determined, for each region of the maximum encoding unit, which of the two is encoded more efficiently. Then, according to the determination result, whether to divide the encoding unit whose depth is one less than the maximum depth is determined for each region of the maximum encoding unit.
  • Next, for each region of the maximum encoding unit, it is determined whether the encoding unit whose depth is two less than the maximum depth, or the combination of encoding units selected on the basis of the previous determination result (encoding units whose depth is one less than the maximum depth or encoding units of the maximum depth), is encoded more efficiently.
  • The same determination process is performed sequentially on encoding units of smaller depths, and finally it is determined whether the maximum encoding unit itself or a hierarchical structure generated by dividing the maximum encoding unit is encoded more efficiently, whereby the hierarchical structure of the maximum encoding unit is determined.
  • Alternatively, whether to divide an encoding unit may be determined starting from encoding units of small depth in the encoding process. For example, the coding efficiency of the maximum encoding unit is compared with the coding efficiency of the encoding units whose depth is one greater than that of the maximum encoding unit, and it is determined which is encoded more efficiently. If the coding efficiency of the maximum encoding unit is better, the maximum encoding unit is not divided. If the coding efficiency of the encoding units whose depth is one greater is better, the maximum encoding unit is divided, and the same comparison process is repeated for the divided encoding units.
  • the algorithm for obtaining the hierarchical tree structure of the maximum coding unit considering the coding efficiency and the calculation amount can be designed in various ways.
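  • A minimal top-down sketch of the split decision described above is given below; rd_cost and split are placeholder callables supplied by the caller, and real encoders add pruning and early termination.

```python
# Top-down recursive split decision driven by rate-distortion cost (illustrative sketch).
def decide_tree(block, depth, max_depth, rd_cost, split):
    """Return (best_cost, tree): tree is either the block itself or a list of sub-trees."""
    no_split_cost = rd_cost(block)            # cost of encoding the block without dividing it
    if depth == max_depth:
        return no_split_cost, block
    split_cost, children = 0, []
    for sub in split(block):                  # e.g. a quad split into four sub-blocks
        cost, subtree = decide_tree(sub, depth + 1, max_depth, rd_cost, split)
        split_cost += cost
        children.append(subtree)
    if split_cost < no_split_cost:            # dividing is more efficient
        return split_cost, children
    return no_split_cost, block               # keep the encoding unit undivided
```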
  • the coding unit determination unit 120 determines the most efficient prediction and conversion method for each coding unit in order to determine the efficiency of each coding unit.
  • the encoding unit may be partitioned into predetermined data units to determine the most efficient prediction and conversion method.
  • the data unit may have various forms according to the division method of the encoding unit.
  • a method of dividing an encoding unit for determining a data unit may be defined as a partition mode. For example, when the encoding unit of size 2Nx2N (where N is a positive integer) is not divided, the size of the prediction unit included in the encoding unit is 2Nx2N.
  • the size of the prediction unit included in the encoding unit may be 2NxN, Nx2N, NxN, etc. depending on the partition mode.
  • The partition mode according to an exemplary embodiment includes not only symmetric data units obtained by dividing the height or width of an encoding unit at a symmetric ratio, but also data units divided at an asymmetric ratio such as 1:n or n:1, data units divided into other geometric forms, and data units of arbitrary shapes.
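  • For illustration, the prediction-unit sizes implied by the partition modes mentioned above can be computed as follows; the symmetric mode names follow the 2Nx2N notation used in the text, while the asymmetric modes and their 1:3 ratio are HEVC-style examples assumed here for concreteness.

```python
# Prediction-unit sizes for the partition modes discussed above (illustrative).
def partition_sizes(n, mode):
    """n is the half-size N of a 2Nx2N encoding unit."""
    two_n = 2 * n
    if mode == "PART_2Nx2N":
        return [(two_n, two_n)]
    if mode == "PART_2NxN":        # two horizontal halves
        return [(two_n, n)] * 2
    if mode == "PART_Nx2N":        # two vertical halves
        return [(n, two_n)] * 2
    if mode == "PART_NxN":         # four quarters
        return [(n, n)] * 4
    if mode == "PART_2NxnU":       # asymmetric 1:3 height split (assumed HEVC-style mode)
        return [(two_n, n // 2), (two_n, 3 * n // 2)]
    if mode == "PART_2NxnD":       # asymmetric 3:1 height split (assumed HEVC-style mode)
        return [(two_n, 3 * n // 2), (two_n, n // 2)]
    raise ValueError(mode)

# Example: partition_sizes(16, "PART_Nx2N") -> [(16, 32), (16, 32)] for a 32x32 encoding unit
```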
  • the encoding unit can be predicted and converted based on the data unit included in the encoding unit.
  • the data unit for prediction and the data unit for conversion can be separately determined.
  • the data unit for prediction can be defined as a prediction unit
  • the data unit for conversion can be defined as a conversion unit.
  • the partition mode applied in the prediction unit and the partition mode applied in the conversion unit may be different from each other and prediction of the prediction unit and conversion of the conversion unit in the coding unit may be performed in parallel and independently.
  • An encoding unit may be divided into one or more prediction units to determine an efficient prediction method.
  • an encoding unit can be divided into one or more conversion units to determine an efficient conversion method.
  • the division of the prediction unit and the division of the conversion unit can be performed independently. However, when a reconstructed sample in the coding unit is used for intra prediction, the prediction unit or the conversion unit has a dependency relation between prediction units or conversion units included in the coding unit.
  • the prediction unit included in the coding unit can be predicted by intra prediction or inter prediction.
  • Intra prediction is a method of predicting samples of a prediction unit using reference samples around the prediction unit.
  • Inter prediction is a method of obtaining a reference sample from a reference picture referred to by the current picture and predicting the samples of the prediction unit.
  • the encoding unit determination unit 120 can select a most efficient intra prediction method by applying a plurality of intra prediction methods to the prediction unit for intraprediction.
  • Intra prediction methods include a DC mode, a planar mode, a directional mode such as a vertical mode and a horizontal mode, and the like.
  • Intra prediction can be performed for each prediction unit when a reconstructed sample around a coding unit is used as a reference sample.
  • the prediction order of the prediction unit may be dependent on the conversion order of the conversion unit since the restoration of the reference sample in the coding unit should take precedence over the prediction. Therefore, when a reconstructed sample in a coding unit is used as a reference sample, only the intra prediction method for the conversion units corresponding to the prediction unit is determined for the prediction unit, and the actual intra prediction can be performed for each conversion unit.
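  • As a concrete illustration of one of the intra prediction modes listed above, the following is a simplified DC-mode sketch: every sample of the prediction unit is set to the mean of the neighbouring reconstructed reference samples. Actual codecs add reference-sample availability handling and filtering, which are omitted here.

```python
import numpy as np

# Simplified DC intra prediction: predict every sample as the mean of the
# reconstructed reference samples above and to the left of the block.
def intra_dc_predict(top_refs, left_refs, width, height):
    refs = np.concatenate([np.asarray(top_refs, dtype=np.int64),
                           np.asarray(left_refs, dtype=np.int64)])
    dc = int(round(refs.mean()))
    return np.full((height, width), dc, dtype=np.int64)

# Example: intra_dc_predict([120] * 8, [130] * 8, 8, 8) -> an 8x8 block filled with 125
```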
  • the coding unit determination unit 120 can select the most efficient inter prediction method by determining the optimal motion vector and the reference picture.
  • The coding unit determination unit 120 may determine a plurality of motion vector candidates from coding units spatially and temporally adjacent to the current coding unit for inter prediction, and determine the most efficient candidate among them as the motion vector.
  • Likewise, a plurality of reference picture candidates can be determined from coding units spatially and temporally adjacent to the current coding unit, and the most efficient reference picture among them can be determined.
  • a reference picture may be determined from predetermined reference picture lists for the current picture.
  • The most efficient motion vector among the plurality of motion vector candidates may be determined as a predictive motion vector, and the motion vector may be determined by correcting the predictive motion vector.
  • Inter prediction can be performed in parallel for each prediction unit in an encoding unit.
  • the coding unit determination unit 120 may obtain only information indicating a motion vector and a reference picture according to the skip mode to restore the coding unit. According to the skip mode, all the coding information including the residual signal is omitted except for the information indicating the motion vector and the reference picture. Since the residual signal is omitted, the skip mode can be used when the accuracy of the prediction is very high.
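  • A sketch of the signalling logic described in the preceding paragraphs: a predictor is chosen from the candidate motion vectors, and when no correction is transmitted (as in the skip mode, where the residual is also omitted) the predictor is used directly. The helper name and tuple representation are assumptions for illustration.

```python
# Illustrative motion vector reconstruction on the decoder side (hypothetical helper).
def reconstruct_motion_vector(mv_candidates, candidate_index, mvd=None):
    """mv_candidates: motion vectors taken from spatially/temporally adjacent blocks."""
    predictor = mv_candidates[candidate_index]      # predictive motion vector
    if mvd is None:                                 # no correction signalled (e.g. skip mode)
        return predictor
    # Otherwise the predictor is corrected with the transmitted difference.
    return (predictor[0] + mvd[0], predictor[1] + mvd[1])

# Skip-style signalling: reconstruct_motion_vector([(3, -1), (2, 0)], 1) -> (2, 0)
# Corrected signalling:  reconstruct_motion_vector([(3, -1), (2, 0)], 0, mvd=(1, 2)) -> (4, 1)
```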
  • the partitioning mode used may be limited depending on the prediction method for the prediction unit. For example, only the partition mode for a prediction unit of 2Nx2N, NxN size is applied to the intra prediction, whereas a partition mode for a prediction unit of 2Nx2N, 2NxN, Nx2N, NxN size can be applied to the inter prediction. In addition, only the partition mode for a prediction unit of 2Nx2N size can be applied to the skip mode of the inter prediction.
  • the partition mode allowed for each prediction method in the image coding apparatus 100 can be changed according to the coding efficiency.
  • the image encoding apparatus 100 may perform conversion based on a conversion unit included in a coding unit or an encoding unit.
  • the image encoding apparatus 100 may convert residual data, which is a difference value between an original value and a predicted value, of the pixels included in the encoding unit through a predetermined process.
  • the image encoding apparatus 100 may perform lossy compression through quantization and DCT / DST conversion of residual data.
  • the image encoding apparatus 100 can perform lossless compression of the residual data without quantization.
  • The image encoding apparatus 100 can determine the most efficient conversion unit for quantization and conversion. In a manner similar to the encoding unit according to the tree structure, the conversion unit in the encoding unit may also be recursively divided into conversion units of smaller size, and the residual data of the encoding unit is partitioned according to the conversion units following the tree structure, according to the conversion depth. Then, the image encoding apparatus 100 may generate conversion division information on the division of the encoding unit into conversion units according to the determined tree structure of the conversion units.
  • The image encoding apparatus 100 may set a conversion depth indicating the number of times the height and width of the encoding unit are divided to reach the conversion unit. For example, for a current encoding unit of size 2Nx2N, the conversion depth is 0 if the size of the conversion unit is 2Nx2N, 1 if the size of the conversion unit is NxN, and 2 if the size of the conversion unit is N/2xN/2. That is, the conversion unit according to the tree structure can be set according to the conversion depth.
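  • The conversion-depth convention just described can be sketched as follows, assuming the conversion unit is obtained by repeatedly halving the width and height of the encoding unit.

```python
# Conversion depth = number of times the encoding unit side is halved to reach the conversion unit.
def conversion_depth(cu_size, tu_size):
    depth = 0
    size = cu_size
    while size > tu_size:
        size //= 2
        depth += 1
    return depth

# For a 2Nx2N encoding unit of size 32: TU 32 -> depth 0, TU 16 -> depth 1, TU 8 -> depth 2.
```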
  • the coding unit determination unit 120 determines the most efficient prediction method for the current prediction unit among the plurality of intra prediction methods and inter prediction methods.
  • the encoding unit determination unit 120 determines a prediction unit determination method according to the encoding efficiency according to the prediction result.
  • the encoding unit determination unit 120 determines the conversion unit determination method according to the encoding efficiency according to the conversion result.
  • the coding efficiency of the coding unit is finally determined according to the most efficient prediction unit and the determination method of the conversion unit.
  • the encoding unit determination unit 120 determines the hierarchical structure of the maximum encoding unit according to the encoding efficiency of each depth encoding unit.
  • The coding unit determination unit 120 may measure the coding efficiency of the encoding unit for each depth and the efficiency of the prediction methods using rate-distortion optimization based on a Lagrangian multiplier.
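  • The rate-distortion cost minimized by such a Lagrangian formulation is commonly written as follows (a standard textbook form, not a formula stated in the disclosure), where D is the distortion between the original and reconstructed samples, R is the number of bits required to encode the candidate, and the candidate with the smallest J is selected:

```latex
J = D + \lambda \cdot R
```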
  • the encoding unit determination unit 120 may generate division information indicating whether or not the encoding unit is divided according to a hierarchical structure of the determined maximum encoding unit.
  • the encoding unit determination unit 120 may generate the partition mode information for determining a prediction unit and the conversion unit division information for determining a conversion unit for the divided encoding unit.
  • the encoding unit determination unit 120 can generate the division type information indicating the division method together with the division information when the number of division methods of the encoding unit is two or more.
  • the encoding unit determination unit 120 may generate information on the prediction method and the conversion method used in the prediction unit and the conversion unit.
  • the output unit 130 may output the information generated by the maximum encoding unit determination unit 110 and the encoding unit determination unit 120 in the form of a bit stream according to the hierarchical structure of the maximum encoding unit.
  • FIG. 1B shows a block diagram of an image decoding apparatus 150 based on a coding unit according to a tree structure according to an embodiment.
  • the image decoding apparatus 150 includes a receiving unit 160, an encoding information extracting unit 170, and a decoding unit 180.
  • the receiving unit 160 receives and parses the bitstream of the encoded video.
  • the encoding information extracting unit 170 extracts information necessary for decoding for each maximum encoding unit from the parsed bit stream and provides the extracted information to the decoding unit 180.
  • the encoding information extracting unit 170 can extract information on the maximum size of the encoding unit of the current picture from the header, sequence parameter set, or picture parameter set for the current picture.
  • the encoding information extracting unit 170 extracts the final depth and the division information for the encoding units according to the tree structure for each maximum encoding unit from the parsed bit stream.
  • the extracted final depth and division information are output to the decoding unit 180.
  • the decoding unit 180 can determine the tree structure of the maximum encoding unit by dividing the maximum encoding unit according to the extracted final depth and segmentation information.
  • the division information extracted by the encoding information extraction unit 170 is division information for a tree structure determined by the image encoding apparatus 100 to generate a minimum encoding error. Accordingly, the image decoding apparatus 150 can decode the image according to the encoding scheme that generates the minimum encoding error to recover the image.
  • the encoding information extracting unit 170 can extract the division information for a data unit such as a prediction unit and a conversion unit included in the encoding unit. For example, the encoding information extracting unit 170 may extract the most efficient partition mode information for a prediction unit. The encoding information extracting unit 170 can extract the conversion division information for the most efficient tree structure in the conversion unit.
  • the encoding information extracting unit 170 can acquire information on the most efficient prediction method for the prediction units divided from the encoding unit.
  • the encoding information extracting unit 170 can obtain information on the most efficient conversion method for the conversion units divided from the encoding unit.
  • the encoding information extracting unit 170 extracts information from a bit stream according to a method of constructing a bit stream at the output unit 130 of the image encoding apparatus 100.
  • the decoding unit 180 can divide the maximum encoding unit into the encoding units having the most efficient tree structure based on the division information.
  • the decoding unit 180 may divide the encoding unit into prediction units according to the information on the partition mode.
  • the decoding unit 180 may divide the encoding unit into units of conversion according to the conversion division information.
  • the decoding unit 180 may predict a prediction unit according to prediction method information.
  • The decoding unit 180 may inversely quantize and inversely transform the residual data corresponding to the difference between the original value and the predicted value of a pixel, according to the information on the conversion method of the conversion unit. Also, the decoding unit 180 may reconstruct the pixels of the encoding unit according to the prediction result of the prediction unit and the conversion result of the conversion unit.
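  • A compact sketch of these decoding steps follows: the residual is inversely quantized and inversely transformed, then added to the prediction to recover the samples. The uniform quantization step and the inverse_transform callable are simplifying assumptions.

```python
import numpy as np

# Simplified reconstruction: prediction + inverse-transformed, dequantized residual.
def reconstruct(prediction, quantized_coeffs, qstep, inverse_transform):
    dequantized = np.asarray(quantized_coeffs) * qstep   # inverse quantization (simplified)
    residual = inverse_transform(dequantized)            # e.g. an inverse DCT
    return np.asarray(prediction) + residual

# Usage sketch: reconstruct(pred_block, coeffs, qstep=8, inverse_transform=my_idct)
```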
  • FIG. 2 illustrates a process in which the image decoding apparatus 150 determines at least one encoding unit by dividing a current encoding unit according to an embodiment.
  • the image decoding apparatus 150 may determine the type of an encoding unit using block type information, and may determine a type of an encoding unit to be divided using the type information. That is, the division method of the coding unit indicated by the division type information can be determined according to which block type the block type information used by the video decoding apparatus 150 represents.
  • The image decoding apparatus 150 may use block type information indicating that the current encoding unit is a square type. For example, the image decoding apparatus 150 can determine, according to the division type information, whether to leave a square encoding unit undivided, divide it vertically, divide it horizontally, or divide it into four encoding units. Referring to FIG. 2, if the block type information of the current encoding unit 200 indicates a square shape, the decoding unit 180 may determine an encoding unit 210a having the same size as the current encoding unit 200 based on division type information indicating that it is not divided, or may determine the divided encoding units 210b, 210c, and 210d based on division type information indicating a predetermined division method.
  • the image decoding apparatus 150 determines two encoding units 210b, which are obtained by dividing the current encoding unit 200 in the vertical direction, based on the division type information indicating that the image is divided vertically according to an embodiment .
  • the image decoding apparatus 150 can determine two encoding units 210c in which the current encoding unit 200 is horizontally divided based on the division type information indicating that the image is divided in the horizontal direction.
  • the image decoding apparatus 150 can determine the four coding units 210d obtained by dividing the current coding unit 200 in the vertical direction and the horizontal direction based on the division type information indicating that the coding unit 200 is divided in the vertical direction and the horizontal direction.
  • the division type in which the square coding unit can be divided should not be limited to the above-mentioned form, but may include various forms in which the division type information can be represented.
  • the predetermined divisional form in which the square encoding unit is divided will be described in detail by way of various embodiments below.
  • FIG. 3 illustrates a process in which the image decoding apparatus 150 determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.
  • the image decoding apparatus 150 may use block type information indicating that the current encoding unit is a non-square format.
  • The image decoding apparatus 150 can determine, according to the division type information, whether to leave the current non-square encoding unit undivided or divide it by a predetermined method. Referring to FIG. 3, if the block type information of the current encoding unit 300 or 350 indicates a non-square shape, the image decoding apparatus 150 may determine an encoding unit 310 or 360 having the same size as the current encoding unit based on division type information indicating that it is not divided, or may determine the divided encoding units 320a and 320b, 330a, 330b and 330c, 370a and 370b, or 380a, 380b and 380c based on division type information indicating a predetermined division method.
  • the predetermined division method in which the non-square coding unit is divided will be described in detail through various embodiments.
  • the image decoding apparatus 150 may determine the type in which the coding unit is divided using the division type information.
  • The division type information may indicate the number of at least one encoding unit generated by dividing the encoding unit. Referring to FIG. 3, if the division type information indicates that the current encoding unit 300 or 350 is divided into two encoding units, the image decoding apparatus 150 may divide the current encoding unit 300 or 350 based on the division type information to determine the two encoding units 320a and 320b, or 370a and 370b, included in the current encoding unit.
  • the non-square current coding unit 300 or 350 can be divided in consideration of the position of the long side.
  • When the image decoding apparatus 150 divides the current encoding unit 300 or 350 based on the division type information, it may divide the current encoding unit in the direction that divides its long side, in consideration of the shape of the current encoding unit 300 or 350, so that a plurality of encoding units can be determined.
  • the image decoding apparatus 150 may determine an odd number of encoding units included in the current encoding unit 300 or 350.
  • For example, when the division type information indicates that the current encoding unit 300 or 350 is divided into an odd number of blocks, the image decoding apparatus 150 may divide the current encoding unit 300 or 350 into three encoding units 330a, 330b, and 330c, or 380a, 380b, and 380c.
  • the image decoding apparatus 150 may determine an odd number of encoding units included in the current encoding unit 300 or 350, and the sizes of the determined encoding units may not be the same.
  • the size of the predetermined encoding unit 330b or 380b among the determined odd number of encoding units 330a, 330b, 330c, 380a, 380b, and 380c is different from the size of the other encoding units 330a, 330c, 380a, and 380c . That is, the encoding unit that can be determined by dividing the current encoding unit 300 or 350 may have a plurality of types of sizes.
  • The image decoding apparatus 150 can determine an odd number of encoding units included in the current encoding unit 300 or 350, and may place a restriction on at least one encoding unit among the odd number of encoding units generated by the division.
  • Referring to FIG. 3, the image decoding apparatus 150 may decode the encoding unit 330b or 380b located in the middle among the three encoding units 330a, 330b, and 330c, or 380a, 380b, and 380c, generated by dividing the current encoding unit 300 or 350, in a manner different from the other encoding units 330a and 330c, or 380a and 380c.
  • For example, the image decoding apparatus 150 may restrict the encoding unit 330b or 380b located in the middle so that it is no longer divided, or so that it is divided only a predetermined number of times, unlike the other encoding units 330a and 330c, or 380a and 380c.
  • FIG. 4 illustrates a process in which the image decoding apparatus 150 divides an encoding unit based on at least one of block type information and division type information according to an embodiment.
  • The image decoding apparatus 150 may determine, based on at least one of the block type information and the division type information, that the square first encoding unit 400 is not divided or is divided into encoding units. According to one embodiment, when the division type information indicates that the first encoding unit 400 is divided in the horizontal direction, the image decoding apparatus 150 may divide the first encoding unit 400 in the horizontal direction to determine the second encoding unit 410.
  • the first encoding unit, the second encoding unit, and the third encoding unit used according to an embodiment are terms used to understand the relation before and after the division between encoding units.
  • For example, if the first encoding unit is divided, the second encoding unit can be determined, and if the second encoding unit is divided, the third encoding unit can be determined.
  • the relationship between the first coding unit, the second coding unit and the third coding unit used can be understood to be in accordance with the above-mentioned characteristic.
  • the image decoding apparatus 150 may determine that the determined second encoding unit 410 is not divided or divided into encoding units based on at least one of the block type information and the division type information.
  • The image decoding apparatus 150 may divide the non-square second encoding unit 410, determined by dividing the first encoding unit 400, into at least one third encoding unit (for example, 420a, 420b, 420c, 420d), or may not divide the second encoding unit 410, based on at least one of the block type information and the division type information.
  • The image decoding apparatus 150 may obtain at least one of the block type information and the division type information, and may divide the first encoding unit 400 based on the obtained information into a plurality of second encoding units of various types (for example, 410).
  • The second encoding unit 410 may in turn be divided into third encoding units (for example, 420a, 420b, 420c, 420d) based on at least one of the block type information and the division type information of the second encoding unit 410, according to the manner in which the first encoding unit 400 was divided. That is, an encoding unit can be recursively divided based on at least one of the division type information and the block type information associated with each encoding unit. A method that can be used for the recursive division of an encoding unit will be described later through various embodiments.
  • The image decoding apparatus 150 may divide each of the third encoding units 420a, 420b, 420c, and 420d into encoding units based on at least one of the block type information and the division type information, or may determine that the second encoding unit 410 is not divided.
  • the image decoding apparatus 150 may divide the second encoding unit 410 in the non-square form into an odd number of third encoding units 420b, 420c and 420d according to an embodiment.
  • the image decoding apparatus 150 may set a predetermined restriction on a predetermined third encoding unit among odd numbered third encoding units 420b, 420c, and 420d.
  • For example, the image decoding apparatus 150 may restrict the encoding unit 420c located in the middle among the odd number of third encoding units 420b, 420c, and 420d so that it is no longer divided or is divided only a set number of times.
  • Referring to FIG. 4, the image decoding apparatus 150 may restrict the encoding unit 420c located in the middle among the odd number of third encoding units 420b, 420c, and 420d included in the non-square second encoding unit 410 so that it is no longer divided, is divided in a predetermined division form (for example, divided into four encoding units, or divided into a form corresponding to the form in which the second encoding unit 410 is divided), or is divided only a predetermined number of times (for example, only n times, n > 0).
  • However, the above restriction on the encoding unit 420c located in the middle is merely an example and should not be construed as being limited to the above embodiments; it should be interpreted as including various restrictions under which the encoding unit 420c located in the middle can be decoded differently from the other encoding units 420b and 420d.
  • the image decoding apparatus 150 may obtain at least one of the block type information and the division type information used for dividing the current encoding unit at a predetermined position in the current encoding unit according to an embodiment.
  • FIG. 13 illustrates a method for an image decoding apparatus 150 to determine a predetermined encoding unit among odd number of encoding units according to an embodiment.
  • At least one of the block type information and the division type information of the current encoding unit 1300 may be obtained from a sample at a predetermined position (for example, the sample 1340 located in the middle) among a plurality of samples included in the current encoding unit 1300.
  • However, the predetermined position in the current encoding unit 1300 from which at least one of the block type information and the division type information can be obtained should not be limited to the middle position shown in FIG. 13.
  • the image decoding apparatus 150 may determine that the current encoding unit is not divided or divided into the encoding units of various types and sizes by acquiring at least one of the block type information and the division type information obtained from the predetermined position.
  • the image decoding apparatus 150 may select one of the encoding units.
  • There may be various methods for selecting one of the plurality of encoding units, and these methods will be described later through various embodiments.
  • the image decoding apparatus 150 may divide the current encoding unit into a plurality of encoding units and determine a predetermined encoding unit.
  • FIG. 5 illustrates a method for an image decoding apparatus 150 to determine an encoding unit of a predetermined position among odd number of encoding units according to an embodiment.
  • the image decoding apparatus 150 may use information indicating positions of odd-numbered encoding units in order to determine an encoding unit located in the middle among odd-numbered encoding units. Referring to FIG. 5, the image decoding apparatus 150 may determine an odd number of encoding units 520a, 520b, and 520c by dividing the current encoding unit 500. FIG. The image decoding apparatus 150 can determine the center encoding unit 520b by using information on the positions of the odd number of encoding units 520a, 520b and 520c.
  • The image decoding apparatus 150 may determine the positions of the encoding units 520a, 520b, and 520c based on information indicating the positions of predetermined samples included in the encoding units 520a, 520b, and 520c, and thereby determine the encoding unit 520b located in the middle.
  • Specifically, the image decoding apparatus 150 may determine the positions of the encoding units 520a, 520b, and 520c based on information indicating the positions of the upper-left samples 530a, 530b, and 530c of the encoding units 520a, 520b, and 520c, and thereby determine the encoding unit 520b located in the middle.
  • Information indicating the positions of the upper-left samples 530a, 530b, and 530c included in the encoding units 520a, 520b, and 520c may be information about the positions or coordinates of the encoding units 520a, 520b, and 520c in the picture.
  • Information indicating the positions of the upper-left samples 530a, 530b, and 530c included in the encoding units 520a, 520b, and 520c according to one embodiment may be information about the widths or heights of the encoding units 520a, 520b, and 520c included in the current encoding unit 500, and the widths or heights may correspond to information indicating the differences between the coordinates of the encoding units 520a, 520b, and 520c in the picture.
  • That is, the image decoding apparatus 150 may determine the encoding unit 520b located in the middle by directly using the information about the positions or coordinates of the encoding units 520a, 520b, and 520c in the picture, or by using the information about the widths or heights of the encoding units corresponding to the differences between the coordinates.
  • For example, the information indicating the position of the upper-left sample 530a of the upper encoding unit 520a may indicate the coordinates (xa, ya), the information indicating the position of the upper-left sample 530b of the middle encoding unit 520b may indicate the coordinates (xb, yb), and the information indicating the position of the upper-left sample 530c of the lower encoding unit 520c may indicate the coordinates (xc, yc).
  • the image decoding apparatus 150 can determine the center encoding unit 520b using the coordinates of the upper left samples 530a, 530b, and 530c included in the encoding units 520a, 520b, and 520c.
  • That is, the image decoding apparatus 150 may determine the encoding unit 520b, which includes the sample 530b whose coordinates are (xb, yb) and which is located in the middle, as the encoding unit located in the middle among the encoding units 520a, 520b, and 520c determined by dividing the current encoding unit 500.
  • However, the coordinates indicating the positions of the upper-left samples 530a, 530b, and 530c may indicate absolute positions in the picture, or the coordinates (dxb, dyb) indicating the relative position of the upper-left sample 530b of the middle encoding unit 520b with respect to the upper-left sample 530a of the upper encoding unit 520a, and the coordinates (dxc, dyc) indicating the relative position of the upper-left sample 530c of the lower encoding unit 520c with respect to the same reference, may also be used.
  • Also, the method of determining the encoding unit at a predetermined position by using the coordinates of a sample included in the encoding unit as information indicating the position of the sample should not be limited to the method described above, and should be interpreted as including various arithmetic methods capable of using the coordinates of the sample.
  • The image decoding apparatus 150 may divide the current encoding unit 500 into a plurality of encoding units 520a, 520b, and 520c, and may select an encoding unit at a predetermined position among the encoding units 520a, 520b, and 520c according to a predetermined criterion.
  • For example, the image decoding apparatus 150 can select the encoding unit 520b whose size is different from that of the others among the encoding units 520a, 520b, and 520c.
  • The image decoding apparatus 150 may use the coordinates (xa, ya), which indicate the position of the upper-left sample 530a of the upper encoding unit 520a, the coordinates (xb, yb), which indicate the position of the upper-left sample 530b of the middle encoding unit 520b, and the coordinates (xc, yc), which indicate the position of the upper-left sample 530c of the lower encoding unit 520c, to determine the widths or heights of the encoding units 520a, 520b, and 520c, respectively.
  • The image decoding apparatus 150 can determine the sizes of the encoding units 520a, 520b, and 520c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the encoding units 520a, 520b, and 520c.
  • According to an embodiment, the image decoding apparatus 150 can determine the width of the upper encoding unit 520a as the width of the current encoding unit 500 and its height as yb-ya. The image decoding apparatus 150 can determine the width of the middle encoding unit 520b as the width of the current encoding unit 500 and its height as yc-yb. The image decoding apparatus 150 may determine the width or height of the lower encoding unit using the width or height of the current encoding unit and the widths and heights of the upper encoding unit 520a and the middle encoding unit 520b. The image decoding apparatus 150 may determine an encoding unit having a size different from the other encoding units based on the widths and heights of the determined encoding units 520a, 520b, and 520c.
  • The image decoding apparatus 150 may determine the encoding unit 520b, which has a size different from that of the upper encoding unit 520a and the lower encoding unit 520c, as the encoding unit at the predetermined position.
  • However, the above-described process in which the image decoding apparatus 150 determines an encoding unit having a size different from that of the other encoding units is merely one embodiment of determining an encoding unit at a predetermined position using the sizes of encoding units determined based on sample coordinates.
  • Various processes may be used for determining the encoding unit at a predetermined position by comparing the sizes of the encoding units determined according to predetermined sample coordinates.
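  • A sketch of the size-based selection described above: the heights are derived from the upper-left sample coordinates and the size of the current encoding unit, and the unit whose size differs from the others is taken as the middle unit. The helper assumes the three units are stacked vertically, as in FIG. 5, and is for illustration only.

```python
# Determine the middle encoding unit among three vertically stacked units from the
# coordinates of their upper-left samples (illustrative; vertical stacking assumed).
def find_center_unit(coords, cu_width, cu_height):
    """coords: [(xa, ya), (xb, yb), (xc, yc)] upper-left samples, top to bottom."""
    (xa, ya), (xb, yb), (xc, yc) = coords
    heights = [yb - ya,                 # height of the upper unit
               yc - yb,                 # height of the middle unit
               cu_height - (yc - ya)]   # height of the lower unit
    sizes = [(cu_width, h) for h in heights]
    # The unit whose size differs from the other two is the middle unit of a 1:2:1 split.
    for i, size in enumerate(sizes):
        if sizes.count(size) == 1:
            return i, size
    return 1, sizes[1]                  # all sizes equal: fall back to the middle index

# Example: a 32x32 unit divided 1:2:1 vertically
# find_center_unit([(0, 0), (0, 8), (0, 24)], 32, 32) -> (1, (32, 16))
```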
  • the position of the sample to be considered for determining the position of the coding unit should not be interpreted as being limited to the left upper end, and information about the position of any sample included in the coding unit can be interpreted as being available.
  • the image decoding apparatus 150 may select a coding unit at a predetermined position among the odd number of coding units determined by dividing the current coding unit, in consideration of the shape of the current coding unit. For example, if the current coding unit has a non-square shape whose width is longer than its height, the image decoding apparatus 150 may determine the coding unit at the predetermined position along the horizontal direction. That is, the image decoding apparatus 150 may determine one of the coding units located at different positions in the horizontal direction and put a restriction on that coding unit. If the current coding unit has a non-square shape whose height is longer than its width, the image decoding apparatus 150 may determine the coding unit at the predetermined position along the vertical direction. That is, the image decoding apparatus 150 may determine one of the coding units located at different positions in the vertical direction and put a restriction on that coding unit.
  • the image decoding apparatus 150 may use information indicating positions of even-numbered encoding units in order to determine an encoding unit of a predetermined position among the even-numbered encoding units.
  • the image decoding apparatus 150 can determine an even number of encoding units by dividing the current encoding unit and determine a encoding unit at a predetermined position by using information on the positions of the even number of encoding units.
  • a concrete procedure for this is omitted because it may be a process corresponding to a process of determining a coding unit of a predetermined position (for example, the middle position) among the above-mentioned odd number of coding units.
  • in order to determine a coding unit at a predetermined position among the plurality of coding units determined by dividing the current coding unit, the image decoding apparatus 150 may use at least one of the block type information and the division type information stored in a sample included in the middle coding unit during the division process.
  • the image decoding apparatus 150 may divide the current coding unit 500 into the plurality of coding units 520a, 520b, and 520c based on at least one of the block type information and the division type information, and may determine the coding unit 520b located in the middle of the plurality of coding units 520a, 520b, and 520c. Furthermore, the image decoding apparatus 150 may determine the coding unit 520b located in the middle in consideration of the position from which at least one of the block type information and the division type information is obtained.
  • at least one of the block type information and the division type information of the current coding unit 500 may be obtained from the sample 540 located in the middle of the current coding unit 500, and when the current coding unit 500 is divided into the plurality of coding units 520a, 520b, and 520c based on at least one of the block type information and the division type information, the coding unit 520b including the sample 540 may be determined as the coding unit located in the middle.
  • the information used for determining the coding unit located in the middle should not be limited to at least one of the block type information and the division type information, and various kinds of information may be used in the process of determining the coding unit located in the middle .
  • predetermined information for identifying a coding unit at a predetermined position may be obtained from a predetermined sample included in a coding unit to be determined.
  • when the current coding unit 500 is divided into the plurality of coding units 520a, 520b, and 520c, the image decoding apparatus 150 may use at least one of the block type information and the division type information obtained from a sample at a predetermined position in the current coding unit 500 (e.g., a sample located in the middle of the current coding unit 500) in order to determine the coding unit located in the middle among the plurality of coding units.
  • that is, the image decoding apparatus 150 may determine the sample at the predetermined position in consideration of the block shape of the current coding unit 500, and may determine, among the plurality of coding units 520a, 520b, and 520c determined by dividing the current coding unit 500, the coding unit 520b including the sample from which predetermined information (e.g., at least one of the block type information and the division type information) can be obtained, and put a predetermined restriction on it.
  • the image decoding apparatus 150 may determine the sample 540 located in the middle of the current coding unit 500 as the sample from which the predetermined information can be obtained, and may put a predetermined restriction on the coding unit 520b including the sample 540 in the decoding process.
  • however, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the above-described position, and may be interpreted as the position of an arbitrary sample included in the coding unit 520b to be determined for the restriction.
  • the position of a sample from which predetermined information can be obtained may be determined according to the type of the current encoding unit 500 according to an embodiment.
  • the block type information may indicate whether the shape of the current coding unit is square or non-square, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape.
  • the image decoding apparatus 150 may determine a sample located on a boundary that divides at least one of the width and the height of the current coding unit in half, by using at least one of the information on the width and the information on the height of the current coding unit, as the sample from which the predetermined information can be obtained.
  • when the block type information of the current coding unit indicates a non-square shape, the image decoding apparatus 150 may determine one of the samples adjacent to the boundary that divides the long side of the current coding unit in half as the sample from which the predetermined information can be obtained.
  • when dividing the current coding unit into a plurality of coding units, the image decoding apparatus 150 may use at least one of the block type information and the division type information in order to determine a coding unit at a predetermined position among the plurality of coding units. According to an embodiment, the image decoding apparatus 150 may obtain at least one of the block type information and the division type information from a sample at a predetermined position included in a coding unit, and may divide the plurality of coding units generated by dividing the current coding unit by using at least one of the division type information and the block type information obtained from the sample at the predetermined position included in each of the plurality of coding units.
  • the coding unit can be recursively divided using at least one of the block type information and the division type information obtained in the sample at the predetermined position included in each of the coding units. Since the recursive division process of the encoding unit has been described with reference to FIG. 4, a detailed description will be omitted.
  • the image decoding apparatus 150 may determine at least one coding unit by dividing the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (e.g., the current coding unit).
  • FIG. 6 illustrates a sequence in which a plurality of encoding units are processed when the image decoding apparatus 150 determines a plurality of encoding units by dividing the current encoding unit according to an exemplary embodiment.
  • the image decoding apparatus 150 may determine the second coding units 610a and 610b by dividing the first coding unit 600 in the vertical direction, determine the second coding units 630a and 630b by dividing the first coding unit 600 in the horizontal direction, or determine the second coding units 650a, 650b, 650c, and 650d by dividing the first coding unit 600 in both the vertical and horizontal directions, according to the block type information and the division type information.
  • the image decoding apparatus 150 may determine that the second coding units 610a and 610b, determined by dividing the first coding unit 600 in the vertical direction, are processed in the horizontal direction order 610c.
  • the image decoding apparatus 150 can determine the processing order of the second encoding units 630a and 630b determined by dividing the first encoding unit 600 in the horizontal direction as the vertical direction 630c.
  • the image decoding apparatus 150 may determine that the second coding units 650a, 650b, 650c, and 650d, determined by dividing the first coding unit 600 in the vertical and horizontal directions, are processed according to a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (e.g., a raster scan order or a Z-scan order 650e), as illustrated in the sketch below.
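  • A minimal sketch of the three processing orders mentioned above (horizontal order for a vertical split, vertical order for a horizontal split, and a Z-scan for a quad split); the function and split names are illustrative assumptions, not syntax defined by this disclosure.

```python
def processing_order(split, units):
    """Return sub-units in the order they would be processed.
    split: 'VER' (side-by-side units, processed left to right),
           'HOR' (stacked units, processed top to bottom),
           'QUAD' (four units, processed in Z-scan order).
    units: list of (x, y) upper-left positions of the sub-units."""
    if split == 'VER':                      # e.g. 610a, 610b -> order 610c
        return sorted(units, key=lambda p: p[0])
    if split == 'HOR':                      # e.g. 630a, 630b -> order 630c
        return sorted(units, key=lambda p: p[1])
    if split == 'QUAD':                     # e.g. 650a..650d -> Z-scan 650e
        return sorted(units, key=lambda p: (p[1], p[0]))
    raise ValueError('unknown split type')

quad = [(8, 8), (0, 0), (8, 0), (0, 8)]
print(processing_order('QUAD', quad))  # [(0, 0), (8, 0), (0, 8), (8, 8)]
```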
  • the image decoding apparatus 150 may recursively divide encoding units. 6, the image decoding apparatus 150 may determine a plurality of encoding units 610a, 610b, 630a, 630b, 650a, 650b, 650c, and 650d by dividing the first encoding unit 600, The determined plurality of encoding units 610a, 610b, 630a, 630b, 650a, 650b, 650c and 650d may be recursively divided.
  • the method of dividing the plurality of encoding units 610a, 610b, 630a, 630b, 650a, 650b, 650c, and 650d may be a method corresponding to the method of dividing the first encoding unit 600.
  • the plurality of coding units 610a, 610b, 630a, 630b, 650a, 650b, 650c, and 650d may each be independently divided into a plurality of coding units. Referring to FIG. 6, the image decoding apparatus 150 may determine the second coding units 610a and 610b by dividing the first coding unit 600 in the vertical direction, and may additionally determine whether to divide or not to divide each of the second coding units 610a and 610b independently.
  • the image decoding apparatus 150 may divide the left second coding unit 610a in the horizontal direction into the third coding units 620a and 620b, and may not divide the right second coding unit 610b.
  • the processing order of the encoding units may be determined based on the division process of the encoding units.
  • the processing order of the divided coding units can be determined based on the processing order of the coding units immediately before being divided.
  • the image decoding apparatus 150 can determine the order in which the third coding units 620a and 620b determined by dividing the second coding unit 610a on the left side are processed independently of the second coding unit 610b on the right side.
  • since the third coding units 620a and 620b are determined by dividing the left second coding unit 610a in the horizontal direction, the third coding units 620a and 620b may be processed in the vertical direction order 620c.
  • the right second coding unit 610b may be processed after the third coding units 620a and 620b included in the left second coding unit 610a are processed in the vertical direction order 620c.
  • the above description is intended to explain the process by which the processing order of coding units is determined according to the coding units before division; it should not be construed as being limited to the above-described embodiments, and should be interpreted as including various methods in which coding units determined by division in various forms can be independently processed in a predetermined order.
  • FIG. 7 illustrates a process of determining that the current encoding unit is divided into odd number of encoding units when the image decoding apparatus 150 can not process the encoding units in a predetermined order according to an embodiment.
  • the image decoding apparatus 150 may determine that the current encoding unit is divided into odd number of encoding units based on the obtained block type information and the division type information.
  • the square first coding unit 700 may be divided into the non-square second coding units 710a and 710b, and each of the second coding units 710a and 710b may be independently divided into the third coding units 720a, 720b, 720c, 720d, and 720e.
  • the image decoding apparatus 150 may determine the plurality of third coding units 720a and 720b by dividing the left coding unit 710a among the second coding units in the horizontal direction, and may divide the right coding unit 710b into the odd number of third coding units 720c, 720d, and 720e.
  • the image decoding apparatus 150 may determine whether the third coding units 720a, 720b, 720c, 720d, and 720e can be processed in a predetermined order, and thereby determine whether there is a coding unit divided into an odd number of coding units. Referring to FIG. 7, the image decoding apparatus 150 may determine the third coding units 720a, 720b, 720c, 720d, and 720e by recursively dividing the first coding unit 700.
  • based on at least one of the block type information and the division type information, the image decoding apparatus 150 may determine whether the first coding unit 700, the second coding units 710a and 710b, or the third coding units 720a, 720b, 720c, 720d, and 720e are divided into an odd number of coding units among the divided shapes. For example, the coding unit located on the right among the second coding units 710a and 710b may be divided into the odd number of third coding units 720c, 720d, and 720e.
  • the order in which the plurality of coding units included in the first coding unit 700 are processed may be a predetermined order (e.g., a Z-scan order 730), and the image decoding apparatus 150 may determine whether the third coding units 720c, 720d, and 720e, determined by dividing the right second coding unit 710b into an odd number, satisfy the condition for being processed according to the predetermined order.
  • the image decoding apparatus 150 may determine whether the third coding units 720a, 720b, 720c, 720d, and 720e included in the first coding unit 700 satisfy the condition for being processed in the predetermined order, and the condition is related to whether at least one of the width and the height of the second coding units 710a and 710b is divided in half along the boundaries of the third coding units 720a, 720b, 720c, 720d, and 720e.
  • the third coding units 720a and 720b, determined by dividing the height of the non-square left second coding unit 710a in half, satisfy the condition; however, since the boundaries of the third coding units 720c, 720d, and 720e, determined by dividing the right second coding unit 710b into three coding units, do not divide the width or the height of the right second coding unit 710b in half, the third coding units 720c, 720d, and 720e may be determined as not satisfying the condition. In the case of such dissatisfaction of the condition, the image decoding apparatus 150 may determine that the scan order is disconnected, and may determine, based on the determination result, that the right second coding unit 710b is divided into an odd number of coding units.
  • according to an embodiment, when a coding unit is divided into an odd number of coding units, a predetermined restriction may be placed on the coding unit at a predetermined position among the divided coding units; since this embodiment has been described above, a detailed description thereof will be omitted.
  • FIG. 8 illustrates a process in which an image decoding apparatus 150 determines at least one encoding unit by dividing a first encoding unit 800 according to an embodiment.
  • the image decoding apparatus 150 may divide the first encoding unit 800 based on at least one of the block type information and the division type information acquired through the receiver 160.
  • the square first coding unit 800 may be divided into four square coding units or into a plurality of non-square coding units. For example, referring to FIG. 8, when the block type information indicates that the first coding unit 800 is square and the division type information indicates dividing into non-square coding units, the image decoding apparatus 150 may divide the first coding unit 800 into a plurality of non-square coding units. More specifically, when the division type information indicates that an odd number of coding units are determined by dividing the first coding unit 800 horizontally or vertically, the image decoding apparatus 150 may divide the square first coding unit 800 into the odd number of second coding units 810a, 810b, and 810c divided in the vertical direction, or into the second coding units 820a, 820b, and 820c divided in the horizontal direction.
  • the image decoding apparatus 150 may determine whether the second coding units 810a, 810b, 810c, 820a, 820b, and 820c included in the first coding unit 800 satisfy the condition for being processed in a predetermined order, and the condition is related to whether at least one of the width and the height of the first coding unit 800 is divided in half along the boundaries of the second coding units 810a, 810b, 810c, 820a, 820b, and 820c.
  • referring to FIG. 8, since the boundaries of the second coding units 810a, 810b, and 810c, determined by dividing the square first coding unit 800 in the vertical direction, do not divide the width of the first coding unit 800 in half, the first coding unit 800 may be determined as not satisfying the condition for being processed in the predetermined order. Also, since the boundaries of the second coding units 820a, 820b, and 820c, determined by dividing the first coding unit 800 in the horizontal direction, do not divide the height of the first coding unit 800 in half, the first coding unit 800 may be determined as not satisfying the condition for being processed in the predetermined order.
  • in the case of such dissatisfaction of the condition, the image decoding apparatus 150 may determine that the scan order is disconnected and, based on the determination result, determine that the first coding unit 800 is divided into an odd number of coding units. According to an embodiment, when a coding unit is divided into an odd number of coding units, a predetermined restriction may be placed on the coding unit at a predetermined position among the divided coding units; since this embodiment has been described above, a detailed description thereof will be omitted.
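  • The condition discussed for FIG. 7 and FIG. 8 can be summarized as a boundary check: a split is processable in the predetermined order only if at least one internal boundary of the sub-units halves the relevant side of the parent. The sketch below is an assumed simplification of that test, not the exact decision logic of the apparatus.

```python
def satisfies_processing_condition(parent_w, parent_h, boundaries, axis):
    """boundaries: internal split positions measured from the parent's origin
    along the given axis ('VER' -> x positions, 'HOR' -> y positions).
    The condition holds when at least one boundary halves the parent side."""
    side = parent_w if axis == 'VER' else parent_h
    return any(b * 2 == side for b in boundaries)

# A square 16x16 unit split into three vertical strips (1:2:1) -> odd split
print(satisfies_processing_condition(16, 16, [4, 12], 'VER'))   # False
# A binary vertical split into two 8-wide halves satisfies the condition
print(satisfies_processing_condition(16, 16, [8], 'VER'))       # True
```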
  • the image decoding apparatus 150 may divide the first encoding unit and determine various types of encoding units.
  • the image decoding apparatus 150 may divide the first coding unit 800 of a square shape and the first coding unit 830 or 850 of a non-square shape into various types of coding units .
  • the image decoding apparatus 150 may divide the square first coding unit 900 into the non-square second coding units 910a, 910b, 920a, and 920b based on at least one of the block type information and the division type information acquired through the receiver 160.
  • the second coding units 910a, 910b, 920a, and 920b may be independently divided. Accordingly, the image decoding apparatus 150 may determine whether to divide each of the second coding units 910a, 910b, 920a, and 920b into a plurality of coding units or not to divide them, based on at least one of the block type information and the division type information related to each of the second coding units 910a, 910b, 920a, and 920b.
  • the image decoding apparatus 150 may determine the third coding units 912a and 912b by dividing, in the horizontal direction, the non-square left second coding unit 910a that was determined by dividing the first coding unit 900 in the vertical direction. However, when the left second coding unit 910a is divided in the horizontal direction, the image decoding apparatus 150 may restrict the right second coding unit 910b so that it cannot be divided in the horizontal direction, that is, in the same direction in which the left second coding unit 910a was divided.
  • if the right second coding unit 910b is nevertheless divided in the same direction and the third coding units 914a and 914b are determined, the third coding units 912a, 912b, 914a, and 914b are determined by independently dividing the left second coding unit 910a and the right second coding unit 910b in the horizontal direction. However, this is the same result as the image decoding apparatus 150 dividing the first coding unit 900 into the four square second coding units 930a, 930b, 930c, and 930d based on at least one of the block type information and the division type information, and this may be inefficient in terms of image decoding.
  • likewise, the image decoding apparatus 150 may determine the third coding units 922a, 922b, 924a, and 924b by dividing, in the vertical direction, the non-square second coding unit 920a or 920b that was determined by dividing the first coding unit 900 in the horizontal direction. However, when one of the second coding units (e.g., the upper second coding unit 920a) is divided in the vertical direction, the image decoding apparatus 150 may restrict the other second coding unit (e.g., the lower second coding unit 920b) so that it cannot be divided in the vertical direction, that is, in the same direction in which the upper second coding unit 920a was divided.
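  • A sketch of the restriction just described, under the assumption that the decoder simply disallows, for the second sibling, the split direction that would reproduce the four-way square split (names and values are illustrative):

```python
def allowed_second_split(parent_split, first_sibling_split):
    """parent_split: direction in which the first coding unit was divided
    ('VER' or 'HOR'). first_sibling_split: direction in which the first of the
    two resulting second coding units was divided, or None.
    Returns the split options still allowed for the other second coding unit."""
    options = {'VER', 'HOR', None}
    # Splitting both siblings perpendicular to the parent split would give the
    # same result as one four-way square split, so that direction is removed.
    forbidden = 'HOR' if parent_split == 'VER' else 'VER'
    if first_sibling_split == forbidden:
        options.discard(forbidden)
    return options

# FIG. 9: unit 900 split vertically, left unit 910a split horizontally
print(allowed_second_split('VER', 'HOR'))  # {'VER', None}: 910b may not split horizontally
```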
  • FIG. 10 illustrates a process in which the image decoding apparatus 150 divides a square coding unit when the division type information cannot indicate dividing into four square coding units, according to an embodiment.
  • the image decoding apparatus 150 may determine the second coding units 1010a, 1010b, 1020a, and 1020b by dividing the first coding unit 1000 based on at least one of the block type information and the division type information.
  • the division type information may include information on various types in which the coding unit can be divided, but information on various types may not include information for dividing into four square units of coding units.
  • in this case, the image decoding apparatus 150 cannot divide the square first coding unit 1000 into the four square second coding units 1030a, 1030b, 1030c, and 1030d.
  • the image decoding apparatus 150 can determine the second encoding units 1010a, 1010b, 1020a, and 1020b in the non-square form based on the division type information.
  • the image decoding apparatus 150 may independently divide the non-square second encoding units 1010a, 1010b, 1020a, and 1020b, respectively.
  • each of the second coding units 1010a, 1010b, 1020a, 1020b, and the like may be divided in a predetermined order through a recursive method, and this may correspond to the method in which the first coding unit 1000 is divided based on at least one of the block type information and the division type information.
  • the image decoding apparatus 150 may determine the square third coding units 1012a and 1012b by dividing the left second coding unit 1010a in the horizontal direction, and may determine the square third coding units 1014a and 1014b by dividing the right second coding unit 1010b in the horizontal direction. Furthermore, the image decoding apparatus 150 may determine the square third coding units 1016a, 1016b, 1016c, and 1016d by dividing both the left second coding unit 1010a and the right second coding unit 1010b in the horizontal direction. In this case, coding units are determined in the same manner as when the first coding unit 1000 is divided into the four square second coding units 1030a, 1030b, 1030c, and 1030d.
  • likewise, the image decoding apparatus 150 may determine the square third coding units 1022a and 1022b by dividing the upper second coding unit 1020a in the vertical direction, and may determine the square third coding units 1024a and 1024b by dividing the lower second coding unit 1020b in the vertical direction. Furthermore, the image decoding apparatus 150 may determine the square third coding units 1022a, 1022b, 1024a, and 1024b by dividing both the upper second coding unit 1020a and the lower second coding unit 1020b in the vertical direction. In this case, coding units are determined in the same manner as when the first coding unit 1000 is divided into the four square second coding units 1030a, 1030b, 1030c, and 1030d.
  • FIG. 11 illustrates that the processing order among a plurality of coding units may be changed according to the division process of the coding unit according to an embodiment.
  • the image decoding apparatus 150 may divide the first encoding unit 1100 based on the block type information and the division type information.
  • when the block type information indicates a square shape and the division type information indicates dividing the first coding unit 1100 in at least one of the horizontal and vertical directions, the image decoding apparatus 150 may divide the first coding unit 1100 to determine second coding units (e.g., 1110a, 1110b, 1120a, 1120b, 1130a, 1130b, 1130c, 1130d, etc.). Referring to FIG. 11, the non-square second coding units 1110a, 1110b, 1120a, and 1120b, which are determined by dividing the first coding unit 1100 only in the horizontal direction or only in the vertical direction, may each be independently divided based on the block type information and the division type information for each of them.
  • the image decoding apparatus 150 may determine the third coding units 1116a, 1116b, 1116c, and 1116d by dividing, in the horizontal direction, each of the second coding units 1110a and 1110b generated by dividing the first coding unit 1100 in the vertical direction, and may determine the third coding units 1126a, 1126b, 1126c, and 1126d by dividing, in the vertical direction, each of the second coding units 1120a and 1120b generated by dividing the first coding unit 1100 in the horizontal direction. Since the process of dividing the second coding units 1110a, 1110b, 1120a, and 1120b has been described in detail with reference to FIG. 9, a detailed description thereof will be omitted.
  • the image decoding apparatus 150 may process an encoding unit in a predetermined order.
  • the features of the processing of the encoding unit according to the predetermined order have been described above with reference to FIG. 6, and a detailed description thereof will be omitted.
  • the image decoding apparatus 150 may determine the four square third coding units 1116a, 1116b, 1116c, and 1116d, or 1126a, 1126b, 1126c, and 1126d, by dividing the square first coding unit 1100.
  • the image decoding apparatus 150 may determine the order in which the third coding units 1116a, 1116b, 1116c, 1116d, 1126a, 1126b, 1126c, and 1126d are processed according to the form in which the first coding unit 1100 was divided.
  • the image decoding apparatus 150 may determine the third coding units 1116a, 1116b, 1116c, and 1116d by dividing, in the horizontal direction, the second coding units 1110a and 1110b generated by dividing in the vertical direction, and may process the third coding units 1116a, 1116b, 1116c, and 1116d according to the order 1117 of first processing, in the vertical direction, the third coding units 1116a and 1116b included in the left second coding unit 1110a and then processing, in the vertical direction, the third coding units 1116c and 1116d included in the right second coding unit 1110b.
  • the image decoding apparatus 150 may determine the third coding units 1126a, 1126b, 1126c, and 1126d by dividing, in the vertical direction, the second coding units 1120a and 1120b generated by dividing in the horizontal direction, and may process the third coding units 1126a, 1126b, 1126c, and 1126d according to the order 1127 of first processing, in the horizontal direction, the third coding units 1126a and 1126b included in the upper second coding unit 1120a and then processing, in the horizontal direction, the third coding units 1126c and 1126d included in the lower second coding unit 1120b.
  • referring to FIG. 11, the second coding units 1110a, 1110b, 1120a, and 1120b may each be divided to determine the square third coding units 1116a, 1116b, 1116c, 1116d, 1126a, 1126b, 1126c, and 1126d.
  • although the second coding units 1110a and 1110b determined by dividing in the vertical direction and the second coding units 1120a and 1120b determined by dividing in the horizontal direction are divided in different forms, the third coding units 1116a, 1116b, 1116c, 1116d, 1126a, 1126b, 1126c, and 1126d determined afterwards result in the first coding unit 1100 being divided into coding units of the same form.
  • FIG. 12 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an exemplary embodiment.
  • the image decoding apparatus 150 may determine the depth of a coding unit according to a predetermined criterion.
  • a predetermined criterion may be a length of a long side of a coding unit.
  • for example, when the length of the long side of the current coding unit becomes 1/2^n (n > 0) of the length of the long side of the coding unit before division, the depth of the current coding unit may be determined to be increased by n from the depth of the coding unit before division.
  • an encoding unit with an increased depth is expressed as a lower-depth encoding unit.
  • the image decoding apparatus 150 may divide the square first coding unit 1200 to determine the second coding unit 1202, the third coding unit 1204, and the like of lower depths. If the size of the square first coding unit 1200 is 2Nx2N, the second coding unit 1202 determined by dividing the width and height of the first coding unit 1200 by 1/2 may have a size of NxN.
  • furthermore, the third coding unit 1204 determined by dividing the width and height of the second coding unit 1202 by 1/2 may have a size of N/2xN/2.
  • in this case, the width and height of the third coding unit 1204 correspond to 1/2^2 of those of the first coding unit 1200. If the depth of the first coding unit 1200 is D, the depth of the second coding unit 1202, whose width and height are 1/2 of those of the first coding unit 1200, may be D+1, and the depth of the third coding unit 1204, whose width and height are 1/2^2 of those of the first coding unit 1200, may be D+2.
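  • The depth relation described above can be restated compactly; this is only a restatement of the rule in the preceding paragraphs, with the long-side length as the criterion:

```latex
% depth increases by n when the long side shrinks by a factor of 2^n
\mathrm{depth}(\mathrm{CU}) = \mathrm{depth}(\mathrm{CU}_{\text{before}})
  + \log_2\frac{\mathrm{longside}(\mathrm{CU}_{\text{before}})}{\mathrm{longside}(\mathrm{CU})},
\qquad
2N\times 2N \to N\times N \to \tfrac{N}{2}\times\tfrac{N}{2}
\;\Rightarrow\; D \to D+1 \to D+2 .
```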
  • based on block type information indicating a non-square shape (e.g., block type information '1: NS_VER' indicating a non-square whose height is longer than its width, or '2: NS_HOR' indicating a non-square whose width is longer than its height), the image decoding apparatus 150 may divide the non-square first coding unit 1210 or 1220 to determine the second coding unit 1212 or 1222, the third coding unit 1214 or 1224, and the like of lower depths.
  • the image decoding apparatus 150 may determine a second coding unit (e.g., 1202, 1212, 1222, etc.) by dividing at least one of the width and the height of the Nx2N first coding unit 1210. That is, the image decoding apparatus 150 may determine the NxN second coding unit 1202 or the NxN/2 second coding unit 1222 by dividing the first coding unit 1210 in the horizontal direction, or may determine the N/2xN second coding unit 1212 by dividing the first coding unit 1210 in the horizontal and vertical directions.
  • the image decoding apparatus 150 may determine a second coding unit (e.g., 1202, 1212, 1222, etc.) by dividing at least one of the width and the height of the 2NxN first coding unit 1220. That is, the image decoding apparatus 150 may determine the NxN second coding unit 1202 or the N/2xN second coding unit 1212 by dividing the first coding unit 1220 in the vertical direction, or may determine the NxN/2 second coding unit 1222 by dividing the first coding unit 1220 in the horizontal and vertical directions.
  • the image decoding apparatus 150 may determine a third coding unit (e.g., 1204, 1214, 1224, etc.) by dividing at least one of the width and the height of the NxN second coding unit 1202. That is, the image decoding apparatus 150 may determine the N/2xN/2 third coding unit 1204 by dividing the second coding unit 1202 in the vertical and horizontal directions, or may determine the N/2^2xN/2 third coding unit 1214 or the N/2xN/2^2 third coding unit 1224.
  • the image decoding apparatus 150 may also determine a third coding unit (e.g., 1204, 1214, 1224, etc.) by dividing at least one of the width and the height of the N/2xN second coding unit 1212. That is, the image decoding apparatus 150 may determine the N/2xN/2 third coding unit 1204 or the N/2xN/2^2 third coding unit 1224 by dividing the second coding unit 1212 in the horizontal direction, or may determine the N/2^2xN/2 third coding unit 1214 by dividing the second coding unit 1212 in the vertical and horizontal directions.
  • the image decoding apparatus 150 may also determine a third coding unit (e.g., 1204, 1214, 1224, etc.) by dividing at least one of the width and the height of the NxN/2 second coding unit 1222. That is, the image decoding apparatus 150 may determine the N/2xN/2 third coding unit 1204 or the N/2^2xN/2 third coding unit 1214 by dividing the second coding unit 1222 in the vertical direction, or may determine the N/2xN/2^2 third coding unit 1224 by dividing the second coding unit 1222 in the vertical and horizontal directions.
  • the image decoding apparatus 150 may divide a square coding unit (e.g., 1200, 1202, or 1204) in the horizontal direction or the vertical direction.
  • for example, the Nx2N first coding unit 1210 may be determined by dividing the 2Nx2N first coding unit 1200 in the vertical direction, or the 2NxN first coding unit 1220 may be determined by dividing it in the horizontal direction.
  • according to an embodiment, when the depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by dividing the 2Nx2N first coding unit 1200 in the horizontal or vertical direction may be the same as the depth of the first coding unit 1200, since the length of its longest side remains 2N.
  • the width and height of the third coding unit 1214 or 1224 may correspond to 1/2^2 of those of the first coding unit 1210 or 1220. If the depth of the first coding unit 1210 or 1220 is D, the depth of the second coding unit 1212 or 1222, whose width and height are 1/2 of those of the first coding unit 1210 or 1220, may be D+1, and the depth of the third coding unit 1214 or 1224, whose width and height are 1/2^2 of those of the first coding unit 1210 or 1220, may be D+2.
  • FIG. 13 illustrates the depths that can be determined according to the shapes and sizes of coding units, and an index (hereinafter referred to as a PID) for distinguishing the coding units, according to an embodiment.
  • the image decoding apparatus 150 may determine second coding units of various forms by dividing the square first coding unit 1300. Referring to FIG. 13, the image decoding apparatus 150 may divide the first coding unit 1300 in at least one of the vertical and horizontal directions according to the division type information to determine the second coding units 1302a, 1302b, 1304a, 1304b, 1306a, 1306b, 1306c, and 1306d. That is, the image decoding apparatus 150 may determine the second coding units 1302a, 1302b, 1304a, 1304b, 1306a, 1306b, 1306c, and 1306d based on the division type information for the first coding unit 1300.
  • the depths of the second coding units 1302a, 1302b, 1304a, 1304b, 1306a, 1306b, 1306c, and 1306d, determined according to the division type information for the square first coding unit 1300, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first coding unit 1300 is equal to the length of the long side of the non-square second coding units 1302a, 1302b, 1304a, and 1304b, the depth of the first coding unit 1300 and the depth of the non-square second coding units 1302a, 1302b, 1304a, and 1304b may equally be D.
  • on the other hand, when the image decoding apparatus 150 divides the first coding unit 1300 into the four square second coding units 1306a, 1306b, 1306c, and 1306d based on the division type information, since the length of one side of each of the second coding units 1306a, 1306b, 1306c, and 1306d is 1/2 of the length of one side of the first coding unit 1300, the depth of the second coding units 1306a, 1306b, 1306c, and 1306d may be D+1, one depth lower than the depth D of the first coding unit 1300.
  • the image decoding apparatus 150 divides a first encoding unit 1310 having a height greater than a width in a horizontal direction according to division type information, and generates a plurality of second encoding units 1312a, 1312b, 1314a, 1314b, and 1314c.
  • the image decoding apparatus 150 may divide the first coding unit 1320, whose width is longer than its height, in the vertical direction according to the division type information to determine the plurality of second coding units 1322a, 1322b, 1324a, 1324b, and 1324c.
  • the depths of the second coding units determined according to the division type information for the non-square first coding unit 1310 or 1320 may be determined based on the lengths of their long sides. For example, since the length of one side of the square second coding units 1312a and 1312b is 1/2 of the length of the long side of the non-square first coding unit 1310, whose height is longer than its width, the depth of the square second coding units 1312a and 1312b is D+1, one depth lower than the depth D of the non-square first coding unit 1310.
  • the image decoding apparatus 150 may divide the non-square first encoding unit 1310 into odd second encoding units 1314a, 1314b, and 1314c based on the division type information.
  • the odd number of second encoding units 1314a, 1314b and 1314c may include non-square second encoding units 1314a and 1314c and a square second encoding unit 1314b.
  • in this case, since the length of the long side of the non-square second coding units 1314a and 1314c and the length of one side of the square second coding unit 1314b are 1/2 of the length of one side of the first coding unit 1310, the depth of the second coding units 1314a, 1314b, and 1314c may be D+1, one depth lower than the depth D of the first coding unit 1310.
  • the image decoding apparatus 150 may determine the depths of the coding units associated with the non-square first coding unit 1320, whose width is longer than its height, in a manner corresponding to the method of determining the depths of the coding units associated with the first coding unit 1310.
  • in determining the indexes for distinguishing the divided coding units, when the odd number of coding units are not all of the same size, the image decoding apparatus 150 may determine the index based on the size ratio between the coding units. Referring to FIG. 13, the coding unit 1314b located in the middle among the odd number of coding units 1314a, 1314b, and 1314c has the same width as the other coding units 1314a and 1314c, but its height may be twice the height of the coding units 1314a and 1314c. That is, in this case, the middle coding unit 1314b may include two of the other coding units 1314a or 1314c.
  • therefore, when the index (PID) of the middle coding unit 1314b is 1 according to the scan order, the index of the coding unit 1314c located next may be 3, increased by 2; that is, there may be a discontinuity in the index values.
  • the image decoding apparatus 150 may determine whether the odd number of divided coding units are not all of the same size based on whether there is a discontinuity in the indexes for distinguishing the coding units, as in the sketch below.
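  • A small sketch of the index-discontinuity idea described above, assuming the 1:2:1 arrangement of FIG. 13 in which the middle unit counts as two units when indexes are assigned by size ratio (function and variable names are illustrative, not part of the disclosure):

```python
def assign_pids(unit_heights):
    """Assign a part index (PID) to each coding unit of a split.
    A unit that is k times the smallest unit advances the index by k,
    so unequal splits produce a jump in the PID sequence."""
    smallest = min(unit_heights)
    pids, next_pid = [], 0
    for h in unit_heights:
        pids.append(next_pid)
        next_pid += h // smallest
    return pids

def has_discontinuity(pids):
    return any(b - a > 1 for a, b in zip(pids, pids[1:]))

pids = assign_pids([4, 8, 4])        # 1314a, 1314b, 1314c of FIG. 13
print(pids)                          # [0, 1, 3] -> middle PID 1, next PID 3
print(has_discontinuity(pids))       # True -> units are not all the same size
```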
  • the image decoding apparatus 150 may determine whether the current coding unit is divided into a specific division form based on the values of the indexes for distinguishing the plurality of coding units divided and determined from the current coding unit. Referring to FIG. 13, the image decoding apparatus 150 may divide the rectangular first coding unit 1310, whose height is longer than its width, to determine the even number of coding units 1312a and 1312b or the odd number of coding units 1314a, 1314b, and 1314c.
  • the image decoding apparatus 150 may use an index (PID) indicating each coding unit to identify each of the plurality of coding units.
  • the PID may be obtained at a sample of a predetermined position of each coding unit (e.g., the upper left sample).
  • the image decoding apparatus 150 may determine a coding unit at a predetermined position among the plurality of coding units by using the indexes for distinguishing the coding units. According to an embodiment, when the division type information for the rectangular first coding unit 1310, whose height is longer than its width, indicates dividing into three coding units, the image decoding apparatus 150 may divide the first coding unit 1310 into the three coding units 1314a, 1314b, and 1314c. The image decoding apparatus 150 may assign an index to each of the three coding units 1314a, 1314b, and 1314c.
  • the image decoding apparatus 150 may compare the indices of the respective encoding units in order to determine the middle encoding unit among the encoding units divided into odd numbers.
  • based on the indexes of the coding units, the image decoding apparatus 150 may determine the coding unit 1314b, whose index corresponds to the middle value among the indexes, as the coding unit at the middle position among the coding units determined by dividing the first coding unit 1310.
  • in determining the indexes for distinguishing the divided coding units, when the coding units are not all of the same size, the image decoding apparatus 150 may determine the index based on the size ratio between the coding units.
  • referring to FIG. 13, the coding unit 1314b generated by dividing the first coding unit 1310 may have the same width as the other coding units 1314a and 1314c, but its height may be double the height of the coding units 1314a and 1314c.
  • in this case, when the index (PID) of the middle coding unit 1314b is 1, the index of the coding unit 1314c located next may be 3, increased by 2.
  • when the index increases discontinuously in this way, the image decoding apparatus 150 may determine that the current coding unit is divided into a plurality of coding units including a coding unit having a size different from those of the other coding units.
  • the image decoding apparatus 150 may divide the current coding unit in such a form that the coding unit at a predetermined position (e.g., the middle coding unit) among the odd number of coding units has a form different from those of the other coding units.
  • in this case, the image decoding apparatus 150 may determine the coding unit having the different size by using the index (PID) for the coding units.
  • however, the index described above and the size or position of the coding unit at the predetermined position to be determined are merely specific examples for explaining an embodiment and should not be construed as limiting; various indexes and various positions and sizes of coding units may be used.
  • the image decoding apparatus 150 may use a predetermined data unit in which recursive division of encoding units starts.
  • FIG. 14 shows that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
  • a predetermined data unit may be defined as a data unit in which an encoding unit starts to be recursively segmented using at least one of block type information and partition type information. That is, it may correspond to a coding unit of the highest depth used in a process of determining a plurality of coding units for dividing a current picture.
  • a predetermined data unit is referred to as a reference data unit for convenience of explanation.
  • the reference data unit may represent a predetermined size and shape.
  • the reference encoding unit may comprise samples of MxN.
  • M and N may be equal to each other, and may each be an integer expressed as a power of 2. That is, the reference data unit may have a square or non-square shape, and may later be divided into an integer number of coding units.
  • the image decoding apparatus 150 may divide the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 150 may divide a plurality of reference data units for dividing a current picture by using the division information for each reference data unit.
  • the segmentation process of the reference data unit may correspond to the segmentation process using a quad-tree structure.
  • the image decoding apparatus 150 may determine in advance the minimum size that the reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 150 may determine reference data units of various sizes equal to or larger than the minimum size, and may determine at least one coding unit based on the determined reference data unit by using the block type information and the division type information.
  • the image decoding apparatus 150 may use a square-shaped reference encoding unit 1400 or a non-square-shaped reference encoding unit 1402.
  • the shape and size of the reference coding unit may be determined for each of various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum coding unit, and the like).
  • the receiver 160 of the video decoding apparatus 150 may acquire at least one of the information on the format of the reference encoding unit and the size of the reference encoding unit from the bit stream for each of the various data units .
  • the process of determining at least one coding unit included in the square reference coding unit 1400 has been described above through the process of dividing the current coding unit 300 of FIG. 10, and the process of determining at least one coding unit included in the non-square reference coding unit 1402 has been described above through the process of dividing the current coding unit 1100 or 1150 of FIG. 11, so a detailed description thereof will be omitted.
  • the image decoding apparatus 150 may use an index for identifying the size and shape of the reference coding unit. That is, for each slice, slice segment, maximum coding unit, and the like, which are data units satisfying a predetermined condition (e.g., a data unit having a size equal to or smaller than a slice) among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum coding unit, and the like), the receiver 160 may obtain from the bitstream only an index for identifying the size and shape of the reference coding unit.
  • the image decoding apparatus 150 can determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index.
  • when the information on the shape of the reference coding unit and the information on the size of the reference coding unit are obtained from the bitstream and used for each relatively small data unit, the usage efficiency of the bitstream may not be good; therefore, instead of directly obtaining the information on the shape and the size of the reference coding unit, only the index may be obtained and used.
  • in this case, at least one of the size and the shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined. That is, the image decoding apparatus 150 may select at least one of the predetermined size and shape of the reference coding unit according to the index, thereby determining at least one of the size and the shape of the reference coding unit included in the data unit from which the index was obtained.
  • the image decoding apparatus 150 may use at least one reference encoding unit included in one maximum encoding unit. That is, the maximum encoding unit for dividing an image may include at least one reference encoding unit, and the encoding unit may be determined through a recursive division process of each reference encoding unit. According to an exemplary embodiment, at least one of the width and the height of the maximum encoding unit may correspond to at least one integer multiple of the width and height of the reference encoding unit. According to an exemplary embodiment, the size of the reference encoding unit may be a size obtained by dividing the maximum encoding unit n times according to a quadtree structure.
  • that is, the image decoding apparatus 150 may determine the reference coding unit by dividing the maximum coding unit n times according to the quadtree structure, and may divide the reference coding unit based on at least one of the block type information and the division type information, as in the sketch below.
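  • Under the stated quadtree relation, the size of the reference coding unit follows directly from the maximum coding unit and the number of quadtree splits n; the sketch below assumes square units with power-of-two sizes and is only an illustration of that relation.

```python
def reference_unit_size(max_unit_size, n):
    """Side length of the reference coding unit after dividing the maximum
    coding unit n times according to a quadtree (each split halves the side)."""
    return max_unit_size >> n

def reference_units_per_max_unit(max_unit_size, n):
    side = max_unit_size // reference_unit_size(max_unit_size, n)
    return side * side

print(reference_unit_size(128, 2))           # 32
print(reference_units_per_max_unit(128, 2))  # 16 reference units per max unit
```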
  • FIG. 15 shows a processing block serving as a reference for determining a determination order of a reference encoding unit included in a picture 1500 according to an embodiment.
  • the image decoding apparatus 150 may determine at least one processing block that divides a picture.
  • the processing block is a data unit including at least one reference coding unit for dividing an image, and the at least one reference coding unit included in the processing block may be determined in a specific order. That is, the determination order of the at least one reference coding unit determined in each processing block may correspond to one of various orders in which a reference coding unit can be determined, and the reference coding unit determination order determined for each processing block may differ from one processing block to another.
  • the determination order of the reference coding unit determined for each processing block may be one of various orders such as a raster scan, a Z-scan, an N-scan, an up-right diagonal scan, a horizontal scan, and a vertical scan; however, the determinable order should not be construed as being limited to these scan orders.
  • the image decoding apparatus 150 may obtain information on the size of the processing block to determine the size of the at least one processing block included in the image.
  • the image decoding apparatus 150 may obtain information on the size of the processing block from the bitstream to determine the size of the at least one processing block included in the image.
  • the size of such a processing block may be a predetermined size of a data unit represented by information on the size of the processing block.
  • the receiving unit 160 of the image decoding apparatus 150 may obtain information on the size of the processing block from the bit stream for each specific data unit.
  • information on the size of a processing block may be obtained from the bitstream in units of data such as an image, a sequence, a picture, a slice, or a slice segment. That is, the receiver 160 may obtain the information on the size of the processing block from the bitstream for each of these data units, and the image decoding apparatus 150 may determine the size of at least one processing block that divides the picture by using the obtained information on the size of the processing block; the size of the processing block may be an integer multiple of the size of the reference coding unit.
  • the image decoding apparatus 150 may determine the sizes of the processing blocks 1502 and 1512 included in the picture 1500.
  • the video decoding apparatus 150 may determine the size of the processing block based on information on the size of the processing block obtained from the bitstream.
  • for example, the image decoding apparatus 150 may determine the horizontal size of the processing blocks 1502 and 1512 to be four times the horizontal size of the reference coding unit, and the vertical size to be four times the vertical size of the reference coding unit.
  • the image decoding apparatus 150 may determine an order in which at least one reference encoding unit is determined in at least one processing block.
  • the image decoding apparatus 150 may determine each of the processing blocks 1502 and 1512 included in the picture 1500 based on the size of the processing block, and may determine the determination order of at least one reference coding unit included in the processing blocks 1502 and 1512.
  • the determination of the reference encoding unit may include determining the size of the reference encoding unit according to an embodiment.
  • the image decoding apparatus 150 may obtain, from the bitstream, information on the determination order of at least one reference coding unit included in at least one processing block, and may determine the order in which the at least one reference coding unit is determined by using the obtained information.
  • the information on the determination order may define the order or direction in which reference coding units are determined within the processing block. That is, the order in which reference coding units are determined may be independently determined for each processing block.
  • the image decoding apparatus 150 may obtain information on a determination order of a reference encoding unit from a bitstream for each specific data unit.
  • the receiving unit 160 may acquire information on the order of determination of a reference encoding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information on the determination order of the reference encoding unit indicates the reference encoding unit determination order in the processing block, the information on the determination order can be obtained for each specific data unit including an integer number of processing blocks.
  • the image decoding apparatus 150 may determine at least one reference encoding unit based on the determined order according to an embodiment.
  • the receiver 160 may obtain, from the bitstream, information on the reference coding unit determination order as information related to the processing blocks 1502 and 1512, and the image decoding apparatus 150 may determine the order in which at least one reference coding unit included in the processing blocks 1502 and 1512 is determined, and may determine the at least one reference coding unit included in the picture 1500 according to that determination order.
  • the image decoding apparatus 150 may determine a determination order 1504 and 1514 of at least one reference encoding unit associated with each of the processing blocks 1502 and 1512. For example, when information on the determination order of reference encoding units is obtained for each processing block, the reference encoding unit determination order associated with each processing block 1502 and 1512 may be different for each processing block.
  • when the reference coding unit determination order 1504 related to the processing block 1502 is a raster scan order, the reference coding units included in the processing block 1502 may be determined according to the raster scan order.
  • on the other hand, when the reference coding unit determination order 1514 related to the other processing block 1512 is the reverse of the raster scan order, the reference coding units included in the processing block 1512 may be determined according to the reverse of the raster scan order, as illustrated in the sketch below.
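  • A sketch of how a determination order signalled per processing block could be applied, assuming only the two orders shown in FIG. 15 (raster scan and its reverse); the function name and order labels are illustrative assumptions:

```python
def reference_unit_positions(block_origin, block_w, block_h, ref_size, order):
    """Yield upper-left positions of reference coding units inside one
    processing block, in the signalled determination order."""
    x0, y0 = block_origin
    positions = [(x0 + x * ref_size, y0 + y * ref_size)
                 for y in range(block_h // ref_size)
                 for x in range(block_w // ref_size)]      # raster order
    if order == 'RASTER_REVERSE':                          # e.g. order 1514
        positions.reverse()
    return positions

# Processing block 1502: raster scan; processing block 1512: reverse raster
print(reference_unit_positions((0, 0), 64, 64, 32, 'RASTER')[:2])
print(reference_unit_positions((64, 0), 64, 64, 32, 'RASTER_REVERSE')[:2])
```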
  • a combining mode refers to a set of encoding modes that are applied together to a block.
  • when a combining mode is applied to the current block at the encoding end, only a syntax element related to the combining mode is included in the bitstream, and the plurality of syntax elements corresponding to the encoding modes included in the combining mode are omitted, so the compression rate is increased. Therefore, the coding efficiency of the image is improved by designating, as a combining mode, encoding modes that are highly likely to be used together or that have high coding efficiency when used together.
  • hereinafter, how the combining mode is applied will be described in detail; a hypothetical parsing sketch is given below first.
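  • The bit saving described above can be illustrated with a hypothetical parsing routine: when the combining mode flag is set, the individual syntax elements for the bundled encoding modes are not read from the bitstream but derived from a predefined combination. The mode names and table below are assumptions for illustration, not signalled values defined by this disclosure.

```python
# Hypothetical combination: modes that are likely to be used together.
COMBINING_MODE = {
    'luma_intra_mode': 'DC',
    'chroma_intra_mode': 'LM_CHROMA',
    'transform_mode': 'EMT',
}

def parse_modes(read_flag, read_syntax_element):
    """read_flag() parses the combining mode flag; read_syntax_element(name)
    parses one ordinary syntax element. Only one of the two paths is taken."""
    if read_flag():                       # combining mode applied to the block
        return dict(COMBINING_MODE)       # nothing more to parse for these modes
    return {name: read_syntax_element(name) for name in COMBINING_MODE}

# Toy usage: a block with the flag set needs no further mode syntax elements.
print(parse_modes(lambda: True, lambda name: 'PARSED_' + name))
```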
  • FIG. 16 shows a block diagram of a video decoding apparatus 1600 according to an embodiment for determining a coding mode according to a combining mode.
  • the video decoding apparatus 1600 includes a coding mode determination unit 1610 and a decoding unit 1620.
  • in FIG. 16, the encoding mode determination unit 1610 and the decoding unit 1620 are shown as separate units; however, according to an embodiment, the encoding mode determination unit 1610 and the decoding unit 1620 may be combined into a single unit.
  • in FIG. 16, the encoding mode determination unit 1610 and the decoding unit 1620 are shown as units located in one device; however, the devices performing the functions of the encoding mode determination unit 1610 and the decoding unit 1620 do not necessarily have to be physically adjacent. Therefore, according to an embodiment, the encoding mode determination unit 1610 and the decoding unit 1620 may be distributed.
  • the coding mode determination unit 1610 and the decoding unit 1620 may be implemented by one processor according to an embodiment. And may be implemented by a plurality of processors according to an embodiment.
  • The encoding mode determination unit 1610 obtains, from the bitstream, a combining mode flag of the current block indicating whether a plurality of encoding modes included in the combining mode are applied to the current block.
  • The combining mode may include all kinds of coding modes that can be applied to the current block. Encoding modes that are incompatible with each other cannot be included in one combining mode. For example, the DC mode and the vertical mode, both being luma intra prediction modes, cannot be included together in one combining mode.
  • the combining mode may include a luma intra prediction mode, a chroma intra prediction mode, an encoding mode relating to intra technology, an encoding mode relating to inter technology, a conversion mode, an in-loop filter mode, and the like.
  • the encoding mode related to the intra technique may include a CCIP (Cross Component Intra Prediction) mode, a MPI (Multi Parameter Intra-prediction) mode, and a MIP (Multi-combined Intra Prediction) mode.
  • the encoding mode for the inter-technology may include an affine mode, an adaptive motion vector resolution (AMVR) mode, and an illumination compensation mode.
  • the conversion mode may include an EMT mode (Enhanced Multiple Transform), a Non-Separable Secondary Transform (NSST) mode, and a Rotational Transform (ROT) mode.
  • EMT stands for Enhanced Multiple Transform, NSST for Non-Separable Secondary Transform, and ROT for Rotational Transform.
  • the in-loop filter mode may include a deblocking filter mode, a sample adaptive offset (SAO) filter mode, and a bilateral filter mode.
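  • The compatible coding modes grouped by a combining mode can be pictured with a small data structure, as in the sketch below; the field names and mode names are placeholders chosen for illustration, not identifiers defined by this disclosure.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class CombiningMode:
          luma_intra_mode: Optional[str] = None      # e.g. "DC", "MPM"
          chroma_intra_mode: Optional[str] = None    # e.g. "LM_CHROMA", "DC"
          conversion_mode: Optional[str] = None      # e.g. "DCT", "DST", "EMT"
          intra_filter_mode: Optional[str] = None    # e.g. "MPI", "MIP"
          in_loop_filter_mode: Optional[str] = None  # e.g. "SAO", "BILATERAL"

      # One possible combining mode: DC luma prediction, LM chroma prediction, DCT transform.
      example_mode = CombiningMode(luma_intra_mode="DC", chroma_intra_mode="LM_CHROMA", conversion_mode="DCT")
      print(example_mode)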
  • the combining mode may include one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes. Two or more combining modes may exist depending on the number of combinations of the luma intra prediction mode and the chroma intra prediction mode which are likely to be used together.
  • For example, the luma intra prediction mode included in the combining mode may be the DC mode, and the chroma intra prediction mode may be the LM chroma mode.
  • As another example, both the luma intra prediction mode and the chroma intra prediction mode may be the DC mode.
  • both the luma intra prediction mode and the chroma intra prediction mode included in the combining mode can be determined to be the MPM mode.
  • The MPM mode refers to an intra prediction mode that is most likely to be used for the current block. Therefore, when both the luma intra prediction mode and the chroma intra prediction mode included in the combining mode are the MPM mode, the coding efficiency can be improved.
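  • A commonly used way of deriving MPM candidates from neighbouring blocks is sketched below; the list length of three and the fallback modes are illustrative assumptions, not the derivation prescribed by this disclosure.

      def build_mpm_list(left_mode, above_mode):
          """Collect up to three distinct candidate intra modes, preferring the neighbours' modes."""
          candidates = []
          for mode in (left_mode, above_mode, "PLANAR", "DC", "VERTICAL"):
              if mode is not None and mode not in candidates:
                  candidates.append(mode)
              if len(candidates) == 3:
                  break
          return candidates

      print(build_mpm_list("DC", "DC"))          # ['DC', 'PLANAR', 'VERTICAL']
      print(build_mpm_list(None, "ANGULAR_30"))  # ['ANGULAR_30', 'PLANAR', 'DC']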
  • the combining mode may be set to include a luma intra prediction mode and a chroma intra prediction mode of a neighboring block at a specific position adjacent to the current block. Therefore, when the combining mode is applied to the current block, the luma intra prediction mode and the chroma intra prediction mode of the current block are determined to be the same as the luma intra prediction mode and the chroma intra prediction mode of the adjacent block at the specific position.
  • The combining mode may be set to include a luma intra prediction mode and a chroma intra prediction mode obtained by analyzing the current or a previous picture. For example, by analyzing the prediction mode frequency of the current picture, the most frequently used luma intra prediction mode and the most frequently used chroma intra prediction mode can be incorporated into the combining mode.
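  • A sketch of such a frequency analysis is given below: the luma and chroma intra prediction modes used most often in already-coded blocks become the candidates placed in the combining mode. The block list is a hypothetical input, not part of the disclosed syntax.

      from collections import Counter

      def most_used_modes(decoded_blocks):
          """decoded_blocks: iterable of (luma_mode, chroma_mode) pairs from the current or previous picture."""
          luma_counts = Counter(luma for luma, _ in decoded_blocks)
          chroma_counts = Counter(chroma for _, chroma in decoded_blocks)
          return luma_counts.most_common(1)[0][0], chroma_counts.most_common(1)[0][0]

      blocks = [("DC", "LM_CHROMA"), ("DC", "DC"), ("VERTICAL", "LM_CHROMA"), ("DC", "LM_CHROMA")]
      print(most_used_modes(blocks))  # ('DC', 'LM_CHROMA') -> candidate modes for the combining mode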
  • the combining mode may include one of a plurality of conversion modes.
  • The combining mode may include one of a plurality of luma intra prediction modes together with a conversion mode.
  • The combining mode may include one of a plurality of chroma intra prediction modes together with a conversion mode. Two or more combining modes may exist depending on the number of combinations of the luma intra prediction mode, the chroma intra prediction mode, and the conversion mode that are likely to be used together.
  • The combining mode may include a luma intra prediction mode, a chroma intra prediction mode, and a conversion mode that are likely to be used together, so that the video coding efficiency can be improved. In particular, since the correlation between the luma intra prediction mode and the conversion mode is high, including a luma intra prediction mode-conversion mode pair in the combining mode is likely to improve the video coding efficiency.
  • For example, the combining mode may include the DC mode as the luma intra prediction mode, the DC mode or the LM chroma mode as the chroma intra prediction mode, and the DCT conversion mode or the DST conversion mode as the conversion mode.
  • As another example, the combining mode may include an MPM mode or the mode of a neighboring block as the luma intra prediction mode and the chroma intra prediction mode, and a DCT conversion mode or a DST conversion mode as the conversion mode.
  • the combining mode may include a plurality of conversion modes. If the combining mode includes a conversion mode with a high probability of being selected statistically, the video coding efficiency is likely to be improved.
  • The combining mode may include information on whether a specific coding mode is applied, how it is applied, and its application range.
  • The combining mode may include information regarding an intra prediction filter mode for filtering the intra prediction result of the current block. Since a prediction block generated by intra prediction can have a non-flat and unnatural pattern that is not suitable for frequency transformation, the coding efficiency can be enhanced by including the intra prediction mode and the intra prediction filter mode in the combining mode. Therefore, if the combining mode includes information indicating that the intra prediction filter mode is applied to the current block, the intra prediction filter mode can be applied to the current block without parsing additional encoding information for the intra prediction filter mode from the bitstream.
  • The combining mode may include information indicating that the affine mode is not applied and that the illumination compensation mode is not applied. If the combining mode includes information about encoding modes that are statistically unlikely to be applied, or unlikely to be applied together with the other modes of the combination, the video coding efficiency is likely to be improved.
  • the combining mode may include one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, one of a plurality of conversion modes, and information on a specific encoding mode.
  • The specific encoding mode may be the above-mentioned intra prediction filter mode, or it may be an in-loop filter mode. If the coding modes are compatible with each other, information on two or more coding modes may be included in the combining mode.
  • For example, when the DC mode is included as the luma intra prediction mode, the combining mode may include information for applying the MPI mode or the MIP mode, which belong to the intra prediction filter modes.
  • This is because boundary filtering is generally required for a prediction block generated according to the DC mode.
  • The combining mode may include one or more of information on the luma intra prediction mode, information on the chroma intra prediction mode, information on the conversion mode, and information on a specific coding mode.
  • Any combination of coding modes may be allowed in the combining mode as long as the coding modes included in the combining mode are compatible with one another.
  • For example, the combining mode may include information applying a DC mode as the luma intra prediction mode, an LM chroma mode as the chroma intra prediction mode, and an MPI mode or a MIP mode as the intra prediction filter mode.
  • The MPI mode is an intra prediction filter mode in which the prediction value of the current sample is filtered using at least two samples adjacent to the current sample. Specifically, according to the MPI mode, a weighted average of the predicted value of the current sample, the predicted value of the sample adjacent above the current sample, and the predicted value of the sample adjacent to the left or right of the current sample is determined as the new predicted value of the current sample.
  • The current block is filtered by determining, for every sample of the current block, such a weighted average value based on the adjacent samples as the new predicted value.
  • The MIP mode is an intra prediction filter mode in which the prediction value of the current sample is filtered using at least two reference samples adjacent to the current block, selected according to the position of the current sample. Specifically, according to the MIP mode, a weighted average of the predicted value of the current sample, the value of the reference sample having the same x-coordinate as the current sample, and the value of the reference sample having the same y-coordinate as the current sample is determined as the new predicted value of the current sample. The current block is filtered by determining, for every sample of the current block, such a weighted average value based on the reference samples as the new predicted value.
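  • Minimal sketches of the two filters are shown below. The weights (2, 1, 1), the rounding, and the handling of samples on the block boundary are illustrative assumptions; the disclosure only states that a weighted average of the listed samples is used.

      def mpi_filter(pred):
          """MPI-style filtering: mix each predicted sample with its upper and left neighbouring predictions."""
          h, w = len(pred), len(pred[0])
          out = [row[:] for row in pred]
          for y in range(h):
              for x in range(w):
                  above = pred[y - 1][x] if y > 0 else pred[y][x]
                  left = pred[y][x - 1] if x > 0 else pred[y][x]
                  out[y][x] = (2 * pred[y][x] + above + left + 2) // 4
          return out

      def mip_filter(pred, top_ref, left_ref):
          """MIP-style filtering: mix each predicted sample with the reference samples sharing its x and y coordinates.
          top_ref[x] is the reference sample above the block, left_ref[y] the reference sample to its left."""
          h, w = len(pred), len(pred[0])
          out = [row[:] for row in pred]
          for y in range(h):
              for x in range(w):
                  out[y][x] = (2 * pred[y][x] + top_ref[x] + left_ref[y] + 2) // 4
          return out

      pred = [[100, 100], [100, 100]]
      print(mpi_filter(pred))                          # unchanged for a flat prediction block
      print(mip_filter(pred, [140, 140], [140, 140]))  # [[120, 120], [120, 120]], pulled towards the references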
  • Depending on how the block structures of the luma picture and the chroma picture are determined, the encoding modes included in the combining mode may be different.
  • block partitioning of luma and chroma pictures can be performed separately for higher compression efficiency. In this case, the association of the block structure between the luma picture and the chroma picture may be low.
  • the block division can be performed on the chroma picture based on the block structure of the luma picture. In this case, the block structure of the luma picture and the chroma picture can be determined in the same manner.
  • the combining mode can be set to include one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes.
  • the combining mode can be set to include only one of a plurality of luma intra prediction modes.
  • Alternatively, the combining mode can be set to include one of the plurality of luma intra prediction modes and one of the plurality of chroma intra prediction modes.
  • the combining mode can be set to include one of the plurality of chroma intra prediction modes.
  • The correspondence between a luma block and a chroma block can be determined from the sizes and positions of the luma block and the chroma block. For example, if the size and position of a luma block and a chroma block are the same, the luma block and the chroma block can be set to correspond to each other. If the sizes of the luma block and the chroma block are different, the chroma block can be set to correspond to the luma block located at the upper left or at the center of the chroma block.
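  • The sketch below illustrates one way of mapping a chroma block to a corresponding luma block when the block structures differ; the 4:2:0 coordinate scaling and the exact centre computation are assumptions made for illustration.

      def corresponding_luma_position(chroma_x, chroma_y, chroma_w=0, chroma_h=0, rule="top_left"):
          """Return the luma-sample position corresponding to a chroma block at (chroma_x, chroma_y).
          In 4:2:0 video each chroma sample covers a 2x2 luma area, hence the factor of two."""
          luma_x, luma_y = 2 * chroma_x, 2 * chroma_y
          if rule == "center":
              luma_x += chroma_w  # half of the 2*chroma_w-wide luma area
              luma_y += chroma_h  # half of the 2*chroma_h-high luma area
          return luma_x, luma_y

      print(corresponding_luma_position(8, 4))                       # (16, 8), upper-left rule
      print(corresponding_luma_position(8, 4, 4, 4, rule="center"))  # (20, 12), centre rule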
  • The combining mode may be set to include one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, and information about an intra prediction filter mode.
  • Alternatively, the combining mode can be set to include only one of the plurality of luma intra prediction modes and information about the intra prediction filter mode.
  • The combining mode may not be used for all blocks included in the current slice.
  • For example, the combining mode can be set to be allowed only when the current slice is a P slice type or a B slice type. Therefore, when the current slice is an I slice type, the encoding mode determination unit 1610 can determine that the combining mode is not used for the current block without acquiring the combining mode flag.
  • The encoding mode determination unit 1610 determines the plurality of encoding modes of the combining mode to be the encoding modes of the current block when the combining mode flag of the current block indicates that the plurality of encoding modes included in the combining mode are applied to the current block.
  • The combining mode flag has a value of 0 or 1. According to an embodiment, when the combining mode flag is 0, the plurality of coding modes included in the combining mode are not applied to the current block at the same time. Conversely, when the combining mode flag is 1, the plurality of coding modes included in the combining mode are applied to the current block at the same time.
  • When there are a plurality of combining modes, the coding mode determination unit 1610 can obtain a combining mode index indicating one of the plurality of combining modes.
  • the coding mode determination unit 1610 may determine a plurality of intra prediction modes of the combined mode indicated by the combined mode index to be the coding modes of the current block.
  • The encoding mode determination unit 1610 may obtain the encoding mode information from the bitstream when the combining mode flag of the current block indicates that the plurality of encoding modes included in the combining mode are not applied to the current block.
  • the encoding mode determination unit 1610 can determine the encoding modes of the current block according to the encoding mode information.
  • The encoding mode determination unit 1610 may obtain a combining mode permission flag indicating whether the combining mode is allowed for a data unit including the current block.
  • When the combining mode permission flag indicates that the combining mode is allowed, the coding mode determination unit 1610 can obtain the combining mode flag for the current block.
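  • The parsing flow described above can be summarised as in the sketch below; the reader stub, its method names, and the two example combining modes are placeholders chosen for illustration and are not syntax defined by this disclosure.

      COMBINING_MODES = [
          {"luma": "DC", "chroma": "LM_CHROMA"},  # combining mode index 0 (illustrative)
          {"luma": "MPM", "chroma": "DC"},        # combining mode index 1 (illustrative)
      ]

      class Reader:  # minimal stand-in for a bitstream reader
          def __init__(self, symbols): self.symbols = list(symbols)
          def read_flag(self): return self.symbols.pop(0)
          def read_index(self): return self.symbols.pop(0)
          def read_luma_intra_mode(self): return self.symbols.pop(0)
          def read_chroma_intra_mode(self): return self.symbols.pop(0)

      def decode_block_modes(reader, combining_mode_allowed):
          if combining_mode_allowed and reader.read_flag() == 1:  # combining mode flag
              index = reader.read_index() if len(COMBINING_MODES) > 1 else 0
              return dict(COMBINING_MODES[index])                 # no per-mode syntax elements parsed
          # combining mode not applied: parse each coding mode individually
          return {"luma": reader.read_luma_intra_mode(), "chroma": reader.read_chroma_intra_mode()}

      print(decode_block_modes(Reader([1, 1]), True))               # second combining mode
      print(decode_block_modes(Reader([0, "PLANAR", "DM"]), True))  # individual mode information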
  • the decoding unit 1620 can decode the current block according to the encoding modes of the current block.
  • FIGS. 17A to 22B show methods of determining the coding mode of a current block according to a combining mode flag and a combining mode index.
  • FIGS. 17A and 17B show a method of determining the coding mode of the current block when one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes are included in the combining mode.
  • In FIG. 17A, the combining mode includes a luma intra prediction mode l and a chroma intra prediction mode c. Therefore, when the combining mode flag is 1, the luma intra prediction mode l and the chroma intra prediction mode c are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode and chroma intra prediction mode information for the chroma intra prediction mode are obtained from the bitstream, and the luma intra prediction mode and the chroma intra prediction mode of the current block are determined according to the luma intra prediction mode information and the chroma intra prediction mode information, respectively.
  • In FIG. 17B, the first combining mode includes the luma intra prediction mode l1 and the chroma intra prediction mode c1.
  • the second combined mode includes the luma intra prediction mode l2 and the chroma intra prediction mode c2.
  • When the combining mode flag is 1 and the combining mode index is 0, the luma intra prediction mode l1 and the chroma intra prediction mode c1 are applied to the current block according to the first combining mode.
  • When the combining mode flag is 1 and the combining mode index is 1, the luma intra prediction mode l2 and the chroma intra prediction mode c2 are applied to the current block according to the second combining mode.
  • When the combining mode flag is 0, the luma intra prediction mode and the chroma intra prediction mode of the current block are determined according to the luma intra prediction mode information and the chroma intra prediction mode information obtained from the bitstream, respectively, as in FIG. 17A.
  • FIGS. 18A and 18B show a method of determining the coding mode of the current block when one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, and one of a plurality of conversion modes are included in the combining mode.
  • In FIG. 18A, the combining mode includes a luma intra prediction mode l, a chroma intra prediction mode c, and a conversion mode t. Therefore, when the combining mode flag is 1, the luma intra prediction mode l, the chroma intra prediction mode c, and the conversion mode t are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode, chroma intra prediction mode information for the chroma intra prediction mode, and conversion mode information for the conversion mode are obtained from the bitstream, and the luma intra prediction mode, the chroma intra prediction mode, and the conversion mode of the current block are determined according to the luma intra prediction mode information, the chroma intra prediction mode information, and the conversion mode information, respectively.
  • In FIG. 18B, the first combining mode includes the luma intra prediction mode l1, the chroma intra prediction mode c1, and the conversion mode t1.
  • the second combined mode includes the luma intra prediction mode l2, the chroma intra prediction mode c2 and the conversion mode t2.
  • When the combining mode flag is 1 and the combining mode index is 0, the luma intra prediction mode l1, the chroma intra prediction mode c1, and the conversion mode t1 are applied to the current block according to the first combining mode. When the combining mode flag is 1 and the combining mode index is 1, the luma intra prediction mode l2, the chroma intra prediction mode c2, and the conversion mode t2 are applied to the current block according to the second combining mode.
  • When the combining mode flag is 0, the luma intra prediction mode, the chroma intra prediction mode, and the conversion mode of the current block are determined according to the luma intra prediction mode information, the chroma intra prediction mode information, and the conversion mode information obtained from the bitstream, respectively.
  • FIGS. 19A and 19B show a method of determining the coding mode of the current block when the combining mode includes one of a plurality of luma intra prediction modes, one of a plurality of chroma intra prediction modes, one of a plurality of conversion modes, and additionally information about whether or not the encoding mode m1 is applied.
  • FIG. 19A shows an embodiment in which information on the combining mode is divided into a combining mode flag and a combining mode index.
  • In FIG. 19A, the first combining mode includes the luma intra prediction mode l1, the chroma intra prediction mode c1, the conversion mode t1, and the encoding mode m1.
  • The second combining mode includes the luma intra prediction mode l1, the chroma intra prediction mode c1, the conversion mode t1, and information that the encoding mode m1 is not applied.
  • The third combining mode includes the luma intra prediction mode l2, the chroma intra prediction mode c2, the conversion mode t2, and the encoding mode m1.
  • When the combining mode flag is 1 and the combining mode index is 0, the luma intra prediction mode l1, the chroma intra prediction mode c1, the conversion mode t1, and the encoding mode m1 are applied to the current block according to the first combining mode.
  • When the combining mode flag is 1 and the combining mode index is 10, the luma intra prediction mode l1, the chroma intra prediction mode c1, and the conversion mode t1 are applied to the current block according to the second combining mode, and the encoding mode m1 is determined not to be used.
  • The combining mode index of the most frequently used combining mode among the first combining mode, the second combining mode, and the third combining mode may be determined to be 0.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode, chroma intra prediction mode information for the chroma intra prediction mode, conversion mode information for the conversion mode, and m1 information for the encoding mode m1 are obtained from the bitstream.
  • The luma intra prediction mode, the chroma intra prediction mode, and the conversion mode are determined according to the luma intra prediction mode information, the chroma intra prediction mode information, and the conversion mode information, and whether or not the encoding mode m1 is applied is determined according to the m1 information.
  • FIG. 19B shows an embodiment in which the information on the combining mode includes only the combining mode index.
  • The embodiment of FIG. 19B and the embodiment of FIG. 19A are substantially the same, because the first bin of the combining mode index serves as the combining mode flag.
  • When the combining mode index is 10, the luma intra prediction mode l1, the chroma intra prediction mode c1, the conversion mode t1, and the encoding mode m1 are applied to the current block according to the first combining mode.
  • When the combining mode index is 110, the luma intra prediction mode l1, the chroma intra prediction mode c1, and the conversion mode t1 are applied to the current block according to the second combining mode, and the encoding mode m1 is determined not to be used.
  • When the combining mode index is 111, the luma intra prediction mode l2, the chroma intra prediction mode c2, the conversion mode t2, and the encoding mode m1 are applied to the current block according to the third combining mode.
  • The combining mode index of the most frequently used combining mode among the first combining mode, the second combining mode, and the third combining mode may be determined to be 10.
  • When the combining mode index is 0, the luma intra prediction mode, the chroma intra prediction mode, and the conversion mode are determined based on the luma intra prediction mode information, the chroma intra prediction mode information, and the conversion mode information obtained from the bitstream, and whether or not the encoding mode m1 is applied is determined according to the m1 information.
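  • A sketch of parsing this index-only signalling bin by bin is given below; the mapping of bin strings to the three combining modes mirrors the example above and the helper is an illustration, not disclosed syntax.

      def parse_combining_mode_index(bins):
          """bins: list of 0/1 bins read in order; the first bin plays the role of the combining mode flag."""
          if bins[0] == 0:
              return None                     # no combining mode: individual mode information follows
          if bins[1] == 0:
              return "first combining mode"   # index 10
          if bins[2] == 0:
              return "second combining mode"  # index 110
          return "third combining mode"       # index 111

      for code in ([0], [1, 0], [1, 1, 0], [1, 1, 1]):
          print(code, "->", parse_combining_mode_index(code))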
  • FIG. 20 shows a method of determining the coding mode of the current block when the combining mode includes information on whether the encoding modes m1 and m3 are applied and information on the detail mode of the encoding mode m2.
  • The first combining mode includes an application signal of the encoding mode m1, the detail mode 1 for the encoding mode m2, and a non-application signal of the encoding mode m3.
  • The second combining mode includes an application signal of the encoding mode m1, the detail mode 2 for the encoding mode m2, and a non-application signal of the encoding mode m3.
  • The third combining mode includes an application signal of the encoding mode m1, the detail mode 1 for the encoding mode m2, and an application signal of the encoding mode m3.
  • The fourth combining mode includes a non-application signal of the encoding mode m1, the detail mode 1 for the encoding mode m2, and an application signal of the encoding mode m3.
  • When the combining mode flag is 1 and the combining mode index is 00, the encoding mode m1 and the detail mode 1 of the encoding mode m2 are applied to the current block according to the first combining mode, and the encoding mode m3 is not applied to the current block.
  • When the combining mode flag is 1 and the combining mode index is 01, the encoding mode m1 and the detail mode 2 of the encoding mode m2 are applied to the current block according to the second combining mode, and the encoding mode m3 is not applied to the current block.
  • When the combining mode flag is 1 and the combining mode index is 10, the encoding mode m1, the detail mode 1 of the encoding mode m2, and the encoding mode m3 are applied to the current block according to the third combining mode.
  • When the combining mode flag is 1 and the combining mode index is 11, the detail mode 1 of the encoding mode m2 and the encoding mode m3 are applied to the current block according to the fourth combining mode, and the encoding mode m1 is not applied to the current block.
  • When the combining mode flag is 0, encoding information for the encoding mode m1, encoding information for the encoding mode m2, and encoding information for the encoding mode m3 are obtained from the bitstream. Whether the encoding modes m1 and m3 are applied and the detail mode of the encoding mode m2 are determined according to the obtained encoding information.
  • FIG. 21A shows a method of determining an encoding mode according to a combining mode in an inter slice (P slice or B slice), and FIG. 21B shows a method of determining an encoding mode according to a combining mode in an intra slice (I slice).
  • In FIG. 21A, the combining mode includes the luma intra prediction mode l, the chroma intra prediction mode c, and the encoding modes m1 and m2.
  • When the combining mode flag is 1, the luma intra prediction mode l, the chroma intra prediction mode c, and the encoding modes m1 and m2 are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode, chroma intra prediction mode information for the chroma intra prediction mode, encoding information for the encoding mode m1, and encoding information for the encoding mode m2 are obtained from the bitstream.
  • The luma intra prediction mode and the chroma intra prediction mode of the current block are determined according to the luma intra prediction mode information and the chroma intra prediction mode information, and whether the encoding modes m1 and m2 are applied is determined according to the encoding information for the encoding modes m1 and m2.
  • In FIG. 21B, the combining mode includes the luma intra prediction mode l and the encoding modes m1 and m2.
  • When the combining mode flag is 1, the luma intra prediction mode l and the encoding modes m1 and m2 are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode, encoding information for the encoding mode m1, and encoding information for the encoding mode m2 are obtained from the bitstream.
  • The luma intra prediction mode of the current block is determined according to the luma intra prediction mode information, and whether the encoding modes m1 and m2 are applied is determined according to the encoding information for the encoding modes m1 and m2.
  • At least one of the encoding modes m1 and m2 in Figs. 21A and 21B may be a post filter applied after intra prediction.
  • FIG. 22A shows a method of determining an encoding mode according to a combining mode including a DC mode in an inter slice (P slice or B slice), and FIG. 22B shows a method of determining an encoding mode according to a combining mode including a DC mode in an intra slice (I slice).
  • In FIG. 22A, the combining mode includes a DC mode as the luma intra prediction mode, an LM chroma mode as the chroma intra prediction mode, and the MPI mode.
  • When the combining mode flag is 1, the DC mode as the luma intra prediction mode, the LM chroma mode as the chroma intra prediction mode, and the MPI mode are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode, chroma intra prediction mode information for the chroma intra prediction mode, and encoding information for the MPI mode are obtained from the bitstream.
  • The luma intra prediction mode, the chroma intra prediction mode, and whether the MPI mode is applied are determined according to the luma intra prediction mode information, the chroma intra prediction mode information, and the encoding information for the MPI mode, respectively.
  • In FIG. 22B, the combining mode includes a DC mode as the luma intra prediction mode, together with the MPI mode.
  • When the combining mode flag is 1, the DC mode as the luma intra prediction mode and the MPI mode are applied to the current block.
  • When the combining mode flag is 0, luma intra prediction mode information for the luma intra prediction mode and encoding information for the MPI mode are obtained from the bitstream.
  • The luma intra prediction mode of the current block and whether the MPI mode is applied are determined according to the luma intra prediction mode information and the encoding information for the MPI mode, respectively.
  • In FIGS. 22A and 22B the MPI mode is applied, but the MIP mode may be applied instead of the MPI mode.
  • In FIG. 22B the combining mode is applied in the intra slice, but according to an embodiment the combining mode may not be applied in the intra slice.
  • FIG. 23 shows a syntax of a coding unit according to an embodiment of the combining mode.
  • According to an embodiment, lccm_flag[x0][y0] can be obtained even when slice_type is an I slice type, unlike the syntax disclosed in FIG. 23.
  • When the combining mode is not applied to the current block, the general decoding process proceeds.
  • the MPM mode information and the PIMS mode information are parsed, and the intra prediction mode of the current block can be determined.
  • the MPI mode information and the MIP mode information indicating the intra prediction filtering mode are parsed, and the intra prediction filtering mode of the current block can be determined.
  • When lccm_flag[x0][y0] is 1, ipm[x0][y0][0] indicating the luma intra prediction mode for the current block located at (x0, y0) is determined to be LCCM_LUMA, which is the luma intra prediction mode according to the combining mode, without parsing the coding information.
  • LCCM_LUMA may be the DC mode.
  • ipm [x0] [y0] [1] indicating the chroma intra prediction mode for the current block located at (x0, y0) is determined as LCCM_CHROMA, which is a chroma intra prediction mode according to the combining mode.
  • LCCM_CHROMA may be the LM chroma mode.
  • mpi_idx[x0][y0], indicating whether or not the MPI mode is applied to the current block located at (x0, y0), is determined to be 1. Therefore, the MPI mode is applied to the current block.
  • According to an embodiment, the MIP mode may be applied to the current block instead of the MPI mode.
  • According to an embodiment, lccm_flag[x0][y0] may be obtained even when slice_type is an I slice type.
  • In that case, ipm[x0][y0][0] indicating the luma intra prediction mode for the current block can be determined to be LCCM_LUMA.
  • Also, mpi_idx[x0][y0] indicating whether or not the MPI mode is applied to the current block can be determined to be 1.
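  • The effect of lccm_flag in the syntax of FIG. 23 can be sketched as below: when the flag is 1, the mode-related syntax elements are set to the values tied to the combining mode without being parsed. The parsing callbacks and the concrete LCCM_LUMA/LCCM_CHROMA values are illustrative assumptions.

      LCCM_LUMA = "DC"           # luma intra prediction mode tied to the combining mode (example value)
      LCCM_CHROMA = "LM_CHROMA"  # chroma intra prediction mode tied to the combining mode (example value)

      def decode_cu_modes(lccm_flag, parse_luma_mode, parse_chroma_mode, parse_mpi_idx):
          if lccm_flag == 1:
              ipm = {0: LCCM_LUMA, 1: LCCM_CHROMA}  # ipm[0]: luma mode, ipm[1]: chroma mode, no bits parsed
              mpi_idx = 1                           # MPI filtering applied without parsing mpi_idx
          else:
              ipm = {0: parse_luma_mode(), 1: parse_chroma_mode()}
              mpi_idx = parse_mpi_idx()
          return ipm, mpi_idx

      print(decode_cu_modes(1, lambda: "PLANAR", lambda: "DM", lambda: 0))  # ({0: 'DC', 1: 'LM_CHROMA'}, 1)
      print(decode_cu_modes(0, lambda: "PLANAR", lambda: "DM", lambda: 0))  # ({0: 'PLANAR', 1: 'DM'}, 0)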
  • FIG. 24 shows a video decoding method 2400 according to an embodiment for determining a coding mode according to a combining mode.
  • First, a combining mode flag of the current block, indicating whether a plurality of coding modes included in the combining mode are applied to the current block, is obtained from the bitstream.
  • the combining mode may include one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes.
  • The combining mode may include a DC mode as the luma intra prediction mode and an LM chroma mode as the chroma intra prediction mode.
  • the combining mode may include a first conversion mode. Also, the combining mode may include an intra prediction filter mode for filtering the intra prediction result of the current block.
  • The intra prediction filter mode may be an MPI mode or a MIP mode.
  • The combining mode can be determined according to the slice type of the current slice including the current block. For example, when the current slice is a P slice type or a B slice type, the combining mode may include one of a plurality of luma intra prediction modes and one of a plurality of chroma intra prediction modes. However, when the current slice is an I slice type, the combining mode may include only the first luma intra prediction mode.
  • According to an embodiment, the combining mode flag of the current block may not be obtained from the bitstream. In that case, the combining mode may not be applied.
  • A combining mode permission flag indicating whether or not the combining mode is allowed for a data unit including the current block can be obtained from the bitstream.
  • When the combining mode permission flag indicates that the combining mode is allowed for the data unit including the current block, the combining mode flag for the current block can be obtained.
  • Conversely, when the combining mode permission flag indicates that the combining mode is not allowed for the data unit including the current block, the combining mode flag for the current block is not obtained.
  • In step 2420, when the combining mode flag of the current block indicates that the plurality of coding modes included in the combining mode are applied to the current block, the plurality of coding modes of the combining mode are determined as the coding modes of the current block.
  • The luma intra prediction mode and the chroma intra prediction mode included in the combining mode can be determined as the luma intra prediction mode and the chroma intra prediction mode of the current block. Also, the conversion mode included in the combining mode can be determined as the conversion mode of the current block.
  • When the combining mode includes an intra prediction filter mode, it can be determined that the intra prediction filter mode is applied to the current block.
  • When there are a plurality of combining modes, a combining mode index indicating one of the plurality of combining modes can be obtained.
  • The plurality of intra prediction modes of the combining mode indicated by the combining mode index may be determined as the encoding modes of the current block.
  • When the combining mode flag of the current block indicates that the plurality of encoding modes included in the combining mode are not applied to the current block, encoding mode information is obtained from the bitstream, and the encoding modes of the current block can be determined according to the encoding mode information.
  • In step 2430, the current block is decoded according to the encoding modes of the current block.
  • The functions of the video decoding apparatus 1600 described with reference to FIG. 16 may be included in the video decoding method 2400 of FIG. 24.
  • FIG. 25 shows a video encoding apparatus 2500 according to an embodiment for determining whether to use the joint mode according to the encoding mode.
  • the video coding apparatus 2500 includes a coding unit 2510 and a bitstream generating unit 2520.
  • In FIG. 25, the encoding unit 2510 and the bitstream generating unit 2520 are represented as separate constituent units. However, according to an embodiment, the encoding unit 2510 and the bitstream generating unit 2520 may be combined and implemented as a single unit.
  • In FIG. 25, the encoding unit 2510 and the bitstream generating unit 2520 are represented as constituent units located in one apparatus, but the apparatuses performing the respective functions of the encoding unit 2510 and the bitstream generating unit 2520 do not need to be physically adjacent to each other. Therefore, the encoding unit 2510 and the bitstream generating unit 2520 may be distributed according to an embodiment.
  • The encoding unit 2510 and the bitstream generating unit 2520 may be implemented by one processor according to an embodiment, or by a plurality of processors according to another embodiment.
  • the encoding unit 2510 can determine the encoding modes optimized for the current block.
  • the coding unit 2510 generates a combining mode flag of the current block according to whether or not the coding modes of the current block are included in the combining mode.
  • The embodiments of the combining mode discussed with reference to FIGS. 16 to 24 may be applied to the video encoding apparatus 2500 of FIG. 25.
  • The encoding unit 2510 can be set to generate the combining mode flag of the current block only when the current slice is a P slice type or a B slice type. Conversely, when the current slice is an I slice type, the encoding unit 2510 can be set not to generate the combining mode flag of the current block.
  • The encoding unit 2510 may generate a combining mode index indicating one of the plurality of combining modes.
  • The encoding unit 2510 can generate a combining mode permission flag indicating whether the combining mode is allowed for an upper data unit of the current block, according to whether a block to which the combining mode is applied is present in the upper data unit of the current block.
  • the encoding unit 2510 can generate the encoding mode information about the encoding modes of the current block when the encoding modes of the current block are not included in the combining mode.
  • the bitstream generation unit 2520 generates a bitstream including a combining mode flag.
  • the functions performed by the coding unit 2510 and the bitstream generating unit 2520 of FIG. 25 may be performed by the bitstream generating unit 120 of FIG. 1A.
  • the video encoding apparatus 2500 of FIG. 25 may perform a video encoding method corresponding to the video decoding method performed by the video decoding apparatus 1600 of FIG.
  • FIG. 26 shows a video encoding method 2600 according to an embodiment for determining whether to use a combining mode according to an encoding mode.
  • In step 2610, the encoding modes of the current block are determined.
  • In step 2620, a combining mode flag of the current block is generated depending on whether the encoding modes of the current block are the same as the plurality of encoding modes of the combining mode.
  • In step 2630, a bitstream including the combining mode flag is generated.
  • The functions of the video encoding apparatus 2500 described with reference to FIG. 25 may be included in the video encoding method 2600 of FIG. 26.
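  • The encoder-side behaviour of steps 2610 to 2630 can be sketched as below: the chosen coding modes of the block are compared with the combining mode, and either the flag alone or the flag followed by the individual mode information is written. The writer stub and its method names are placeholders, not disclosed syntax.

      COMBINING_MODE = {"luma": "DC", "chroma": "LM_CHROMA"}  # example combining mode

      class Writer:  # minimal stub that records the symbols that would be written to the bitstream
          def __init__(self): self.symbols = []
          def write_flag(self, v): self.symbols.append(("combining_mode_flag", v))
          def write_luma_intra_mode(self, m): self.symbols.append(("luma_intra_mode", m))
          def write_chroma_intra_mode(self, m): self.symbols.append(("chroma_intra_mode", m))

      def encode_block_modes(chosen_modes, writer):
          if chosen_modes == COMBINING_MODE:
              writer.write_flag(1)  # the flag alone signals every mode in the combination
          else:
              writer.write_flag(0)
              writer.write_luma_intra_mode(chosen_modes["luma"])
              writer.write_chroma_intra_mode(chosen_modes["chroma"])

      w = Writer()
      encode_block_modes({"luma": "DC", "chroma": "LM_CHROMA"}, w)
      print(w.symbols)  # [('combining_mode_flag', 1)]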
  • According to the video encoding technique based on coding units of a tree structure described above, video data of a spatial domain is encoded for each coding unit of the tree structure. According to the video decoding technique based on coding units of the tree structure, decoding is performed for each maximum coding unit, the video data of the spatial domain is reconstructed, and the video, which is a sequence of pictures, can be reconstructed.
  • the restored video can be played back by the playback apparatus, stored in a storage medium, or transmitted over a network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video decoding method is disclosed, comprising the steps of: obtaining, from a bitstream, a combining mode flag of a current block indicating whether a plurality of encoding modes included in a combining mode are applied to the current block at the same time; determining the plurality of encoding modes of the combining mode to be the encoding modes of the current block when the combining mode flag of the current block indicates that the plurality of encoding modes included in the combining mode are applied to the current block at the same time; and decoding the current block according to the encoding modes of the current block.
PCT/KR2018/003821 2017-11-14 2018-03-30 Procédé de codage et appareil associé, et procédé de décodage et appareil associé Ceased WO2019098464A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020207003130A KR20200074081A (ko) 2017-11-14 2018-03-30 부호화 방법 및 그 장치, 복호화 방법 및 그 장치

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762585774P 2017-11-14 2017-11-14
US62/585,774 2017-11-14

Publications (1)

Publication Number Publication Date
WO2019098464A1 true WO2019098464A1 (fr) 2019-05-23

Family

ID=66539732

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/003821 Ceased WO2019098464A1 (fr) 2017-11-14 2018-03-30 Procédé de codage et appareil associé, et procédé de décodage et appareil associé

Country Status (2)

Country Link
KR (1) KR20200074081A (fr)
WO (1) WO2019098464A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112055200A (zh) * 2019-06-05 2020-12-08 华为技术有限公司 Mpm列表构建方法、色度块的帧内预测模式获取方法及装置
CN112616057A (zh) * 2019-10-04 2021-04-06 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及存储介质
WO2021137577A1 (fr) * 2019-12-31 2021-07-08 엘지전자 주식회사 Procédé et appareil de codage/décodage d'image en vue de la réalisation d'une prédiction sur la base d'un type de mode de prédiction reconfiguré d'un nœud terminal, et procédé de transmission de flux binaire
CN113940065A (zh) * 2019-06-24 2022-01-14 佳能株式会社 用于编码和解码视频样本的块的方法、设备和系统
CN113950834A (zh) * 2019-05-31 2022-01-18 交互数字Vc控股公司 用于隐式多变换选择的变换选择
CN114145017A (zh) * 2019-06-13 2022-03-04 Lg 电子株式会社 基于帧内预测模式转换的图像编码/解码方法和设备,以及发送比特流的方法
US20220217366A1 (en) * 2019-04-27 2022-07-07 Wilus Institute Of Standards And Technology Inc. Method and device for processiong video signal on basis of intra prediction
CN114786019A (zh) * 2019-07-07 2022-07-22 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及存储介质
US20230046175A1 (en) * 2019-06-25 2023-02-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Mapping method, encoder, decoder and computer storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100081148A (ko) * 2009-01-05 2010-07-14 에스케이 텔레콤주식회사 블록 모드 부호화/복호화 방법 및 장치와 그를 이용한 영상부호화/복호화 방법 및 장치
WO2012128453A1 (fr) * 2011-03-21 2012-09-27 엘지전자 주식회사 Procédé et dispositif de codage/décodage d'images
WO2013109026A1 (fr) * 2012-01-18 2013-07-25 엘지전자 주식회사 Procédé et dispositif destinés au codage/décodage entropique
KR20150113524A (ko) * 2014-03-31 2015-10-08 인텔렉추얼디스커버리 주식회사 향상된 화면 내 블록 복사 기반의 예측 모드를 이용한 영상 복호화 장치 및 그 방법
WO2016072775A1 (fr) * 2014-11-06 2016-05-12 삼성전자 주식회사 Procédé et appareil de codage de vidéo, et procédé et appareil de décodage de vidéo

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220217366A1 (en) * 2019-04-27 2022-07-07 Wilus Institute Of Standards And Technology Inc. Method and device for processiong video signal on basis of intra prediction
US12323605B2 (en) * 2019-04-27 2025-06-03 Humax Co., Ltd. Method and device for processing video signal on basis of intra prediction
US12267529B2 (en) 2019-05-31 2025-04-01 Interdigital Vc Holdings, Inc. Transform selection for implicit multiple transform selection
CN113950834A (zh) * 2019-05-31 2022-01-18 交互数字Vc控股公司 用于隐式多变换选择的变换选择
US12238297B2 (en) 2019-06-05 2025-02-25 Huawei Technologies Co., Ltd. Method for constructing MPM list, method for obtaining intra prediction mode of chroma block, and apparatus
CN112055200A (zh) * 2019-06-05 2020-12-08 华为技术有限公司 Mpm列表构建方法、色度块的帧内预测模式获取方法及装置
CN114145017A (zh) * 2019-06-13 2022-03-04 Lg 电子株式会社 基于帧内预测模式转换的图像编码/解码方法和设备,以及发送比特流的方法
CN113940065A (zh) * 2019-06-24 2022-01-14 佳能株式会社 用于编码和解码视频样本的块的方法、设备和系统
US12309389B2 (en) 2019-06-24 2025-05-20 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding a block of video samples
US12238298B2 (en) 2019-06-25 2025-02-25 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Mapping method, encoder, decoder and computer storage medium
US11902538B2 (en) * 2019-06-25 2024-02-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Mapping method, encoder, decoder and computer storage medium
US20230046175A1 (en) * 2019-06-25 2023-02-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Mapping method, encoder, decoder and computer storage medium
US12126836B2 (en) 2019-07-07 2024-10-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Picture prediction method, encoder, decoder, and storage medium
CN114786019A (zh) * 2019-07-07 2022-07-22 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及存储介质
CN112616057A (zh) * 2019-10-04 2021-04-06 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及存储介质
JP7444998B2 (ja) 2019-12-31 2024-03-06 エルジー エレクトロニクス インコーポレイティド リーフノードの再設定された予測モードタイプに基づいて予測を行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法
JP2024045744A (ja) * 2019-12-31 2024-04-02 エルジー エレクトロニクス インコーポレイティド リーフノードの再設定された予測モードタイプに基づいて予測を行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法
US12113975B2 (en) 2019-12-31 2024-10-08 Lg Electronics Inc. Image encoding/decoding method and apparatus for performing prediction on basis of reconfigured prediction mode type of leaf node, and bitstream transmission method
JP2023509053A (ja) * 2019-12-31 2023-03-06 エルジー エレクトロニクス インコーポレイティド リーフノードの再設定された予測モードタイプに基づいて予測を行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法
WO2021137577A1 (fr) * 2019-12-31 2021-07-08 엘지전자 주식회사 Procédé et appareil de codage/décodage d'image en vue de la réalisation d'une prédiction sur la base d'un type de mode de prédiction reconfiguré d'un nœud terminal, et procédé de transmission de flux binaire

Also Published As

Publication number Publication date
KR20200074081A (ko) 2020-06-24

Similar Documents

Publication Publication Date Title
WO2019098464A1 (fr) Procédé de codage et appareil associé, et procédé de décodage et appareil associé
WO2017209394A1 (fr) Procédés et appareils de codage et de décodage de vidéo selon l'ordre de codage
WO2019168244A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2019135601A1 (fr) Procédé de codage et appareil associé, et procédé de décodage et appareil associé
WO2011019253A2 (fr) Procédé et dispositif pour encoder des vidéos en tenant compte de l'ordre de balayage d’unités d'encodage ayant une structure hiérarchique et procédé et dispositif pour décoder des vidéos en tenant compte de l'ordre de balayage des unités d'encodage ayant une structure hiérarchique
WO2012005520A2 (fr) Procédé et appareil d'encodage vidéo au moyen d'une fusion de blocs, et procédé et appareil de décodage vidéo au moyen d'une fusion de blocs
WO2011049396A2 (fr) Procédé et appareil de codage vidéo et procédé et appareil de décodage vidéo sur la base de la structure hiérarchique de l'unité de codage
WO2019066174A1 (fr) Procédé et dispositif de codage, et procédé et dispositif de décodage
WO2018026118A1 (fr) Procédé de codage/décodage d'images
WO2018012808A1 (fr) Procédé et dispositif de prédiction intra de chrominance
WO2017090967A1 (fr) Procédé d'encodage de séquence d'encodage et dispositif correspondant, et procédé de décodage et dispositif correspondant
WO2017188780A2 (fr) Procédé et appareil de codage/décodage de signal vidéo
WO2019216716A2 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2017204532A1 (fr) Procédé de codage/décodage d'images et support d'enregistrement correspondant
WO2018030599A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction intra et dispositif associé
WO2019240458A1 (fr) Procédé de codage et appareil correspondant et procédé de décodage et appareil correspondant
WO2015122549A1 (fr) Procédé et appareil de traitement d'une vidéo
WO2020096427A1 (fr) Procédé de codage/décodage de signal d'image et appareil associé
WO2014003519A1 (fr) Procédé et appareil de codage de vidéo évolutive et procédé et appareil de décodage de vidéo évolutive
WO2017142327A1 (fr) Procédé de prédiction intra pour réduire les erreurs de prédiction intra et dispositif à cet effet
WO2020130745A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2018093184A1 (fr) Procédé et dispositif de traitement de signal vidéo
EP3437318A1 (fr) Procédés et appareils de codage et de décodage de vidéo selon l'ordre de codage
WO2018105759A1 (fr) Procédé de codage/décodage d'image et appareil associé
WO2021054811A1 (fr) Procédé et appareil de codage/décodage d'images, et support d'enregistrement sauvegardant un flux binaire

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18877357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18877357

Country of ref document: EP

Kind code of ref document: A1