WO2019066472A1 - Image encoding method and apparatus, and image decoding method and apparatus - Google Patents
Image encoding method and apparatus, and image decoding method and apparatus
- Publication number
- WO2019066472A1 (PCT/KR2018/011390)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- current
- block
- encoding unit
- encoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
- H04N19/635—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by filter definition or implementation details
Definitions
- the method and apparatus according to an exemplary embodiment may encode or decode an image using various types of encoding units included in an image.
- The method and apparatus according to one embodiment include an intra prediction method and apparatus.
- Various data units may be used to compress the image and there may be a containment relationship between these data units.
- a data unit can be divided by various methods, and an optimized data unit is determined according to characteristics of an image, so that an image can be encoded or decoded.
- According to an embodiment, an image decoding method includes: obtaining information on a transform coefficient of a current block from a bitstream; generating an intra prediction value of a current sample based on a position of the current sample in the current block and an intra prediction mode of the current block; determining at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block; determining a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample based on the position of the current sample in the current block; generating a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample; generating a prediction block of the current block including the filtered prediction sample value of the current sample; obtaining a residual block of the current block based on the obtained information on the transform coefficient of the current block; and reconstructing the current block based on the prediction block of the current block and the residual block of the current block.
- According to an embodiment, the generating of the intra prediction value of the current sample based on the position of the current sample in the current block and the intra prediction mode of the current block includes: determining an original reference sample corresponding to the current sample based on the position of the current sample and the intra prediction mode of the current block; and generating the intra prediction value of the current sample based on a sample value of the original reference sample.
- According to an embodiment, the first weight for the filtering reference sample may be determined based on a distance between the filtering reference sample and the current sample.
- According to an embodiment, the first weight for the filtering reference sample may become smaller as the distance between the filtering reference sample and the current sample increases.
- According to an embodiment, the filtering reference sample may include at least one of an original reference sample located in the horizontal direction of the current sample and an original reference sample located in the vertical direction of the current sample.
- According to an embodiment, when the intra prediction mode of the current block is an angular mode, the filtering reference sample may include at least one of left and upper adjacent samples of the current block located on a line passing through the current sample, the line being directed in a direction opposite to the prediction direction indicated by the angular mode.
- According to an embodiment, the determining of the first weight for the filtering reference sample and the second weight for the intra prediction value of the current sample, and the generating of the filtered prediction sample value of the current sample, may include: determining at least one second intra prediction mode; determining the first weight and the second weight by using the at least one second intra prediction mode; and generating the filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight.
- the at least one second intra prediction mode may be determined for each picture unit or may be determined for each block.
- The at least one second intra prediction mode may be determined as at least one of the intra prediction mode of the current block, an intra prediction mode indicating a direction opposite to the prediction direction indicated by the intra prediction mode of the current block, a horizontal mode, and a vertical mode.
- the first weight and the second weight may be normalized values.
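- As a non-normative illustration, the relationship described above can be written as the following formula, where $p(x,y)$ denotes the intra prediction value of the current sample at position $(x,y)$, $r_f$ the sample value of the filtering reference sample, and $w_1$, $w_2$ the first and second weights (these symbols are introduced here only for explanation and do not appear in the original disclosure):

$$\hat{p}(x,y) = \frac{w_1(x,y)\, r_f + w_2(x,y)\, p(x,y)}{w_1(x,y) + w_2(x,y)}$$

- When $w_1 + w_2$ is normalized to a fixed power of two, the division reduces to a bit shift, and $w_1$ decreases as the distance between the filtering reference sample and the current sample increases.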
- According to an embodiment, an image encoding method includes: generating an intra prediction value of a current sample based on a position of the current sample in a current block and an intra prediction mode of the current block; determining at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block; determining a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample based on the position of the current sample in the current block; generating a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample; generating a prediction block of the current block including the filtered prediction sample value of the current sample; and encoding information on transform coefficients of the current block based on the prediction block of the current block.
- According to an embodiment, an apparatus for decoding an image obtains information on a transform coefficient of a current block from a bitstream, generates an intra prediction value of a current sample based on a position of the current sample in the current block and an intra prediction mode of the current block, determines at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, determines a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample, generates a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight, generates a prediction block of the current block including the filtered prediction sample value of the current sample, obtains a residual block of the current block based on the obtained information on the transform coefficient of the current block, and reconstructs the current block based on the prediction block of the current block and the residual block of the current block.
- the computer program for the image decoding method according to an embodiment of the present disclosure can be recorded in a computer-readable recording medium.
- FIG. 1A shows a block diagram of an image decoding apparatus according to various embodiments.
- FIG. 1B shows a flow diagram of a video decoding method according to various embodiments.
- FIG. 1C shows a flow diagram of a video decoding method according to various embodiments.
- FIG. 1D shows a block diagram of an image decoding unit according to various embodiments.
- FIG. 2A shows a block diagram of an image encoding apparatus according to various embodiments.
- FIG. 2B shows a flowchart of the image encoding method according to various embodiments.
- FIG. 2C shows a flowchart of an image encoding method according to various embodiments.
- FIG. 2D shows a block diagram of an image encoding unit according to various embodiments.
- FIG. 3 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a current encoding unit according to an embodiment.
- FIG. 4 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.
- FIG. 5 illustrates a process in which an image decoding apparatus divides an encoding unit based on at least one of block type information and split mode information, according to an embodiment.
- FIG. 6 illustrates a method for an image decoding apparatus to determine a predetermined encoding unit among odd number of encoding units according to an embodiment.
- FIG. 7 illustrates a sequence in which a plurality of coding units are processed when an image decoding apparatus determines a plurality of coding units by dividing a current coding unit according to an exemplary embodiment.
- FIG. 8 illustrates a process of determining that a current encoding unit is divided into an odd number of encoding units when the image decoding apparatus cannot process the encoding units in a predetermined order, according to an embodiment.
- FIG. 9 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a first encoding unit according to an embodiment.
- FIG. 10 illustrates shapes into which a second encoding unit can be divided being restricted when a non-square second encoding unit, determined by the image decoding apparatus dividing a first encoding unit, satisfies a predetermined condition, according to an embodiment.
- FIG. 11 illustrates a process in which an image decoding apparatus divides a square encoding unit when the split mode information cannot indicate division into four square encoding units, according to an embodiment.
- FIG. 12 illustrates that the processing order among a plurality of coding units may be changed according to a division process of a coding unit according to an exemplary embodiment.
- FIG. 13 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an embodiment.
- FIG. 14 illustrates a depth that can be determined according to the type and size of coding units, and an index (PID) for distinguishing the coding units, according to an embodiment.
- FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
- FIG. 16 shows a processing block serving as a reference for determining a determination order of a reference encoding unit included in a picture according to an embodiment.
- 17 is a view for explaining intra prediction modes according to an embodiment.
- FIG. 18 is a diagram for explaining a method of generating a reconstructed sample by using an original reference sample, according to an embodiment of the present disclosure.
- FIGS. 19A and 19B are diagrams for explaining how a video decoding apparatus generates reconstructed samples by using original reference samples according to a prediction direction of an intra prediction mode of a current block, according to an embodiment of the present disclosure.
- FIG. 20 is a diagram for explaining a method of generating a reconstructed sample by using an original reference sample, according to an embodiment of the present disclosure.
- FIG. 21 is a diagram for explaining a process in which an image decoding apparatus performs intra prediction on a current block by using original reference samples and reconstructed samples, according to an embodiment of the present disclosure.
- FIG. 22 is a diagram for explaining a process of performing weighted prediction by using an original reference sample and reconstructed reference samples of a left adjacent line and an upper adjacent line.
- FIG. 23 is a diagram for explaining a process of performing weighted prediction by using prediction values generated by performing intra prediction with an original reference sample and with reconstructed reference samples of a left adjacent line and an upper adjacent line.
- FIG. 24 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is one of a DC mode, a planar mode, and a vertical mode.
- FIG. 25 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is a diagonal mode in the lower-left direction.
- FIG. 26 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is a diagonal mode in the upper-right direction.
- FIG. 27 is a diagram for explaining a process in which a video decoding apparatus performs position-based intra prediction of a current sample when the intra prediction mode of the current block is an angular mode adjacent to the diagonal mode in the lower-left direction.
- FIG. 28 is a diagram for explaining a process in which a video decoding apparatus performs position-based intra prediction of a current sample when the intra prediction mode of the current block is an angular mode adjacent to the diagonal mode in the upper-right direction.
- FIG. 29 is a diagram illustrating reference samples that can be used for intra prediction when the encoding order between coding units is determined in a forward or reverse direction based on an encoding order flag, according to an embodiment of the present disclosure.
- According to an embodiment, a video decoding method includes: obtaining information on a transform coefficient of a current block from a bitstream; generating an intra prediction value of a current sample based on a position of the current sample in the current block and an intra prediction mode of the current block; determining at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block; determining a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample based on the position of the current sample in the current block; generating a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight; generating a prediction block of the current block including the filtered prediction sample value of the current sample; obtaining a residual block of the current block based on the obtained information on the transform coefficient of the current block; and reconstructing the current block based on the prediction block of the current block and the residual block of the current block.
- According to an embodiment, a video encoding method includes: generating an intra prediction value of a current sample based on a position of the current sample in a current block and an intra prediction mode of the current block; determining at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block; determining a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample based on the position of the current sample in the current block; generating a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight; generating a prediction block of the current block including the filtered prediction sample value of the current sample; and encoding information on transform coefficients of the current block based on the prediction block of the current block.
- According to an embodiment, a video decoding apparatus obtains information on a transform coefficient of a current block from a bitstream, generates an intra prediction value of a current sample based on a position of the current sample in the current block and an intra prediction mode of the current block, determines at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, determines a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample, generates a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight, generates a prediction block of the current block including the filtered prediction sample value of the current sample, obtains a residual block of the current block based on the obtained information on the transform coefficient of the current block, and reconstructs the current block based on the prediction block of the current block and the residual block of the current block.
- Provided is a computer-readable recording medium on which a program for implementing the method according to various embodiments is recorded.
- The term "part" used in the specification means a software or hardware component, and a "part" performs certain roles. However, a "part" is not limited to software or hardware. A "part" may be configured to reside on an addressable storage medium and may be configured to be executed by one or more processors.
- Thus, as an example, a "part" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- The functions provided in the components and "parts" may be combined into a smaller number of components and "parts" or further separated into additional components and "parts".
- The term "processor" should be interpreted broadly to include a general-purpose processor, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a controller, a microcontroller, a state machine, and the like. In some circumstances, a "processor" may refer to an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), or the like.
- The term "processor" may also refer to a combination of processing devices, for example, a combination of a DSP and a microprocessor, a combination of a plurality of microprocessors, a combination of one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- memory should be broadly interpreted to include any electronic component capable of storing electronic information.
- The term "memory" may refer to various types of processor-readable media, such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), flash memory, magnetic or optical data storage devices, registers, and the like.
- A memory is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory.
- the memory integrated in the processor is in electronic communication with the processor.
- the " image” may be a static image such as a still image of a video or a dynamic image such as a moving image, i.e., the video itself.
- The term "sample" means data assigned to a sampling position of an image, that is, data to be processed.
- For example, pixel values of an image in the spatial domain and transform coefficients in the transform domain may be samples.
- a unit including at least one of these samples may be defined as a block.
- an image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method according to an embodiment will be described in detail.
- A method of determining a data unit of an image according to an embodiment will be described with reference to FIGS. 3 to 16. With reference to FIGS. 1, 2, and 17 to 29, an encoding or decoding method and apparatus for determining a filtering reference sample to be filtered and a weight of a filter according to an embodiment, and for adaptively performing intra prediction based on the filtering reference sample and the weight of the filter, will be described.
- Hereinafter, a method and apparatus for encoding or decoding an image by adaptively performing intra prediction based on various types of encoding units, according to an embodiment of the present disclosure, will be described with reference to FIGS. 1 and 2.
- FIG. 1A shows a block diagram of an image decoding apparatus according to various embodiments.
- the image decoding apparatus 100 may include an acquisition unit 105, an intra prediction unit 110, and an image decoding unit 115.
- the acquisition unit 105, the intra prediction unit 110, and the image decoding unit 115 may include at least one processor.
- the acquisition unit 105, the intra prediction unit 110, and the image decoding unit 115 may include a memory for storing instructions to be executed by at least one processor.
- The image decoding unit 115 may be implemented as hardware separate from the acquisition unit 105 and the intra prediction unit 110, or may include the acquisition unit 105 and the intra prediction unit 110.
- the obtaining unit 105 may obtain information on the transform coefficient of the current block from the bitstream.
- the obtaining unit 105 may obtain information on the prediction mode of the current block from the bitstream and information on the intra prediction mode of the current block.
- The acquisition unit 105 may obtain information indicating whether the prediction mode of the current block is an intra prediction mode or an inter prediction mode.
- the information on the intra prediction mode of the current block may be information on the intra prediction mode applied to the current block among the plurality of intra prediction modes.
- The intra prediction mode may be one of a DC mode, a planar mode, and at least one angular mode having a prediction direction.
- The angular modes include a horizontal mode, a vertical mode, and diagonal modes, and may also include a mode having a predetermined direction other than the horizontal, vertical, and diagonal directions.
- the number of angular modes may be 65 or 33.
- the intra prediction unit 110 can be activated when the prediction mode of the current block is the intra prediction mode.
- The intra prediction unit 110 may determine an original reference sample related value based on the position of the current sample in the current block and the intra prediction mode of the current block. That is, the intra prediction unit 110 may determine at least one original reference sample based on the position of the current sample in the current block and the intra prediction mode of the current block, and may determine the original reference sample related value based on the determined at least one original reference sample.
- The original reference sample may be a sample of a neighboring block of the current block; for example, the original reference sample may include a sample of a left neighboring block of the current block or a sample of an upper neighboring block.
- The original reference sample may include samples of a predetermined line in the vertical direction adjacent to the left side of the current block, or samples of a predetermined line in the horizontal direction adjacent to the upper side of the current block.
- The original reference sample is not limited to a sample of the left neighboring block or a sample of the upper neighboring block of the current block, and may include a sample of a lower neighboring block or a sample of a right neighboring block of the current block.
- The intra prediction unit 110 may generate an intra prediction value of the current sample based on the position of the current sample in the current block and the intra prediction mode of the current block. That is, the intra prediction unit 110 may determine an original reference sample corresponding to the current sample based on the position of the current sample and the intra prediction mode of the current block, and may generate the intra prediction value of the current sample based on the sample value of the original reference sample. In addition, the intra prediction unit 110 may determine at least one filtering reference sample to be filtered and a weight of a filter based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of a reference sample of the current block, and may determine a filtering reference sample related value based on the filtering reference sample and the weight of the filter.
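- As a rough illustration only (this is not the normative prediction process of the disclosure), determining an original reference sample by projecting the current sample position along the prediction direction, and taking its value as the intra prediction value, could look like the following sketch; the array layout, the `angle_per_sample` parameter, and the function name are assumptions introduced here:

```python
def intra_predict_sample(top_ref, x, y, angle_per_sample):
    """Nearest-sample sketch of directional intra prediction.

    top_ref: reference samples of the row above the current block,
             top_ref[0] being the sample directly above column 0.
    (x, y):  position of the current sample inside the current block.
    angle_per_sample: horizontal displacement (in samples) of the
             prediction direction per row; 0 corresponds to the
             vertical mode.
    """
    # Project the current sample onto the top reference row along the
    # prediction direction and take the nearest reference sample.
    offset = round((y + 1) * angle_per_sample)
    ref_index = max(0, min(x + offset, len(top_ref) - 1))  # clamp to the line
    return top_ref[ref_index]


# Tiny usage example: a 4-sample reference row and a vertical mode.
top_row = [100, 102, 104, 106]
print(intra_predict_sample(top_row, x=1, y=2, angle_per_sample=0))  # -> 102
```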
- For example, the intra prediction unit 110 may determine, as the filtering reference sample, at least one of a sample adjacent to the upper-left corner of the current block, a neighboring sample located above the current block, and a neighboring sample located to the left of the current block.
- For example, when the intra prediction mode of the current block is an angular mode, the intra prediction unit 110 may determine, as the filtering reference sample, at least one of the left and upper adjacent samples of the current block located on a line passing through the current sample in the current block. At this time, the line may be directed in a direction opposite to the prediction direction indicated by the angular mode.
- the intra prediction unit 110 can determine the number of taps of the filter to be applied to the filtering reference sample based on at least one of the intra prediction mode of the current block and the size of the current block.
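- For example, applying a symmetric smoothing filter to a reference line could be sketched as below; the 3-tap [1, 2, 1] kernel is an illustrative assumption, not a filter mandated by this disclosure, and in practice the tap count could depend on the intra prediction mode and the block size as described above:

```python
def filter_reference_line(ref, taps=(1, 2, 1)):
    """Apply a symmetric smoothing filter to a line of reference samples.

    The number of taps could in practice depend on the intra prediction
    mode and the size of the current block, as described above.
    """
    half = len(taps) // 2
    norm = sum(taps)
    out = []
    for i in range(len(ref)):
        acc = 0
        for k, w in enumerate(taps):
            # Repeat the edge samples instead of reading outside the line.
            j = min(max(i + k - half, 0), len(ref) - 1)
            acc += w * ref[j]
        out.append((acc + norm // 2) // norm)  # rounded integer division
    return out


print(filter_reference_line([100, 120, 80, 90]))  # -> [105, 105, 93, 88]
```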
- The intra prediction unit 110 is not limited to determining the weight of the filter to be applied to the filtering reference sample based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of the reference sample; the weight of the filter to be applied to the filtering reference sample may also be determined based on the size of the current block.
- the intra prediction unit 110 can determine a part of the samples in the reference line adjacent to the current block as a filtering reference sample based on the horizontal direction component and the vertical direction component of the prediction direction specified by the intra prediction mode.
- When the intra prediction mode of the current block is a predetermined intra prediction mode, the intra prediction unit 110 may determine at least one filtering reference sample to be filtered and a weight of a filter based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of a reference sample of the current block, and may determine the filtering reference sample related value based on the filtering reference sample and the weight of the filter.
- The intra prediction unit 110 may determine at least one intra prediction mode and, by using the determined at least one intra prediction mode, may determine the filtering reference sample and the weight of the filter based on at least one of the position of the current sample in the current block and the position of a reference sample of the current block.
- The intra prediction unit 110 may determine a filtering reference sample related value based on the filtering reference sample and the weight of the filter.
- At this time, at least one intra prediction mode may be determined for each picture unit or may be determined for each block.
- the at least one intra prediction mode may be an intra prediction mode determined based on an intra prediction mode of the current block, or may be a predetermined intra prediction mode.
- the predetermined intra prediction mode may be at least one of a horizontal mode and a vertical mode.
- The intra prediction unit 110 may obtain a prediction block of the current block including a prediction sample of the current sample based on at least one of the original reference sample related value and the filtering reference sample related value. For example, the intra prediction unit 110 may determine whether to perform intra prediction on the current sample by using both the original reference sample related value and the filtering reference sample related value, or by using only one of them, and may obtain the prediction block of the current block including the prediction sample of the current sample based on the determination.
- The intra prediction unit 110 may determine at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, and may determine a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample.
- The first weight for the filtering reference sample may be determined based on the distance between the filtering reference sample and the current sample. For example, the first weight may be determined based on the distance between the filtering reference sample and the current sample relative to the size of the current block. Here, the size of the current block may mean the height or width of the current block.
- The first weight may decrease as the distance between the filtering reference sample and the current sample increases.
- The second weight may be determined in a manner similar to the first weight. The first weight and the second weight may be normalized values.
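- A minimal sketch of such distance-dependent, normalized weights, assuming a power-of-two weight sum of 64 and a decay that halves the reference-sample weight every block_size/4 samples (both values are illustrative assumptions, not values taken from this disclosure):

```python
def position_weights(distance, block_size, total=64):
    """Return (w1, w2): w1 for the filtering reference sample,
    w2 for the intra prediction value, with w1 + w2 == total.

    w1 shrinks as the distance to the filtering reference sample grows,
    and the decay is scaled by the size (height or width) of the block.
    """
    # Halve the reference weight every block_size // 4 samples (at least 1).
    step = max(block_size // 4, 1)
    w1 = total // 2 >> min(distance // step, 5)
    w2 = total - w1                               # keep the pair normalized
    return w1, w2


for d in range(0, 8, 2):
    print(d, position_weights(d, block_size=8))
# 0 (32, 32), 2 (16, 48), 4 (8, 56), 6 (4, 60)
```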
- The filtering reference sample may include at least one of an original reference sample located in the horizontal direction of the current sample and an original reference sample located in the vertical direction of the current sample.
- the filtering reference sample may include at least one of the left and upper adjacent samples of the current block located on the line passing the current sample.
- the line may be directed in a direction opposite to the prediction direction indicated by the angular mode.
- The intra prediction unit 110 may generate the filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample. For example, only when the intra prediction mode of the current block is a predetermined intra prediction mode, the intra prediction unit 110 may generate the filtered prediction sample value of the current sample by calculating a weighted sum of the sample value of the filtering reference sample and the intra prediction value of the current sample, using the first weight and the second weight.
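- Continuing the sketch above, the weighted sum itself could be written as follows, assuming the first and second weights are normalized so that they sum to 64 (an assumption made so the division becomes a shift); `filtered_ref` and `intra_pred` are hypothetical variable names:

```python
def filtered_prediction(filtered_ref, intra_pred, w1, w2):
    """Weighted sum of the filtering reference sample value and the
    intra prediction value of the current sample; w1 + w2 is assumed
    to be 64 here, so normalization is a 6-bit shift with rounding."""
    return (w1 * filtered_ref + w2 * intra_pred + 32) >> 6


# Close to the reference line the filtering reference sample contributes more:
print(filtered_prediction(filtered_ref=110, intra_pred=90, w1=32, w2=32))  # -> 100
```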
- The intra prediction unit 110 may determine at least one second intra prediction mode, may use the at least one second intra prediction mode to determine the first weight for the filtering reference sample and the second weight for the intra prediction value of the current sample, and may generate the filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight.
- The at least one second intra prediction mode may be determined for each picture unit or may be determined for each block.
- the at least one second intra prediction mode may be determined to be at least one of an intra prediction mode of the current block, an intra prediction mode indicating a direction opposite to a prediction direction indicated by an intra prediction mode of the current block, a horizontal mode and a vertical mode.
- The intra prediction unit 110 may generate a prediction block of the current block including the filtered prediction sample value of the current sample.
- The intra prediction unit 110 may obtain the filtered prediction sample based on a first weight for the original reference sample related value, a second weight for the filtering reference sample related value, the original reference sample related value, and the filtering reference sample related value.
- The intra prediction unit 110 may determine the second weight for the filtering reference sample related value to be smaller as the distance from the filtering reference sample to the current sample increases.
- the image decoding unit 115 can obtain the residual block of the current block based on the information on the transform coefficient of the current block. That is, the image decoding unit 115 can obtain a residual sample related to the residual block of the current block by performing inverse quantization and inverse transform based on the information on the transform coefficient of the current block from the bitstream.
- the image decoding unit 115 can restore the current block based on the prediction block of the current block and the residual block of the current block.
- The image decoding unit 115 may generate a reconstructed sample in the current block by using a prediction sample value in the prediction block of the current block and a residual value of a residual sample in the residual block of the current block, and may thereby generate a reconstructed block of the current block.
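- A minimal per-sample sketch of this reconstruction step, assuming an 8-bit sample range (the bit depth is an assumption made for illustration):

```python
def reconstruct_sample(pred, residual, bit_depth=8):
    """Reconstructed sample = prediction + residual, clipped to the
    valid sample range for the assumed bit depth."""
    return min(max(pred + residual, 0), (1 << bit_depth) - 1)


print(reconstruct_sample(pred=100, residual=-7))   # -> 93
print(reconstruct_sample(pred=250, residual=20))   # -> 255 (clipped)
```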
- The video decoding apparatus 100 may obtain, from the bitstream, flag information indicating whether to adaptively perform intra prediction based on the filtering reference sample and the weight of the filter, and may determine, based on the flag information, whether to adaptively perform intra prediction based on the filtering reference sample and the weight of the filter.
- the flag information may be obtained for each block, and in particular, for each maximum encoding unit.
- The image decoding apparatus 100 may obtain flag information commonly applied to the luminance component and the chrominance component. Alternatively, the image decoding apparatus 100 may obtain flag information applied separately to the luminance component and to the chrominance component.
- the video decoding apparatus 100 can determine whether to perform the intra-prediction adaptively based on the weights of the filtering reference samples and the filter, without obtaining the flag information from the bit stream. For example, the image decoding apparatus 100 may determine that intra prediction is to be adaptively performed based on the weight of the filtering reference sample and the filter if the prediction mode of the current block is a predetermined intra prediction mode.
- The video decoding apparatus 100 may determine whether to adaptively perform intra prediction based on the filtering reference sample and the weight of the filter by using information of neighboring blocks, without obtaining the flag information from the bitstream. For example, the image decoding apparatus 100 may determine whether to adaptively perform intra prediction on the current block based on the filtering reference sample and the weight of the filter, by referring to flag information of a neighboring block that indicates whether intra prediction was adaptively performed based on the filtering reference sample and the weight of the filter.
- The video decoding apparatus 100 may determine whether to adaptively perform intra prediction based on the filtering reference sample and the weight of the filter, according to the size of the current block. For example, when the size of the current block is a predetermined first block size, the video decoding apparatus 100 may adaptively perform intra prediction based on the filtering reference sample and the weight of the filter, and when the size of the current block is a predetermined second block size, the video decoding apparatus 100 may perform conventional intra prediction without adaptively performing intra prediction based on the filtering reference sample and the weight of the filter.
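- As an illustration, such a size-based decision might look like the following; the concrete thresholds are assumptions and are not specified by this disclosure:

```python
def use_adaptive_intra(width, height, min_enabled=8, max_enabled=64):
    """Decide whether to apply intra prediction adaptively based on the
    filtering reference sample and the filter weights, purely from the
    block size. Threshold values are illustrative assumptions."""
    return (min_enabled <= width <= max_enabled and
            min_enabled <= height <= max_enabled)


print(use_adaptive_intra(4, 4))    # False -> conventional intra prediction
print(use_adaptive_intra(16, 16))  # True  -> adaptive intra prediction
```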
- The video decoding apparatus 100 may perform intra prediction by combining encoding/decoding tools similar to the adaptive intra prediction encoding/decoding tool based on the filtering reference sample and the weight of the filter. Alternatively, the video decoding apparatus 100 may assign priorities among a plurality of intra prediction encoding/decoding tools and perform intra prediction according to the priorities among the tools. That is, when a high-priority encoding/decoding tool is used, a low-priority encoding/decoding tool may not be used, and when a high-priority encoding/decoding tool is not used, a low-priority encoding/decoding tool may be used.
- FIG. 1B shows a flow diagram of a video decoding method according to various embodiments.
- In step S105, the image decoding apparatus 100 may obtain information on the transform coefficient of the current block.
- In step S110, the image decoding apparatus 100 may determine at least one filtering reference sample to be filtered and a weight of a filter based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of a reference sample of the current block, and may determine a filtering reference sample related value based on the filtering reference sample and the weight of the filter.
- In addition, the image decoding apparatus 100 may determine an original reference sample related value based on the position of the current sample in the current block and the intra prediction mode of the current block.
- The image decoding apparatus 100 may obtain a prediction block of the current block including a prediction sample of the current sample based on at least one of the original reference sample related value and the filtering reference sample related value.
- In step S125, the image decoding apparatus 100 may obtain the residual block of the current block based on the information on the transform coefficient of the current block.
- The image decoding apparatus 100 may reconstruct the current block based on the prediction block of the current block and the residual block of the current block.
- FIG. 1C shows a flow diagram of a video decoding method according to various embodiments.
- In step S155, the image decoding apparatus 100 may obtain information on the transform coefficient of the current block.
- In step S160, the image decoding apparatus 100 may generate an intra prediction value of the current sample based on the position of the current sample in the current block and the intra prediction mode of the current block.
- In step S165, the image decoding apparatus 100 may determine at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, may determine a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample, and may generate a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight.
- the image decoding apparatus 100 may generate a prediction block of a current block including a filtered prediction sample value of the current sample.
- In step S175, the image decoding apparatus 100 may obtain the residual block of the current block based on the information on the transform coefficient of the current block.
- In step S180, the image decoding apparatus 100 may reconstruct the current block based on the prediction block of the current block and the residual block of the current block.
- FIG. 1D shows a block diagram of an image decoding unit 6000 according to various embodiments.
- The image decoding unit 6000 performs operations for the image decoding unit 115 of the image decoding apparatus 100 to decode image data.
- the entropy decoding unit 6150 parses the encoded image data to be decoded and the encoding information necessary for decoding from the bitstream 6050.
- The encoded image data includes quantized transform coefficients.
- The inverse quantization unit 6200 and the inverse transform unit 6250 reconstruct residue data from the quantized transform coefficients.
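- A toy sketch of how quantized transform coefficients could be turned back into residue data; a flat quantization step and a small 1-D orthonormal DCT pair stand in here for the codec's actual inverse quantization and inverse transform, which this disclosure does not restate:

```python
import math


def dct_matrix(n):
    """Orthonormal DCT-II basis; its transpose is the inverse transform."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m


def reconstruct_residue(quantized, qstep=10):
    """Inverse quantization (flat step, an assumption) followed by the
    inverse transform, mirroring the decoder-side residue reconstruction."""
    coeffs = [q * qstep for q in quantized]            # inverse quantization
    basis = dct_matrix(len(coeffs))
    return [sum(basis[k][i] * coeffs[k] for k in range(len(coeffs)))
            for i in range(len(coeffs))]               # inverse transform


print([round(v, 1) for v in reconstruct_residue([4, -1, 0, 0])])
# -> [13.5, 17.3, 22.7, 26.5]
```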
- the intra prediction unit 6400 performs intra prediction on a block-by-block basis.
- the intra prediction unit 6400 of FIG. 1D may correspond to the intra prediction unit 110 of FIG. 1A.
- the inter-prediction unit 6350 performs inter-prediction using the reference image obtained in the reconstruction picture buffer 6300 for each block.
- Spatial-domain data for a block of the current image is reconstructed by adding the residue data to the prediction data of each block generated by the intra prediction unit 6400 or the inter prediction unit 6350.
- The deblocking unit 6450 and the SAO performing unit 6500 perform in-loop filtering on the data of the reconstructed spatial domain and output a filtered reconstructed image 6600.
- restored images stored in the restored picture buffer 6300 can be output as a reference image.
- the stepwise operations of the image decoding unit 6000 may be performed on a block-by-block basis.
- FIG. 2A shows a block diagram of an image encoding apparatus according to various embodiments.
- the image encoding apparatus 150 may include an intra prediction unit 155 and an image encoding unit 160.
- the intra prediction unit 155 and the image encoding unit 160 may include at least one processor.
- the intra prediction unit 155 and the image encoding unit 160 may include a memory for storing instructions to be executed by at least one processor.
- The image encoding unit 160 may be implemented as hardware separate from the intra prediction unit 155, or may include the intra prediction unit 155.
- The intra prediction unit 155 may determine at least one filtering reference sample to be filtered and a weight of a filter based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of a reference sample of the current block, and may determine a filtering reference sample related value based on the filtering reference sample and the weight of the filter.
- The intra prediction unit 155 may determine an original reference sample related value based on the position of the current sample in the current block and the intra prediction mode of the current block.
- The intra prediction unit 155 may generate a prediction block of the current block including a prediction sample of the current sample based on at least one of the original reference sample related value and the filtering reference sample related value.
- the intra prediction unit 155 may generate an intra prediction value of the current sample based on the position of the current sample in the current block and the intra prediction mode of the current block.
- The intra prediction unit 155 may determine at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, and may determine a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample.
- The intra prediction unit 155 may generate a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample.
- the intra prediction unit 155 may generate a prediction block of the current block including the filtered prediction sample value of the current sample.
- The image encoding unit 160 may encode information on the transform coefficients of the current block based on the prediction block of the current block. That is, the image encoding unit 160 may generate a residual block of the current block based on the original block of the current block and the prediction block of the current block, and may encode the information on the transform coefficients of the current block by transforming and quantizing the residual block of the current block.
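- A minimal encoder-side counterpart of the residue handling described above; the flat quantization step is an assumption, and the transform stage between the two steps is omitted for brevity:

```python
def encode_residual(original, prediction, qstep=10):
    """Residual = original block minus prediction block, followed by a
    placeholder flat quantization; the real transform stage between the
    two steps is omitted here."""
    residual = [o - p for o, p in zip(original, prediction)]
    quantized = [int(round(r / qstep)) for r in residual]  # flat quantization (assumption)
    return residual, quantized


orig = [105, 98, 101, 110]
pred = [100, 100, 100, 100]
print(encode_residual(orig, pred))  # ([5, -2, 1, 10], [0, 0, 0, 1])
```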
- the image encoding unit 160 may encode information on the prediction mode of the current block and information on the intra prediction mode of the current block.
- the image encoding unit 160 may generate a bitstream including information on the transform coefficients of the current block and output the bitstream.
- FIG. 2B shows a flowchart of the image encoding method according to various embodiments.
- In step S205, the image encoding apparatus 150 may determine at least one filtering reference sample to be filtered and a weight of a filter based on at least one of the intra prediction mode of the current block, the position of the current sample in the current block, and the position of a reference sample of the current block, and may determine a filtering reference sample related value based on the filtering reference sample and the weight of the filter.
- In addition, the image encoding apparatus 150 may determine an original reference sample related value based on the position of the current sample in the current block and the intra prediction mode of the current block.
- The image encoding apparatus 150 may generate a prediction block of the current block including a prediction sample of the current sample based on at least one of the original reference sample related value and the filtering reference sample related value.
- In step S220, the image encoding apparatus 150 may encode information on the transform coefficient of the current block based on the prediction block of the current block.
- FIG. 2C shows a flow chart of the image encoding method according to various embodiments.
- the image encoding apparatus 150 may generate the intra prediction value of the current sample based on the current sample position in the current block and the intra prediction mode of the current block.
- In step S255, the image encoding apparatus 150 may determine at least one filtering reference sample to be filtered and a sample value of the filtering reference sample based on the position of the current sample in the current block, may determine a first weight for the filtering reference sample and a second weight for the intra prediction value of the current sample, and may generate a filtered prediction sample value of the current sample based on the sample value of the filtering reference sample, the intra prediction value of the current sample, the first weight, and the second weight.
- the image encoding apparatus 150 may generate a prediction block of the current block including the filtered prediction sample value of the current sample.
- In step S265, the image encoding apparatus 150 may encode information on the transform coefficients of the current block based on the prediction block of the current block.
- FIG. 2D shows a block diagram of an image encoding unit according to various embodiments.
- The image encoding unit 7000 performs operations for the image encoding unit 160 of the image encoding apparatus 150 to encode image data.
- The intra prediction unit 7200 performs intra prediction on the current image 7050 on a block-by-block basis.
- The inter prediction unit 7150 performs inter prediction on a block-by-block basis by using the current image 7050 and a reference image obtained from the reconstructed picture buffer 7100.
- The transform unit 7250 and the quantization unit 7300 generate residue data by subtracting, from the data of a block to be encoded in the current image 7050, the prediction data of each block output from the intra prediction unit 7200 or the inter prediction unit 7150, and perform transformation and quantization on the residue data to output quantized transform coefficients on a block-by-block basis.
- the intra-prediction unit 7200 of FIG. 2D corresponds to the intra-prediction unit 155 of FIG. 2A .
- the inverse quantization unit 7450 and the inverse transformation unit 7500 can perform inverse quantization and inverse transformation on the quantized transform coefficients to restore the residue data in the spatial domain.
- The residue data of the reconstructed spatial domain is reconstructed into spatial-domain data for the block of the current image 7050 by adding the prediction data of each block output from the intra prediction unit 7200 or the inter prediction unit 7150.
- the deblocking unit 7550 and the SAO performing unit perform in-loop filtering on the data of the reconstructed spatial region to generate a filtered reconstructed image.
- the generated restored image is stored in the restored picture buffer 7100.
- the reconstructed images stored in the reconstructed picture buffer 7100 can be used as reference images for inter prediction of other images.
- the entropy encoding unit 7350 entropy-codes the quantized transform coefficients, and the entropy-encoded coefficients can be output as a bitstream 7400.
- the stepwise operations of the image encoding unit 7000 according to various embodiments may be performed for each block.
- one picture may be divided into one or more slices.
- One slice may be a sequence of one or more Coding Tree Units (CTUs).
- the maximum coding block means an NxN block including NxN samples (N is an integer). Each color component may be divided into one or more maximum encoding blocks.
- when a picture has three sample arrays, the maximum coding unit is a unit including the maximum coding block of luma samples, the two corresponding maximum coding blocks of chroma samples, and syntax structures used for encoding the luma samples and the chroma samples.
- when a picture is a monochrome picture, the maximum coding unit is a unit including the maximum coding block of monochrome samples and syntax structures used for encoding the monochrome samples.
- when a picture is encoded as color planes separated by color component, the maximum coding unit is a unit including the corresponding picture and syntax structures used for encoding the samples of the picture.
- One maximum coding block may be divided into MxN coding blocks (M, N is an integer) including MxN samples.
- when a picture has three sample arrays, the coding unit is a unit including the coding block of luma samples, the two corresponding coding blocks of chroma samples, and syntax structures used for encoding the luma samples and the chroma samples.
- when a picture is a monochrome picture, the coding unit is a unit including the coding block of monochrome samples and syntax structures used for encoding the monochrome samples.
- when a picture is encoded as color planes separated by color component, the coding unit is a unit including the corresponding picture and syntax structures used for encoding the samples of the picture.
- the maximum encoding block and the maximum encoding unit are concepts that are distinguished from each other, and the encoding block and the encoding unit are conceptually distinguished from each other. That is, the (maximum) coding unit means a data structure including a (maximum) coding block including a corresponding sample and a corresponding syntax structure.
- a (maximum) encoding unit or a (maximum) encoding block refers to a predetermined size block including a predetermined number of samples.
- the image can be divided into a maximum coding unit (CTU).
- the size of the maximum encoding unit may be determined based on information obtained from the bitstream.
- the shape of the maximum coding unit may be a square of the same size.
- the present invention is not limited thereto.
- information on the maximum size of a luma coding block can be obtained from the bitstream.
- the maximum size of a luma encoding block indicated by information on the maximum size of a luma encoding block may be one of 16x16, 32x32, 64x64, 128x128, and 256x256.
- information on the maximum size of a luma coding block that can be binary split, and information on the luma block size difference, can be obtained from the bitstream.
- the information on the luma block size difference may indicate the size difference between the luma maximum coding unit and the maximum luma coding block that can be binary split. Therefore, by combining the information on the maximum size of the binary-splittable luma coding block obtained from the bitstream with the information on the luma block size difference, the size of the luma maximum coding unit can be determined. Using the size of the luma maximum coding unit, the size of the chroma maximum coding unit can also be determined.
- for example, the size of a chroma block may be half the size of the corresponding luma block, and likewise the size of the chroma maximum coding unit may be half the size of the luma maximum coding unit (a minimal sketch of this derivation follows).
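- Below is a minimal sketch of how the CTU size might be derived from the two signalled values described above, assuming the sizes are handled as log2 values and a chroma size that is half the luma size. The function names and the log2 convention are illustrative assumptions, not the bitstream syntax of this disclosure.

```cpp
#include <cstdio>

// Luma maximum coding unit (CTU) size derived from the signalled maximum size
// of the binary-splittable luma coding block and the luma block size
// difference, both assumed here to be carried as log2 values.
int lumaCtuSize(int log2MaxBinarySplitLumaSize, int log2LumaSizeDifference) {
    return 1 << (log2MaxBinarySplitLumaSize + log2LumaSizeDifference);
}

int chromaCtuSize(int lumaCtu) {
    return lumaCtu / 2;   // chroma CTU assumed to be half the luma CTU size
}

int main() {
    int luma = lumaCtuSize(6, 1);            // e.g. 64 combined with a difference of 1 -> 128
    std::printf("luma CTU %d, chroma CTU %d\n", luma, chromaCtuSize(luma));
}
```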
- the maximum size of the luma coding block capable of binary division can be variably determined.
- the maximum size of a luma coding block capable of ternary splitting can be fixed.
- the maximum size of a luma coding block capable of ternary partitioning on an I slice is 32x32
- the maximum size of a luma coding block capable of ternary partitioning on a P slice or B slice can be 64x64.
- the maximum encoding unit may be hierarchically divided in units of encoding based on division mode information obtained from the bitstream.
- as the split mode information, at least one of information indicating whether a quad split is performed, information indicating whether a multi split is performed, split direction information, and split type information may be obtained from the bitstream.
- information indicating whether a quad split is present may indicate whether the current encoding unit is quad-split (QUAD_SPLIT) or not quad-split.
- the information indicating whether the current encoding unit is multi-divided may indicate whether the current encoding unit is no longer divided (NO_SPLIT) or binary / ternary divided.
- the split direction information indicates whether the current coding unit is split in the horizontal direction or in the vertical direction.
- the split type information indicates whether the current coding unit is split by a binary split or by a ternary split.
- the division mode of the current encoding unit can be determined according to the division direction information and the division type information.
- the split mode when the current coding unit is binary split in the horizontal direction is binary horizontal split (SPLIT_BT_HOR), and when it is ternary split in the horizontal direction it is ternary horizontal split (SPLIT_TT_HOR).
- the split mode when the current coding unit is binary split in the vertical direction is binary vertical split (SPLIT_BT_VER), and when it is ternary split in the vertical direction it is ternary vertical split (SPLIT_TT_VER); a small sketch of this mapping follows.
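- The mapping from the split decision flags to the split modes named above can be sketched as follows; the enum and function names are illustrative, only the mode names come from the description.

```cpp
// Mapping of the split decision flags to the split modes named above.
enum class SplitMode { NO_SPLIT, QUAD_SPLIT, SPLIT_BT_HOR, SPLIT_TT_HOR, SPLIT_BT_VER, SPLIT_TT_VER };

SplitMode splitMode(bool isSplit, bool isQuad, bool isVertical, bool isTernary) {
    if (!isSplit) return SplitMode::NO_SPLIT;
    if (isQuad)   return SplitMode::QUAD_SPLIT;
    if (isVertical) return isTernary ? SplitMode::SPLIT_TT_VER : SplitMode::SPLIT_BT_VER;
    else            return isTernary ? SplitMode::SPLIT_TT_HOR : SplitMode::SPLIT_BT_HOR;
}
```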
- the image decoding apparatus 100 can obtain the split mode information from the bitstream in the form of one bin string.
- the form of the bin string received by the image decoding apparatus 100 may include a fixed length binary code, a unary code, a truncated unary code, a predetermined binary code, and the like.
- a bin string is a binary sequence of information. The bin string may consist of at least one bit.
- the image decoding apparatus 100 can obtain the split mode information corresponding to the bin string based on the split rule.
- the image decoding apparatus 100 can determine, based on one bin string, whether to quad split the coding unit, whether to split it at all, the split direction, and the split type; one plausible reading of such a bin string is sketched below.
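- The sketch below shows one plausible way a single bin string could carry the quad flag, the multi-split flag, the split direction, and the split type. The bin order and meanings here are assumptions for illustration, not the binarization defined by this disclosure.

```cpp
#include <cstddef>
#include <vector>

// Split decision assumed to be carried by a single bin string.
struct SplitDecision { bool quad = false, split = false, vertical = false, ternary = false; };

// Reads the quad flag, the multi-split flag, the direction and the split type
// from one bin string. The order and meaning of the bins are assumptions.
SplitDecision parseSplitBins(const std::vector<int>& bins) {
    std::size_t i = 0;
    auto next = [&]() { return i < bins.size() ? bins[i++] != 0 : false; };
    SplitDecision d;
    d.quad = next();                          // bin 0: quad split or not
    if (d.quad) { d.split = true; return d; }
    d.split = next();                         // bin 1: split further or no split
    if (!d.split) return d;
    d.vertical = next();                      // bin 2: vertical or horizontal direction
    d.ternary  = next();                      // bin 3: ternary or binary split type
    return d;
}
```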
- the encoding unit may be less than or equal to the maximum encoding unit.
- the maximum encoding unit is also one of the encoding units since it is the encoding unit having the maximum size.
- the encoding unit determined in the maximum encoding unit has the same size as the maximum encoding unit. If the division type mode information for the maximum encoding unit indicates division, the maximum encoding unit may be divided into encoding units. In addition, if division type mode information for an encoding unit indicates division, encoding units can be divided into smaller-sized encoding units.
- however, the division of the image is not limited thereto, and the maximum coding unit and the coding unit may not be distinguished. The division of coding units will be described in more detail with reference to FIG. 3 and the subsequent figures.
- one or more prediction blocks for prediction from the encoding unit can be determined.
- the prediction block may be equal to or smaller than the encoding unit.
- one or more transform blocks for transformation may be determined from the coding unit.
- the transform block may be equal to or smaller than the coding unit.
- the shapes and sizes of the transform block and the prediction block may not be related to each other.
- prediction can be performed using the coding unit itself as a prediction block.
- transformation can be performed using the coding unit itself as a transform block.
- the current block and the neighboring blocks of the present disclosure may represent one of a maximum encoding unit, an encoding unit, a prediction block, and a transform block.
- the current block or the current encoding unit is a block in which decoding or encoding is currently proceeding, or a block in which the current segmentation is proceeding.
- the neighboring block may be a block restored before the current block.
- the neighboring blocks may be spatially or temporally contiguous from the current block.
- the neighboring block may be located at one of the left lower side, the left side, the upper left side, the upper side, the upper right side, the right side, and the lower right side of the current block.
- FIG. 3 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a current encoding unit according to an embodiment.
- the block shape may include 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nx16N, 8NxN, or Nx8N.
- N may be a positive integer.
- the block type information is information indicating at least one of the shape, direction, ratio of width and height, or size of a coding unit.
- the shape of the coding unit may include a square and a non-square. If the width and height of the coding unit are the same (i.e., the block type of the coding unit is 4Nx4N), the image decoding apparatus 100 can determine the block type information of the coding unit as a square.
- if the width and height of the coding unit are different (i.e., the block type of the coding unit is 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nx16N, 8NxN, or Nx8N), the image decoding apparatus 100 can determine the block type information of the coding unit as a non-square.
- the image decoding apparatus 100 may determine the ratio of the width and height in the block type information of the coding unit as 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 1:32, or 32:1.
- in addition, the image decoding apparatus 100 can determine whether the coding unit is in the horizontal direction or in the vertical direction, and can determine the size of the coding unit based on at least one of the length of the width, the length of the height, or the area of the coding unit (a small sketch follows).
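- A minimal sketch of deriving such block type information from the width and height of a coding unit is shown below; the struct and function names are illustrative assumptions.

```cpp
// Derivation of block type information (square / non-square, direction, and the
// width:height ratio) from the width and height of a coding unit.
struct BlockTypeInfo {
    bool isSquare;        // width == height (block type 4Nx4N)
    bool isHorizontal;    // width > height (otherwise vertical or square)
    int  ratioW, ratioH;  // reduced width:height ratio, e.g. 1:4, 4:1, 1:32, 32:1
};

BlockTypeInfo blockTypeInfo(int width, int height) {
    int a = width, b = height;                      // greatest common divisor (Euclid)
    while (b != 0) { int t = a % b; a = b; b = t; }
    return { width == height, width > height, width / a, height / a };
}
```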
- the image decoding apparatus 100 may determine the shape of the coding unit using the block type information, and may determine into which form the coding unit is split using the split mode information. That is, the splitting method indicated by the split mode information can be determined according to which block shape the block type information used by the image decoding apparatus 100 represents.
- the image decoding apparatus 100 can obtain the split mode information from the bitstream. However, the present invention is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 150 can determine pre-agreed split mode information based on the block type information.
- the image decoding apparatus 100 can determine the pre-agreed split mode information for the maximum coding unit or the minimum coding unit. For example, the image decoding apparatus 100 may determine the split mode information for the maximum coding unit to be a quad split. Also, the image decoding apparatus 100 can determine the split mode information for the minimum coding unit to be "not split". Specifically, the image decoding apparatus 100 can determine the size of the maximum coding unit to be 256x256.
- the image decoding apparatus 100 can determine the pre-agreed split mode information to be a quad split.
- a quad split is a split mode that bisects both the width and the height of the coding unit.
- the image decoding apparatus 100 can obtain a 128x128 encoding unit from the 256x256 maximum encoding unit based on the division type mode information. Also, the image decoding apparatus 100 can determine the size of the minimum encoding unit to be 4x4.
- the image decoding apparatus 100 can obtain the division type mode information indicating " not divided " for the minimum encoding unit.
- the image decoding apparatus 100 may use block type information indicating that the current coding unit is a square shape. For example, the image decoding apparatus 100 can determine, according to the split mode information, whether not to split the square coding unit, to split it vertically, to split it horizontally, or to split it into four coding units.
- referring to FIG. 3, when the split mode information indicates that the current coding unit 300 is not split, the decoding unit 120 may determine the coding unit 310a having the same size as the current coding unit 300 without splitting it, or may determine split coding units 310b, 310c, 310d, 310e, 310f, etc. based on split mode information indicating a predetermined splitting method.
- referring to FIG. 3, the image decoding apparatus 100 may determine two coding units 310b obtained by splitting the current coding unit 300 in the vertical direction, based on split mode information indicating a vertical split.
- the image decoding apparatus 100 can determine two encoding units 310c in which the current encoding unit 300 is divided in the horizontal direction based on the division type mode information indicating that the image is divided in the horizontal direction.
- the image decoding apparatus 100 can determine four coding units 310d in which the current coding unit 300 is divided into the vertical direction and the horizontal direction based on the division type mode information indicating that the image is divided into the vertical direction and the horizontal direction.
- the image decoding apparatus 100 may determine three coding units 310e obtained by ternary splitting the current coding unit 300 in the vertical direction, based on split mode information indicating a vertical ternary split.
- the image decoding apparatus 100 can determine three coding units 310f obtained by ternary splitting the current coding unit 300 in the horizontal direction, based on split mode information indicating a horizontal ternary split.
- the splitting forms into which a square coding unit can be split should not be limited to the above-described forms, and should be construed as including various forms that the split mode information can indicate.
- the predetermined divisional form in which the square encoding unit is divided will be described in detail by way of various embodiments below.
- FIG. 4 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.
- the image decoding apparatus 100 may use block type information indicating that the current encoding unit is a non-square format.
- the image decoding apparatus 100 may determine, according to the split mode information, not to split the non-square current coding unit or to split it by a predetermined method. Referring to FIG. 4, when the block type information of the current coding unit 400 or 450 indicates a non-square shape, the image decoding apparatus 100 may determine a coding unit having the same size as the current coding unit 400 or 450 without splitting it, or may determine split coding units (for example, 420a, 420b, 430a, 430b, 430c, 470a, 470b, 480a, 480b, 480c) based on split mode information indicating a predetermined splitting method.
- the predetermined splitting method by which a non-square coding unit is split will be described in detail through various embodiments below.
- the image decoding apparatus 100 may determine the type in which the encoding unit is divided using the division type mode information.
- the split mode information may indicate the number of coding units into which the coding unit is split. Referring to FIG. 4, when the split mode information indicates that the current coding unit 400 or 450 is split into two coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 based on the split mode information to determine the two coding units 420a and 420b, or 470a and 470b, included in the current coding unit.
- when the image decoding apparatus 100 splits the non-square current coding unit 400 or 450 based on the split mode information, it can split the current coding unit in consideration of the position of the long side of the non-square current coding unit 400 or 450.
- for example, the image decoding apparatus 100 may split the current coding unit 400 or 450 in the direction that divides its long side, in consideration of the shape of the current coding unit 400 or 450, to determine a plurality of coding units.
- when the split mode information indicates that a coding unit is split into an odd number of blocks (ternary split), the image decoding apparatus 100 can determine an odd number of coding units included in the current coding unit 400 or 450. For example, when the split mode information indicates that the current coding unit 400 or 450 is split into three coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 into the three coding units 430a, 430b, and 430c, or 480a, 480b, and 480c.
- the ratio of the width and height of the current encoding unit 400 or 450 may be 4: 1 or 1: 4. If the ratio of width to height is 4: 1, the length of the width is longer than the length of the height, so the block type information may be horizontal. If the ratio of width to height is 1: 4, the block type information may be vertical because the length of the width is shorter than the length of the height.
- the image decoding apparatus 100 may determine to divide the current encoding unit into odd number blocks based on the division type mode information. The image decoding apparatus 100 can determine the division direction of the current encoding unit 400 or 450 based on the block type information of the current encoding unit 400 or 450.
- when the current coding unit 400 is in the vertical direction, the image decoding apparatus 100 can determine the coding units 430a, 430b, and 430c by splitting the current coding unit 400 in the horizontal direction. Also, when the current coding unit 450 is in the horizontal direction, the image decoding apparatus 100 can determine the coding units 480a, 480b, and 480c by splitting the current coding unit 450 in the vertical direction.
- the image decoding apparatus 100 may determine an odd number of encoding units included in the current encoding unit 400 or 450, and the sizes of the determined encoding units may not be the same. For example, the size of a predetermined encoding unit 430b or 480b among the determined odd number of encoding units 430a, 430b, 430c, 480a, 480b, and 480c is different from the size of the other encoding units 430a, 430c, 480a, and 480c .
- that is, the coding units that can be determined by splitting the current coding unit 400 or 450 may have a plurality of sizes, and in some cases the odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c may each have different sizes (a sketch of such a ternary split follows this passage).
- when the split mode information indicates that a coding unit is split into an odd number of blocks, the image decoding apparatus 100 may determine the odd number of coding units included in the current coding unit 400 or 450, and furthermore may place a predetermined restriction on at least one of the odd number of coding units generated by the split.
- referring to FIG. 4, the image decoding apparatus 100 may make the decoding process for the coding unit 430b or 480b located in the middle among the coding units 430a, 430b, 430c, 480a, 480b, and 480c, generated by splitting the current coding unit 400 or 450, different from that for the other coding units 430a, 430c, 480a, and 480c.
- for example, unlike the other coding units 430a, 430c, 480a, and 480c, the coding unit 430b or 480b positioned at the center may be restricted so as not to be further split, or may be limited to being split only a predetermined number of times.
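- A minimal sketch of such a ternary split, in which the middle unit differs in size from the outer units, is shown below. The 1:2:1 proportion is a common convention assumed here for illustration; the description above only requires that the middle coding unit may have a different size.

```cpp
#include <array>

// Ternary split of a coding unit into three parts where the middle part has a
// size different from the outer parts (assumed 1:2:1 proportion).
struct Unit { int x, y, w, h; };

std::array<Unit, 3> ternarySplit(const Unit& cu, bool vertical) {
    if (vertical) {   // split the width into w/4, w/2, w/4
        int q = cu.w / 4;
        return {{ { cu.x,            cu.y, q,            cu.h },
                  { cu.x + q,        cu.y, cu.w - 2 * q, cu.h },
                  { cu.x + cu.w - q, cu.y, q,            cu.h } }};
    } else {          // split the height into h/4, h/2, h/4
        int q = cu.h / 4;
        return {{ { cu.x, cu.y,            cu.w, q            },
                  { cu.x, cu.y + q,        cu.w, cu.h - 2 * q },
                  { cu.x, cu.y + cu.h - q, cu.w, q            } }};
    }
}
```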
- FIG. 5 illustrates a process in which the image decoding apparatus 100 divides an encoding unit based on at least one of block type information and split mode mode information according to an embodiment.
- the image decoding apparatus 100 may determine to divide or not divide the first encoding unit 500 of a square shape into encoding units based on at least one of the block type information and the division mode mode information .
- for example, referring to FIG. 5, the image decoding apparatus 100 may split the first coding unit 500 in the horizontal direction to determine the second coding unit 510.
- the first encoding unit, the second encoding unit, and the third encoding unit used according to an embodiment are terms used to understand the relation before and after the division between encoding units.
- for example, if the first coding unit is split, the second coding unit can be determined, and if the second coding unit is split, the third coding unit can be determined.
- the relationship between the first coding unit, the second coding unit and the third coding unit used can be understood to be in accordance with the above-mentioned characteristic.
- the image decoding apparatus 100 may determine whether or not to split the determined second coding unit 510 into coding units based on the split mode information. Referring to FIG. 5, the image decoding apparatus 100 may split the non-square second coding unit 510, determined by splitting the first coding unit 500, into at least one third coding unit (520a, 520b, 520c, 520d, etc.) based on the split mode information, or may not split the second coding unit 510.
- the image decoding apparatus 100 can obtain the split mode information and split the first coding unit 500 based on it to obtain a plurality of second coding units (for example, 510) of various shapes, and the second coding unit 510 may be split in the same manner in which the first coding unit 500 was split, based on the split mode information. When the first coding unit 500 is split into the second coding units 510 based on the split mode information for the first coding unit 500, the second coding unit 510 may also be split into the third coding units (for example, 520a, 520b, 520c, 520d, etc.) based on the split mode information for the second coding unit 510.
- the encoding unit may be recursively divided based on the division mode information associated with each encoding unit. Therefore, a square encoding unit may be determined in a non-square encoding unit, and a non-square encoding unit may be determined by dividing the square encoding unit recursively.
- referring to FIG. 5, a predetermined coding unit among the odd number of third coding units 520b, 520c, and 520d determined by splitting the non-square second coding unit 510 (for example, the coding unit located in the middle or a square-shaped coding unit) may be recursively split.
- the square-shaped third coding unit 520b which is one of the odd-numbered third coding units 520b, 520c, and 520d, may be divided in the horizontal direction and divided into a plurality of fourth coding units.
- the non-square fourth encoding unit 530b or 530d which is one of the plurality of fourth encoding units 530a, 530b, 530c, and 530d, may be further divided into a plurality of encoding units.
- the fourth encoding unit 530b or 530d in the non-square form may be divided again into odd number of encoding units.
- a method which can be used for recursive division of an encoding unit will be described later in various embodiments.
- the image decoding apparatus 100 may divide each of the third encoding units 520a, 520b, 520c, and 520d into encoding units based on the division type mode information. Also, the image decoding apparatus 100 may determine that the second encoding unit 510 is not divided based on the division type mode information. The image decoding apparatus 100 may divide the non-square second encoding unit 510 into odd third encoding units 520b, 520c and 520d according to an embodiment. The image decoding apparatus 100 may set a predetermined restriction on a predetermined third encoding unit among odd numbered third encoding units 520b, 520c, and 520d.
- the image decoding apparatus 100 may restrict the coding unit 520c, located in the middle among the odd number of third coding units 520b, 520c, and 520d, so that it is no longer split or is split only a set number of times.
- referring to FIG. 5, the image decoding apparatus 100 may restrict the coding unit 520c, located in the middle among the odd number of third coding units 520b, 520c, and 520d included in the non-square second coding unit 510, so that it is not further split, is split only into a predetermined form (for example, split into four coding units only, or split into a form corresponding to the form in which the second coding unit 510 was split), or is split only a predetermined number of times (for example, split only n times, n > 0).
- however, the above restrictions on the coding unit 520c positioned at the center are merely examples and should not be construed as being limited to the above embodiments; they should be construed as including various restrictions under which the coding unit 520c positioned at the center can be decoded differently from the other coding units 520b and 520d.
- the image decoding apparatus 100 may acquire division mode information used for dividing a current encoding unit at a predetermined position in a current encoding unit.
- FIG. 6 illustrates a method by which the image decoding apparatus 100 determines a predetermined encoding unit among odd number of encoding units according to an embodiment.
- referring to FIG. 6, the split mode information of the current coding unit 600 or 650 can be obtained from a sample at a predetermined position (for example, the sample 640 or 690 located in the middle) among the plurality of samples included in the current coding unit 600 or 650.
- however, the predetermined position in the current coding unit 600 from which at least one piece of the split mode information can be obtained should not be limited to the middle position shown in FIG. 6; it should be construed as including various positions within the current coding unit 600 (for example, top, bottom, left, right, top left, bottom left, top right, bottom right, etc.).
- the image decoding apparatus 100 may obtain the split mode information from the predetermined position and determine whether or not to split the current coding unit into coding units of various shapes and sizes.
- the image decoding apparatus 100 may select one of the encoding units.
- the method for selecting one of the plurality of encoding units may be various, and description of these methods will be described later in various embodiments.
- the image decoding apparatus 100 may divide the current encoding unit into a plurality of encoding units and determine a predetermined encoding unit.
- the image decoding apparatus 100 may use information indicating the positions of the odd number of coding units in order to determine the coding unit located in the middle among them. Referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 or the current coding unit 650 to determine the odd number of coding units 620a, 620b, and 620c, or 660a, 660b, and 660c, and may use the information on the positions of the odd number of coding units 620a, 620b, and 620c, or 660a, 660b, and 660c, to determine the middle coding unit 620b or 660b.
- the image decoding apparatus 100 may determine the coding unit 620b located in the middle based on information indicating the positions of predetermined samples included in the coding units 620a, 620b, and 620c.
- specifically, the image decoding apparatus 100 may determine the coding unit 620b located in the middle based on information indicating the positions of the upper left samples 630a, 630b, and 630c of the coding units 620a, 620b, and 620c.
- the information indicating the positions of the upper left samples 630a, 630b, and 630c respectively included in the coding units 620a, 620b, and 620c may include information on the positions or coordinates of the coding units 620a, 620b, and 620c within the picture.
- the information indicating the positions of the upper left samples 630a, 630b, and 630c respectively included in the coding units 620a, 620b, and 620c may also include information indicating the widths or heights of the coding units 620a, 620b, and 620c included in the current coding unit 600, and these widths or heights may correspond to information indicating the differences between the coordinates of the coding units 620a, 620b, and 620c within the picture.
- that is, the image decoding apparatus 100 may determine the coding unit 620b located in the middle by directly using the information on the positions or coordinates of the coding units 620a, 620b, and 620c within the picture, or by using the information on the widths or heights of the coding units corresponding to the differences between those coordinates.
- the information indicating the position of the upper left sample 630a of the upper coding unit 620a may indicate the coordinates (xa, ya), the information indicating the position of the upper left sample 630b of the middle coding unit 620b may indicate the coordinates (xb, yb), and the information indicating the position of the upper left sample 630c of the lower coding unit 620c may indicate the coordinates (xc, yc).
- the video decoding apparatus 100 can determine the center encoding unit 620b using the coordinates of the upper left samples 630a, 630b, and 630c included in the encoding units 620a, 620b, and 620c.
- for example, the coding unit 620b, which includes the sample 630b whose coordinates are (xb, yb) and which is located in the middle, can be determined as the coding unit located in the middle among the coding units 620a, 620b, and 620c determined by splitting the current coding unit 600.
- however, the coordinates indicating the positions of the upper left samples 630a, 630b, and 630c may be coordinates indicating absolute positions within the picture, and furthermore the (dxb, dyb) coordinates, which indicate the relative position of the upper left sample 630b of the middle coding unit 620b with respect to the position of the upper left sample 630a of the upper coding unit 620a, and the (dxc, dyc) coordinates, which indicate the relative position of the upper left sample 630c of the lower coding unit 620c, may also be used.
- the method of determining a coding unit at a predetermined position by using the coordinates of a sample as information indicating the position of a sample included in the coding unit should not be limited to the method described above, and should be interpreted as including various arithmetic methods capable of using the coordinates of the sample.
- the image decoding apparatus 100 may split the current coding unit 600 into the plurality of coding units 620a, 620b, and 620c and select one of them according to a predetermined criterion. For example, the image decoding apparatus 100 can select the coding unit 620b whose size differs from that of the others among the coding units 620a, 620b, and 620c.
- the image decoding apparatus 100 may determine the width or height of each of the coding units 620a, 620b, and 620c by using the (xa, ya) coordinates indicating the position of the upper left sample 630a of the upper coding unit 620a, the (xb, yb) coordinates indicating the position of the upper left sample 630b of the middle coding unit 620b, and the (xc, yc) coordinates indicating the position of the upper left sample 630c of the lower coding unit 620c.
- the image decoding apparatus 100 can determine the respective sizes of the coding units 620a, 620b, and 620c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating their positions.
- the image decoding apparatus 100 may determine the width of the upper encoding unit 620a as the width of the current encoding unit 600.
- the image decoding apparatus 100 can determine the height of the upper encoding unit 620a as yb-ya.
- the image decoding apparatus 100 may determine the width of the middle encoding unit 620b as the width of the current encoding unit 600 according to an embodiment.
- the image decoding apparatus 100 can determine the height of the middle encoding unit 620b as yc-yb.
- the image decoding apparatus 100 may determine the width or height of the lower coding unit by using the width or height of the current coding unit and the width and height of the upper coding unit 620a and the middle coding unit 620b .
- the image decoding apparatus 100 may determine the coding unit whose size differs from that of the other coding units based on the determined widths and heights of the coding units 620a, 620b, and 620c.
- referring to FIG. 6, the image decoding apparatus 100 may determine the coding unit 620b, whose size differs from that of the upper coding unit 620a and the lower coding unit 620c, as the coding unit at the predetermined position.
- however, the above process in which the image decoding apparatus 100 determines the coding unit whose size differs from that of the other coding units is merely one embodiment of determining the coding unit at a predetermined position using the sizes of coding units determined based on sample coordinates.
- accordingly, various processes of determining the coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used (a minimal sketch follows).
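- The sketch below derives the heights of the three vertically stacked coding units from the y-coordinates of their upper left samples, as described above, and picks the unit whose height differs from the others. The selection criterion and the function name are illustrative assumptions.

```cpp
#include <array>

// Heights of three vertically stacked coding units derived from the
// y-coordinates (ya, yb, yc) of their upper left samples, followed by
// selection of the unit whose size differs from the other two.
int indexOfDifferentHeight(int ya, int yb, int yc, int currentHeight) {
    std::array<int, 3> h = { yb - ya,                        // upper unit (e.g. 620a)
                             yc - yb,                        // middle unit (e.g. 620b)
                             currentHeight - (yc - ya) };    // lower unit (e.g. 620c)
    for (int i = 0; i < 3; ++i) {
        int j = (i + 1) % 3, k = (i + 2) % 3;
        if (h[i] != h[j] && h[i] != h[k]) return i;          // 0: upper, 1: middle, 2: lower
    }
    return 1;   // all heights equal: fall back to the middle unit
}
```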
- likewise, the image decoding apparatus 100 may determine the width or height of each of the coding units 660a, 660b, and 660c by using the (xd, yd) coordinates indicating the position of the upper left sample 670a of the left coding unit 660a, the (xe, ye) coordinates indicating the position of the upper left sample 670b of the middle coding unit 660b, and the (xf, yf) coordinates indicating the position of the upper left sample 670c of the right coding unit 660c.
- the image decoding apparatus 100 can determine the respective sizes of the coding units 660a, 660b, and 660c using the coordinates (xd, yd), (xe, ye), and (xf, yf) indicating their positions.
- the image decoding apparatus 100 may determine the width of the left coding unit 660a as xe-xd and the height of the left coding unit 660a as the height of the current coding unit 650. The image decoding apparatus 100 may determine the width of the middle coding unit 660b as xf-xe and the height of the middle coding unit 660b as the height of the current coding unit 650.
- the image decoding apparatus 100 may determine the width or height of the right coding unit 660c using the width or height of the current coding unit 650 and the widths and heights of the left coding unit 660a and the middle coding unit 660b. The image decoding apparatus 100 may determine the coding unit whose size differs from that of the other coding units based on the determined widths and heights of the coding units 660a, 660b, and 660c.
- referring to FIG. 6, the image decoding apparatus 100 may determine the coding unit 660b, whose size differs from that of the left coding unit 660a and the right coding unit 660c, as the coding unit at the predetermined position.
- however, this process is likewise merely one embodiment of determining the coding unit at a predetermined position using the sizes of coding units determined based on sample coordinates, and various processes of comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
- the position of the sample to be considered for determining the position of the coding unit should not be interpreted as being limited to the left upper end, and information about the position of any sample included in the coding unit can be interpreted as being available.
- the image decoding apparatus 100 can select a coding unit at a predetermined position among the odd number of coding units determined by dividing the current coding unit considering the type of the current coding unit. For example, if the current coding unit is a non-square shape having a width greater than the height, the image decoding apparatus 100 can determine a coding unit at a predetermined position along the horizontal direction. That is, the image decoding apparatus 100 may determine one of the encoding units which are located in the horizontal direction and limit the encoding unit. If the current coding unit is a non-square shape having a height greater than the width, the image decoding apparatus 100 can determine a coding unit at a predetermined position in the vertical direction. That is, the image decoding apparatus 100 may determine one of the encoding units having different positions in the vertical direction and set a restriction on the encoding unit.
- the image decoding apparatus 100 may use information indicating positions of even-numbered encoding units in order to determine an encoding unit at a predetermined position among the even-numbered encoding units.
- the image decoding apparatus 100 can determine an even number of encoding units by dividing the current encoding unit (binary division) and determine a predetermined encoding unit using information on the positions of the even number of encoding units. A concrete procedure for this is omitted because it may be a process corresponding to a process of determining a coding unit of a predetermined position (e.g., the middle position) among the odd number of coding units described with reference to FIG.
- in order to determine the coding unit located in the middle among the coding units into which the current coding unit is split, the image decoding apparatus 100 may use at least one of the block type information and the split mode information stored in a sample included in the middle coding unit.
- referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 into the plurality of coding units 620a, 620b, and 620c based on the split mode information and determine the coding unit 620b located in the middle among them. Furthermore, the image decoding apparatus 100 can determine the coding unit 620b located in the middle in consideration of the position from which the split mode information was obtained.
- that is, the split mode information of the current coding unit 600 can be obtained from the sample 640 located in the middle of the current coding unit 600, and when the current coding unit 600 is split into the plurality of coding units 620a, 620b, and 620c based on that split mode information, the coding unit 620b including the sample 640 may be determined as the coding unit located in the middle.
- the information used for determining the coding unit located in the middle should not be limited to the division type mode information, and various kinds of information can be used in the process of determining the coding unit located in the middle.
- predetermined information for identifying a coding unit at a predetermined position may be obtained from a predetermined sample included in a coding unit to be determined.
- referring to FIG. 6, the image decoding apparatus 100 may use the split mode information obtained from a sample at a predetermined position in the current coding unit 600 (for example, the sample located in the middle of the current coding unit 600) in order to determine a coding unit at a predetermined position among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600.
- that is, the image decoding apparatus 100 can determine the sample at the predetermined position in consideration of the block shape of the current coding unit 600, and can determine, among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600, the coding unit 620b including the sample from which predetermined information (for example, the split mode information) can be obtained, and place a predetermined restriction on it.
- referring to FIG. 6, the image decoding apparatus 100 may determine the sample 640 located in the center of the current coding unit 600 as the sample from which the predetermined information can be obtained, and may place a predetermined restriction on the coding unit 620b including the sample 640 in the decoding process.
- the position of the sample from which the predetermined information can be obtained should not be construed to be limited to the above-mentioned position, but may be interpreted as samples at arbitrary positions included in the encoding unit 620b to be determined for limiting.
- the position of a sample from which predetermined information can be obtained may be determined according to the type of the current encoding unit 600.
- the block type information can determine whether the current encoding unit is a square or a non-square, and determine the position of a sample from which predetermined information can be obtained according to the shape.
- for example, the image decoding apparatus 100 may use at least one of the information on the width and the information on the height of the current coding unit to determine a sample located on a boundary that divides at least one of the width and the height of the current coding unit in half as the sample from which predetermined information can be obtained.
- as another example, when the block type information of the current coding unit indicates a non-square shape, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that divides the longer side of the current coding unit in half as the sample from which predetermined information can be obtained.
- the image decoding apparatus 100 may use the division mode information to determine a predetermined unit of the plurality of encoding units.
- the image decoding apparatus 100 may obtain the split mode information from a sample at a predetermined position included in a coding unit, and may split the plurality of coding units generated by splitting the current coding unit using the split mode information obtained from the sample at the predetermined position included in each of them.
- that is, the coding units can be recursively split using the split mode information obtained from the sample at the predetermined position included in each coding unit. Since the recursive splitting process of coding units has been described with reference to FIG. 5, a detailed description thereof will be omitted.
- the image decoding apparatus 100 can determine at least one coding unit by splitting the current coding unit, and can determine the order in which the at least one coding unit is decoded based on a predetermined block (for example, the current coding unit).
- FIG. 7 illustrates a sequence in which a plurality of coding units are processed when the image decoding apparatus 100 determines a plurality of coding units by dividing the current coding unit according to an embodiment.
- referring to FIG. 7, according to the split mode information, the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, determine the second coding units 730a and 730b by splitting the first coding unit 700 in the horizontal direction, or determine the second coding units 750a, 750b, 750c, and 750d by splitting the first coding unit 700 in both the vertical and horizontal directions.
- the image decoding apparatus 100 may determine the order in which the second coding units 710a and 710b, determined by splitting the first coding unit 700 in the vertical direction, are processed to be the horizontal direction 710c.
- the image decoding apparatus 100 may determine the processing order of the second coding units 730a and 730b, determined by splitting the first coding unit 700 in the horizontal direction, to be the vertical direction 730c.
- the image decoding apparatus 100 may determine the processing order of the second coding units 750a, 750b, 750c, and 750d, determined by splitting the first coding unit 700 in both the vertical and horizontal directions, to be a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (for example, a raster scan order or the z-scan order 750e).
- the image decoding apparatus 100 may recursively split coding units. Referring to FIG. 7, the image decoding apparatus 100 may determine the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d by splitting the first coding unit 700, and may recursively split each of the determined coding units.
- the method of dividing the plurality of encoding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be a method corresponding to the method of dividing the first encoding unit 700.
- the plurality of encoding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be independently divided into a plurality of encoding units.
- the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, and may decide independently for each of the second coding units 710a and 710b whether or not to split it further.
- the image decoding apparatus 100 may split the left second coding unit 710a in the horizontal direction into the third coding units 720a and 720b, and may not split the right second coding unit 710b.
- the processing order of the encoding units may be determined based on the division process of the encoding units.
- the processing order of the divided coding units can be determined based on the processing order of the coding units immediately before being divided.
- the image decoding apparatus 100 can determine the order in which the third encoding units 720a and 720b determined by dividing the second encoding unit 710a on the left side are processed independently of the second encoding unit 710b on the right side.
- the third encoding units 720a and 720b may be processed in the vertical direction 720c because the second encoding units 710a on the left side are divided in the horizontal direction and the third encoding units 720a and 720b are determined.
- since the order in which the left second coding unit 710a and the right second coding unit 710b are processed corresponds to the horizontal direction 710c, the right second coding unit 710b can be processed after the third coding units 720a and 720b included in the left second coding unit 710a are processed in the vertical direction 720c.
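- A minimal sketch of this recursive processing order, in which the children of a split coding unit are fully processed in their own split order before the next sibling of their parent, is shown below. The tree structure and function names are illustrative assumptions.

```cpp
#include <vector>

// Recursive traversal matching the order described for FIG. 7: children of a
// split coding unit are visited in their split order before the next sibling
// of the parent is visited (e.g. 720a, 720b, then 710b).
struct CodingUnit {
    int id;                              // identifier of the unit (here just an integer)
    std::vector<CodingUnit> children;    // empty if the unit is not further split
};

void process(const CodingUnit& cu, std::vector<int>& order) {
    if (cu.children.empty()) { order.push_back(cu.id); return; }
    for (const CodingUnit& child : cu.children)   // children kept in split order
        process(child, order);
}
```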
- the above description is intended to explain the process by which the processing order is determined according to the coding units before splitting; therefore, it should not be construed as being limited to the above-described embodiments, and should be construed as including various methods by which coding units determined by splitting into various forms can be processed independently in a predetermined order.
- FIG. 8 illustrates a process of determining that the current encoding unit is divided into odd number of encoding units when the image decoding apparatus 100 can not process the encoding units in a predetermined order according to an embodiment.
- the image decoding apparatus 100 may determine that the current encoding unit is divided into odd number of encoding units based on the obtained division mode mode information.
- referring to FIG. 8, the square first coding unit 800 may be split into the non-square second coding units 810a and 810b, and the second coding units 810a and 810b may be independently split into the third coding units 820a, 820b, 820c, 820d, and 820e.
- the image decoding apparatus 100 can determine the plurality of third coding units 820a and 820b by splitting the left second coding unit 810a in the horizontal direction, and can split the right second coding unit 810b into the odd number of third coding units 820c, 820d, and 820e.
- the image decoding apparatus 100 can determine whether the third coding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order, and thereby determine whether a coding unit split into an odd number exists. Referring to FIG. 8, the image decoding apparatus 100 may recursively split the first coding unit 800 to determine the third coding units 820a, 820b, 820c, 820d, and 820e.
- the image decoding apparatus 100 may determine whether the first coding unit 800, the second coding units 810a and 810b, or the third coding units 820a, 820b, 820c, 820d, and 820e are split into an odd number of coding units among the split forms. For example, the coding unit located on the right among the second coding units 810a and 810b may be split into the odd number of third coding units 820c, 820d, and 820e.
- the order in which the plurality of coding units included in the first coding unit 800 are processed may be a predetermined order (for example, the z-scan order 830), and the image decoding apparatus 100 can determine whether the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into an odd number, satisfy the condition for being processed according to that predetermined order.
- the image decoding apparatus 100 determines whether the third coding units 820a, 820b, 820c, 820d, and 820e included in the first coding unit 800 satisfy the condition for being processed in the predetermined order, where the condition concerns whether at least one of the width and the height of the second coding units 810a and 810b is divided in half by the boundaries of the third coding units 820a, 820b, 820c, 820d, and 820e.
- for example, the third coding units 820a and 820b, determined by dividing the height of the non-square left second coding unit 810a in half, can satisfy the condition.
- however, since the boundaries of the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into three coding units, do not divide the width or height of the right second coding unit 810b in half, the third coding units 820c, 820d, and 820e may be determined as not satisfying the condition.
- when the condition is not satisfied in this way, the image decoding apparatus 100 may determine that the scan order is disconnected and, based on this determination, determine that the right second coding unit 810b is split into an odd number of coding units (a sketch of this condition check follows).
- the image decoding apparatus 100 may limit a coding unit of a predetermined position among the divided coding units when the coding unit is divided into odd number of coding units. Since the embodiment has been described above, a detailed description thereof will be omitted.
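- Below is a minimal sketch of the condition described for FIG. 8: every internal boundary of the sub-units must divide the width or the height of their parent in half, otherwise an odd (ternary) split that breaks the predetermined scan order is assumed. The rectangle type and the exact check are an illustrative reading of the description.

```cpp
#include <vector>

// Each sub-unit edge must either coincide with an edge of the parent or lie on
// the line that halves the parent's width or height; otherwise the condition
// for processing in the predetermined order is not satisfied.
struct Rect { int x, y, w, h; };

bool boundariesHalveParent(const Rect& parent, const std::vector<Rect>& subs) {
    int midX = parent.x + parent.w / 2;
    int midY = parent.y + parent.h / 2;
    for (const Rect& s : subs) {
        if (s.x != parent.x && s.x != midX)                        return false;
        if (s.x + s.w != parent.x + parent.w && s.x + s.w != midX) return false;
        if (s.y != parent.y && s.y != midY)                        return false;
        if (s.y + s.h != parent.y + parent.h && s.y + s.h != midY) return false;
    }
    return true;   // condition satisfied: the units can be processed in the predetermined order
}
```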
- FIG. 9 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a first encoding unit 900 according to an embodiment.
- the image decoding apparatus 100 may divide the first encoding unit 900 based on the division type mode information acquired through a receiving unit (not shown).
- the first coding unit 900 in the form of a square may be divided into four coding units having a square form, or may be divided into a plurality of non-square coding units.
- referring to FIG. 9, when the split mode information indicates splitting into non-square coding units, the image decoding apparatus 100 may split the first coding unit 900 into a plurality of non-square coding units.
- specifically, when the split mode information indicates that the first coding unit 900 is split into an odd number of coding units, the image decoding apparatus 100 may split the square first coding unit 900 into the second coding units 910a, 910b, and 910c split in the vertical direction, or the second coding units 920a, 920b, and 920c split in the horizontal direction.
- the image decoding apparatus 100 may determine whether the second coding units 910a, 910b, 910c, 920a, 920b, and 920c included in the first coding unit 900 satisfy the condition for being processed in a predetermined order, where the condition concerns whether at least one of the width and the height of the first coding unit 900 is divided in half by the boundaries of the second coding units 910a, 910b, 910c, 920a, 920b, and 920c.
- referring to FIG. 9, since the boundaries of the second coding units 910a, 910b, and 910c, determined by splitting the square first coding unit 900 in the vertical direction, do not divide the width of the first coding unit 900 in half, the first coding unit 900 can be determined as not satisfying the condition for being processed in a predetermined order.
- likewise, since the boundaries of the second coding units 920a, 920b, and 920c, determined by splitting the first coding unit 900 in the horizontal direction, do not divide the height of the first coding unit 900 in half, the first coding unit 900 can be determined as not satisfying the condition for being processed in a predetermined order.
- when the condition is not satisfied in this way, the image decoding apparatus 100 may determine that the scan order is disconnected and, based on this determination, determine that the first coding unit 900 is split into an odd number of coding units. According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on a coding unit at a predetermined position among the split coding units; since such restrictions have been described above in various embodiments, a detailed description thereof will be omitted.
- the image decoding apparatus 100 may determine the encoding units of various types by dividing the first encoding unit.
- the image decoding apparatus 100 may divide a first coding unit 900 in a square form and a first coding unit 930 or 950 in a non-square form into various types of coding units .
- the image decoding apparatus 100 may split the square first coding unit 1000 into the non-square second coding units 1010a, 1010b, 1020a, and 1020b based on the split mode information obtained through a receiver (not shown).
- the second encoding units 1010a, 1010b, 1020a, and 1020b may be independently divided. Accordingly, the image decoding apparatus 100 can determine whether to divide or not divide the image into a plurality of encoding units based on the division type mode information associated with each of the second encoding units 1010a, 1010b, 1020a, and 1020b.
- the image decoding apparatus 100 may split the non-square left second coding unit 1010a, determined by splitting the first coding unit 1000 in the vertical direction, in the horizontal direction to determine the third coding units 1012a and 1012b.
- in this case, the image decoding apparatus 100 may restrict the right second coding unit 1010b so that it cannot be split in the horizontal direction, that is, in the same direction in which the left second coding unit 1010a was split.
- if the right second coding unit 1010b were split in the same direction, the left second coding unit 1010a and the right second coding unit 1010b would each be split in the horizontal direction independently, and the third coding units 1012a, 1012b, 1014a, and 1014b would be determined.
- however, this is the same result as the image decoding apparatus 100 splitting the first coding unit 1000 into the four square second coding units 1030a, 1030b, 1030c, and 1030d based on the split mode information, which may be inefficient in terms of image decoding.
- the image decoding apparatus 100 may split the non-square second coding unit 1020a or 1020b, determined by splitting the first coding unit 1000 in the horizontal direction, in the vertical direction to determine the third coding units 1022a, 1022b, 1024a, and 1024b.
- however, when one of the second coding units (for example, the upper second coding unit 1020a) is split in the vertical direction, the image decoding apparatus 100 may restrict the other second coding unit (for example, the lower second coding unit 1020b) so that it cannot be split in the vertical direction, that is, in the same direction.
- FIG. 11 illustrates a process in which the image decoding apparatus 100 splits a square coding unit when the split mode information cannot indicate splitting into four square coding units, according to an embodiment.
- the image decoding apparatus 100 may determine the second encoding units 1110a, 1110b, 1120a, and 1120b by dividing the first encoding unit 1100 based on the division type mode information.
- the division type mode information may include information on various types in which an encoding unit can be divided, but information on various types may not include information for division into four square units of encoding units. According to the division type mode information, the image decoding apparatus 100 can not divide the first encoding unit 1100 in the square form into the second encoding units 1130a, 1130b, 1130c, and 1130d in the four square form.
- the image decoding apparatus 100 may determine the non-square second encoding units 1110a, 1110b, 1120a, and 1120b based on the split mode information.
- the image decoding apparatus 100 may independently divide the non-square second encoding units 1110a, 1110b, 1120a, and 1120b, respectively.
- each of the second coding units 1110a, 1110b, 1120a, 1120b, etc. may be split in a predetermined order through a recursive method, which may be a splitting method corresponding to the method by which the first coding unit 1100 is split based on the split mode information.
- the image decoding apparatus 100 can determine the third encoding units 1112a and 1112b in the form of a square by dividing the left second encoding unit 1110a in the horizontal direction and the right second encoding unit 1110b It is possible to determine the third encoding units 1114a and 1114b in the form of a square by being divided in the horizontal direction. Furthermore, the image decoding apparatus 100 may divide the left second encoding unit 1110a and the right second encoding unit 1110b in the horizontal direction to determine the third encoding units 1116a, 1116b, 1116c, and 1116d in the form of a square have. In this case, the encoding unit can be determined in the same manner as the first encoding unit 1100 is divided into the four second square encoding units 1130a, 1130b, 1130c, and 1130d.
- Likewise, the image decoding apparatus 100 may determine the square third encoding units 1122a and 1122b by dividing the upper second encoding unit 1120a in the vertical direction, and may determine the square third encoding units 1124a and 1124b by dividing the lower second encoding unit 1120b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine the square third encoding units 1126a, 1126b, 1126c, and 1126d by dividing both the upper second encoding unit 1120a and the lower second encoding unit 1120b in the vertical direction. In this case, encoding units are determined in the same manner as when the first encoding unit 1100 is divided into the four square second encoding units 1130a, 1130b, 1130c, and 1130d.
- FIG. 12 illustrates that the processing order among a plurality of coding units may be changed according to a division process of a coding unit according to an exemplary embodiment.
- The image decoding apparatus 100 may divide the first encoding unit 1200 based on the split mode information. When the block shape is square and the split mode information indicates that the first encoding unit 1200 is divided in at least one of the horizontal direction and the vertical direction, the image decoding apparatus 100 may divide the first encoding unit 1200 to determine second encoding units (for example, 1210a, 1210b, 1220a, 1220b, etc.). Referring to FIG. 12, the non-square second encoding units 1210a, 1210b, 1220a, and 1220b, which are determined by dividing the first encoding unit 1200 only in the horizontal direction or only in the vertical direction, may each be divided independently based on the split mode information for each of them.
- For example, the image decoding apparatus 100 may determine the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing, in the horizontal direction, the second encoding units 1210a and 1210b that were generated by dividing the first encoding unit 1200 in the vertical direction, and may determine the third encoding units 1226a, 1226b, 1226c, and 1226d by dividing, in the vertical direction, the second encoding units 1220a and 1220b that were generated by dividing the first encoding unit 1200 in the horizontal direction. Since the process of dividing the second encoding units 1210a, 1210b, 1220a, and 1220b has been described above with reference to FIG. 11, a detailed description thereof is omitted.
- the image decoding apparatus 100 may process an encoding unit in a predetermined order.
- Since the features of processing encoding units in a predetermined order have been described in detail above with reference to FIG. 7, a detailed description thereof is omitted. Referring to FIG. 12, the image decoding apparatus 100 may determine four square third encoding units 1216a, 1216b, 1216c, and 1216d, or 1226a, 1226b, 1226c, and 1226d, by dividing the square first encoding unit 1200.
- According to an embodiment, the image decoding apparatus 100 may determine the processing order of the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d according to the form in which the first encoding unit 1200 was divided.
- According to an embodiment, the image decoding apparatus 100 determines the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing, in the horizontal direction, the second encoding units 1210a and 1210b that were generated by dividing the first encoding unit 1200 in the vertical direction, and the image decoding apparatus 100 may process the third encoding units 1216a, 1216b, 1216c, and 1216d according to an order 1217 of first processing, in the vertical direction, the third encoding units 1216a and 1216c included in the left second encoding unit 1210a, and then processing, in the vertical direction, the third encoding units 1216b and 1216d included in the right second encoding unit 1210b.
- According to an embodiment, the image decoding apparatus 100 determines the third encoding units 1226a, 1226b, 1226c, and 1226d by dividing, in the vertical direction, the second encoding units 1220a and 1220b that were generated by dividing the first encoding unit 1200 in the horizontal direction, and the image decoding apparatus 100 may process the third encoding units 1226a, 1226b, 1226c, and 1226d according to an order 1227 of first processing, in the horizontal direction, the third encoding units 1226a and 1226b included in the upper second encoding unit 1220a, and then processing, in the horizontal direction, the third encoding units 1226c and 1226d included in the lower second encoding unit 1220b.
- Referring to FIG. 12, the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d may be determined by dividing each of the second encoding units 1210a, 1210b, 1220a, and 1220b.
- The second encoding units 1210a and 1210b determined by division in the vertical direction and the second encoding units 1220a and 1220b determined by division in the horizontal direction are divided into different forms; however, according to the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d determined thereafter, the first encoding unit 1200 is ultimately divided into encoding units of the same form. Accordingly, by recursively dividing encoding units through different procedures based on the split mode information, the image decoding apparatus 100 may ultimately determine encoding units of the same form, and the plurality of encoding units determined in the same form may be processed in different orders.
- FIG. 13 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an embodiment.
- the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion.
- a predetermined criterion may be a length of a long side of a coding unit.
- For example, when the length of the long side of the current encoding unit is 1/2^n (n > 0) times the length of the long side of the encoding unit before division, it may be determined that the depth of the current encoding unit is increased by n relative to the depth of the encoding unit before division.
- an encoding unit with an increased depth is expressed as a lower-depth encoding unit.
- Referring to FIG. 13, the image decoding apparatus 100 may divide the square first encoding unit 1300 to determine the second encoding unit 1302, the third encoding unit 1304, and the like of lower depths. If the size of the square first encoding unit 1300 is 2Nx2N, the second encoding unit 1302 determined by dividing the width and height of the first encoding unit 1300 by 1/2 may have a size of NxN.
- the third encoding unit 1304 determined by dividing the width and height of the second encoding unit 1302 by a half size may have a size of N / 2xN / 2.
- In this case, the width and height of the third encoding unit 1304 correspond to 1/4 of the width and height of the first encoding unit 1300. If the depth of the first encoding unit 1300 is D, the depth of the second encoding unit 1302, whose width and height are 1/2 of those of the first encoding unit 1300, may be D+1, and the depth of the third encoding unit 1304, whose width and height are 1/4 of those of the first encoding unit 1300, may be D+2.
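- The depth convention described above can be illustrated with a short sketch; the function name and the assumption that the depth increases by one each time the long side is halved are illustrative and not taken from the figure itself.

```python
# Sketch: depth increases by 1 each time the long side of a coding unit is halved.
# Names and the halving-based formulation are illustrative assumptions.

def depth_increase(parent_long_side: int, child_long_side: int) -> int:
    """Return n such that child_long_side == parent_long_side / 2**n."""
    n = 0
    side = parent_long_side
    while side > child_long_side:
        side //= 2
        n += 1
    return n

N = 16
D = 0                 # depth of the 2Nx2N first encoding unit 1300
first = 2 * N         # long side of the 2Nx2N unit
second = N            # long side of the NxN second encoding unit 1302
third = N // 2        # long side of the N/2 x N/2 third encoding unit 1304

print(D + depth_increase(first, second))  # D + 1, depth of the second encoding unit
print(D + depth_increase(first, third))   # D + 2, depth of the third encoding unit
```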
- According to an embodiment, based on block shape information indicating a non-square shape (for example, the block shape information may indicate '1: NS_VER', a non-square whose height is longer than its width, or '2: NS_HOR', a non-square whose width is longer than its height), the image decoding apparatus 100 may divide the non-square first encoding unit 1310 or 1320 to determine the second encoding unit 1312 or 1322, the third encoding unit 1314 or 1324, and the like of lower depths.
- The image decoding apparatus 100 may determine a second encoding unit (for example, 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the Nx2N first encoding unit 1310. That is, the image decoding apparatus 100 may determine the NxN second encoding unit 1302 or the NxN/2 second encoding unit 1322 by dividing the first encoding unit 1310 in the horizontal direction, and may determine the N/2xN second encoding unit 1312 by dividing it in the horizontal direction and the vertical direction.
- According to an embodiment, the image decoding apparatus 100 may determine a second encoding unit (for example, 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the 2NxN first encoding unit 1320. That is, the image decoding apparatus 100 may determine the NxN second encoding unit 1302 or the N/2xN second encoding unit 1312 by dividing the first encoding unit 1320 in the vertical direction, and may determine the NxN/2 second encoding unit 1322 by dividing it in the horizontal direction and the vertical direction.
- According to an embodiment, the image decoding apparatus 100 may determine a third encoding unit (for example, 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the NxN second encoding unit 1302. That is, the image decoding apparatus 100 may determine the N/2xN/2 third encoding unit 1304 by dividing the second encoding unit 1302 in the vertical direction and the horizontal direction, or may determine the N/4xN/2 third encoding unit 1314 or the N/2xN/4 third encoding unit 1324.
- According to an embodiment, the image decoding apparatus 100 may determine a third encoding unit (for example, 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the N/2xN second encoding unit 1312. That is, the image decoding apparatus 100 may determine the N/2xN/2 third encoding unit 1304 or the N/2xN/4 third encoding unit 1324 by dividing the second encoding unit 1312 in the horizontal direction, or may determine the N/4xN/2 third encoding unit 1314 by dividing it in the vertical direction and the horizontal direction.
- According to an embodiment, the image decoding apparatus 100 may determine a third encoding unit (for example, 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the NxN/2 second encoding unit 1322. That is, the image decoding apparatus 100 may determine the N/2xN/2 third encoding unit 1304 or the N/4xN/2 third encoding unit 1314 by dividing the second encoding unit 1322 in the vertical direction, or may determine the N/2xN/4 third encoding unit 1324 by dividing it in the vertical direction and the horizontal direction.
- According to an embodiment, the image decoding apparatus 100 may divide a square encoding unit (for example, 1300, 1302, 1304) in the horizontal direction or the vertical direction.
- For example, the 2Nx2N first encoding unit 1300 may be divided in the vertical direction to determine the Nx2N first encoding unit 1310, or divided in the horizontal direction to determine the 2NxN first encoding unit 1320.
- According to an embodiment, when the depth is determined based on the length of the longest side of an encoding unit, the depth of an encoding unit determined by dividing the 2Nx2N first encoding unit 1300 in the horizontal direction or the vertical direction may be the same as the depth of the first encoding unit 1300.
- the width and height of the third encoding unit 1314 or 1324 may correspond to one fourth of the first encoding unit 1310 or 1320.
- the depth of the first coding unit 1310 or 1320 is D
- the depth of the second encoding unit 1312 or 1322, whose width and height are half those of the first encoding unit 1310 or 1320, may be D + 1, and
- the depth of the third encoding unit 1314 or 1324, which is one fourth of the width and height of the first encoding unit 1310 or 1320 may be D + 2.
- FIG. 14 illustrates depths that may be determined according to the shapes and sizes of encoding units, and an index (hereinafter, a PID) for distinguishing the encoding units, according to an embodiment.
- According to an embodiment, the image decoding apparatus 100 may determine second encoding units of various forms by dividing the square first encoding unit 1400. Referring to FIG. 14, the image decoding apparatus 100 may determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d by dividing the first encoding unit 1400 in at least one of the vertical direction and the horizontal direction according to the split mode information. That is, the image decoding apparatus 100 may determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d based on the split mode information for the first encoding unit 1400.
- The depths of the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d, which are determined according to the split mode information for the square first encoding unit 1400, may be determined based on the length of their long sides. For example, since the length of one side of the square first encoding unit 1400 is the same as the length of the long sides of the non-square second encoding units 1402a, 1402b, 1404a, and 1404b, the depths of the first encoding unit 1400 and of the non-square second encoding units 1402a, 1402b, 1404a, and 1404b may be regarded as the same depth D.
- In contrast, when the image decoding apparatus 100 divides the first encoding unit 1400 into the four square second encoding units 1406a, 1406b, 1406c, and 1406d based on the split mode information, the length of one side of the second encoding units 1406a, 1406b, 1406c, and 1406d is half the length of one side of the first encoding unit 1400, so the depth of the second encoding units 1406a, 1406b, 1406c, and 1406d may be D + 1, which is one depth lower than the depth D of the first encoding unit 1400.
- According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1410, whose height is longer than its width, in the horizontal direction according to the split mode information to determine a plurality of second encoding units 1412a and 1412b, or 1414a, 1414b, and 1414c.
- According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1420, whose width is longer than its height, in the vertical direction according to the split mode information to determine a plurality of second encoding units 1422a and 1422b, or 1424a, 1424b, and 1424c.
- According to an embodiment, the depths of the second encoding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c may be determined based on the length of their long sides. For example, since the length of one side of the square second encoding units 1412a and 1412b is 1/2 times the length of one side of the non-square first encoding unit 1410 whose height is longer than its width, the depth of the square second encoding units 1412a and 1412b is D + 1, one depth lower than the depth D of the non-square first encoding unit 1410.
- the image decoding apparatus 100 may divide the non-square first encoding unit 1410 into odd second encoding units 1414a, 1414b, and 1414c based on the division type mode information.
- the odd number of second encoding units 1414a, 1414b and 1414c may include non-square second encoding units 1414a and 1414c and a square second encoding unit 1414b.
- In this case, the length of the long sides of the non-square second encoding units 1414a and 1414c and the length of one side of the square second encoding unit 1414b are 1/2 times the length of one side of the first encoding unit 1410, so the depth of the second encoding units 1414a, 1414b, and 1414c may be D + 1, one depth lower than the depth D of the first encoding unit 1410.
- The image decoding apparatus 100 may determine the depths of the encoding units associated with the first encoding unit 1420, whose width is longer than its height, in a manner corresponding to the scheme described above for determining the depths of the encoding units associated with the first encoding unit 1410.
- According to an embodiment, in determining an index (PID) for distinguishing the divided encoding units, when the odd-numbered encoding units are not all of the same size, the image decoding apparatus 100 may determine the index based on the size ratio between the encoding units. Referring to FIG. 14, the encoding unit 1414b positioned at the center among the odd-numbered encoding units 1414a, 1414b, and 1414c has the same width as the other encoding units 1414a and 1414c but a height twice the height of the other encoding units 1414a and 1414c. That is, in this case, the encoding unit 1414b positioned in the middle may include two of the other encoding units 1414a and 1414c.
- Accordingly, the index of the encoding unit 1414c positioned next to the middle encoding unit 1414b may be 3, increased by 2 relative to the preceding index. That is, there may be a discontinuity in the index values.
- the image decoding apparatus 100 may determine whether odd-numbered encoding units are not the same size based on the presence or absence of an index discontinuity for distinguishing between the divided encoding units.
- According to an embodiment, the image decoding apparatus 100 may determine whether the current encoding unit is divided into a specific form based on the values of the index for distinguishing the plurality of encoding units divided from it. Referring to FIG. 14, the image decoding apparatus 100 may divide the first encoding unit 1410, a rectangle whose height is longer than its width, to determine an even number of encoding units 1412a and 1412b or an odd number of encoding units 1414a, 1414b, and 1414c.
- the image decoding apparatus 100 may use an index (PID) indicating each coding unit in order to distinguish each of the plurality of coding units.
- the PID may be obtained at a sample of a predetermined position of each coding unit (e.g., the upper left sample).
- the image decoding apparatus 100 may determine a coding unit of a predetermined position among the coding units determined by using the index for classifying the coding unit.
- Referring to FIG. 14, the image decoding apparatus 100 may divide the first encoding unit 1410 into three encoding units 1414a, 1414b, and 1414c.
- the image decoding apparatus 100 can assign an index to each of the three encoding units 1414a, 1414b, and 1414c.
- the image decoding apparatus 100 may compare the indexes of the respective encoding units in order to determine the middle encoding unit among the encoding units divided into odd numbers.
- The image decoding apparatus 100 may determine the encoding unit 1414b, whose index corresponds to the middle value among the indices of the encoding units, as the encoding unit at the middle position among the encoding units determined by dividing the first encoding unit 1410.
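- As a rough illustration of the index behavior described above (a sketch only; the rule that a unit spanning two minimum-size units advances the PID by two, and the median-based selection of the middle unit, are assumptions consistent with the description rather than a normative procedure):

```python
# Sketch: assign PIDs to an odd split whose middle unit is twice as tall as the others,
# detect the resulting index discontinuity, and pick the middle unit by its median index.

heights = [1, 2, 1]          # relative heights of coding units 1414a, 1414b, 1414c

pids = []
pid = 0
for h in heights:
    pids.append(pid)         # PID taken at the top-left sample of each unit
    pid += h                 # a unit covering two basic units advances the PID by 2

print(pids)                  # [0, 1, 3] -> discontinuity between 1 and 3

discontinuous = any(b - a > 1 for a, b in zip(pids, pids[1:]))
print(discontinuous)         # True -> the odd-numbered units are not all the same size

middle_pid = sorted(pids)[len(pids) // 2]
print(middle_pid)            # 1 -> the coding unit 1414b at the middle position
```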
- According to an embodiment, in determining the indices for distinguishing the divided encoding units, when the encoding units are not all of the same size, the image decoding apparatus 100 may determine the indices based on the size ratio between the encoding units.
- Referring to FIG. 14, the encoding unit 1414b generated by dividing the first encoding unit 1410 may have the same width as the other encoding units 1414a and 1414c but a height twice their height.
- In this case, the image decoding apparatus 100 may determine that the current encoding unit is divided into a plurality of encoding units that include an encoding unit whose size differs from that of the other encoding units.
- According to an embodiment, when the split mode information indicates division into an odd number of encoding units, the image decoding apparatus 100 may divide the current encoding unit into a form in which the encoding unit at a predetermined position (for example, the middle encoding unit) among the odd number of encoding units has a size different from that of the other encoding units.
- the image decoding apparatus 100 may determine an encoding unit having a different size by using an index (PID) for the encoding unit.
- However, the index described above, and the size or position of the encoding unit at the predetermined position to be determined, are specific examples for explaining an embodiment and should not be construed as limiting; various indices and various positions and sizes of encoding units may be used.
- the image decoding apparatus 100 may use a predetermined data unit in which a recursive division of an encoding unit starts.
- FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
- a predetermined data unit may be defined as a unit of data in which an encoding unit begins to be recursively segmented using segmentation mode information. That is, it may correspond to a coding unit of the highest depth used in the process of determining a plurality of coding units for dividing the current picture.
- a predetermined data unit is referred to as a reference data unit for convenience of explanation.
- the reference data unit may represent a predetermined size and shape.
- the reference encoding unit may comprise samples of MxN.
- M and N may be equal to each other, and each may be an integer expressed as a power of 2. That is, the reference data unit may have a square or non-square shape, and may later be divided into an integer number of encoding units.
- According to an embodiment, the image decoding apparatus 100 may divide the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may divide each of the plurality of reference data units into which the current picture is divided, using the split mode information for each reference data unit.
- the segmentation process of the reference data unit may correspond to the segmentation process using a quad-tree structure.
- the image decoding apparatus 100 may determine in advance a minimum size that the reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 100 can determine reference data units of various sizes having a size larger than a minimum size, and can determine at least one encoding unit using the split mode information based on the determined reference data unit .
- the image decoding apparatus 100 may use a square-shaped reference encoding unit 1500 or a non-square-shaped reference encoding unit 1502.
- According to an embodiment, the shape and size of the reference encoding unit may be determined for each of various data units (for example, a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like).
- According to an embodiment, a receiver (not shown) of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information on the shape of the reference encoding unit and information on the size of the reference encoding unit for each of the various data units.
- The process of determining at least one encoding unit included in the square reference encoding unit 1500 has been described above through the process of dividing the current encoding unit 300 of FIG. 3, and the process of determining at least one encoding unit included in the non-square reference encoding unit 1502 has been described above through the process of dividing the current encoding unit 400 or 450 of FIG. 4; a detailed description thereof is therefore omitted.
- According to an embodiment, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference encoding unit. That is, for each data unit satisfying a predetermined condition (for example, a data unit having a size equal to or smaller than a slice) among the various data units (for example, a sequence, a picture, a slice, a slice segment, a maximum encoding unit, etc.), the receiver (not shown) may obtain from the bitstream only the index for identifying the size and shape of the reference encoding unit for each slice, slice segment, maximum encoding unit, and the like.
- the image decoding apparatus 100 can determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index.
- If the information on the shape of the reference encoding unit and the information on the size of the reference encoding unit were obtained from the bitstream and used for each relatively small data unit, the use efficiency of the bitstream might be poor; therefore, instead of directly obtaining the information on the shape and the size of the reference encoding unit, only the index may be obtained and used. In this case, at least one of the size and the shape of the reference encoding unit corresponding to the index indicating the size and shape of the reference encoding unit may be predetermined.
- That is, the image decoding apparatus 100 may select at least one of the size and the shape of the reference encoding unit according to the index, and may thereby determine at least one of the size and the shape of the reference encoding unit included in the data unit that is the basis for obtaining the index.
- the image decoding apparatus 100 may use at least one reference encoding unit included in one maximum encoding unit. That is, the maximum encoding unit for dividing an image may include at least one reference encoding unit, and the encoding unit may be determined through a recursive division process of each reference encoding unit. According to an exemplary embodiment, at least one of the width and the height of the maximum encoding unit may correspond to at least one integer multiple of the width and height of the reference encoding unit. According to an exemplary embodiment, the size of the reference encoding unit may be a size obtained by dividing the maximum encoding unit n times according to a quadtree structure.
- That is, the image decoding apparatus 100 may determine the reference encoding unit by dividing the maximum encoding unit n times according to the quad-tree structure, and may divide the reference encoding unit based on at least one of the block shape information and the split mode information, according to various embodiments.
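- A minimal sketch of this relationship, assuming each quad-tree split halves the width and height of the maximum encoding unit; the function name and the sizes used below are illustrative.

```python
# Sketch: derive the reference encoding unit size by splitting the maximum encoding unit
# n times according to a quad-tree structure (each split halves the width and height).

def reference_cu_size(max_cu_size: int, n: int) -> int:
    size = max_cu_size
    for _ in range(n):
        size //= 2
    return size

print(reference_cu_size(128, 0))  # 128 -> the reference CU equals the maximum CU
print(reference_cu_size(128, 2))  # 32  -> the maximum CU split twice by quad-tree
```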
- FIG. 16 shows a processing block serving as a reference for determining a determination order of a reference encoding unit included in a picture 1600 according to an embodiment.
- the image decoding apparatus 100 may determine at least one processing block for dividing a picture.
- the processing block is a data unit including at least one reference encoding unit for dividing an image, and at least one reference encoding unit included in the processing block may be determined in a specific order. That is, the order of determination of at least one reference encoding unit determined in each processing block may correspond to one of various kinds of order in which the reference encoding unit can be determined, and the reference encoding unit determination order determined in each processing block May be different for each processing block.
- According to an embodiment, the determination order of the reference encoding units determined for each processing block may be one of various orders such as a raster scan, a Z scan, an N scan, an up-right diagonal scan, a horizontal scan, and a vertical scan; however, the determinable order should not be construed as limited to these scan orders.
- the image decoding apparatus 100 may obtain information on the size of the processing block from the bitstream to determine the size of the at least one processing block included in the image.
- the size of such a processing block may be a predetermined size of a data unit represented by information on the size of the processing block.
- a receiver (not shown) of the image decoding apparatus 100 may acquire information on the size of a processing block from a bitstream for each specific data unit.
- Information on the size of a processing block may be obtained from the bitstream in units of data such as an image, a sequence, a picture, a slice, a slice segment, and the like. That is, the receiver (not shown) may obtain the information on the size of the processing block from the bitstream for each of these data units, and the image decoding apparatus 100 may determine the size of at least one processing block for dividing the picture using the obtained information on the size of the processing block; the size of the processing block may be an integer multiple of the size of the reference encoding unit.
- the image decoding apparatus 100 may determine the sizes of the processing blocks 1602 and 1612 included in the picture 1600.
- the video decoding apparatus 100 can determine the size of the processing block based on information on the size of the processing block obtained from the bitstream.
- For example, the image decoding apparatus 100 may determine that the horizontal size of the processing blocks 1602 and 1612 is four times the horizontal size of the reference encoding unit and that their vertical size is four times the vertical size of the reference encoding unit.
- the image decoding apparatus 100 may determine an order in which at least one reference encoding unit is determined in at least one processing block.
- According to an embodiment, the image decoding apparatus 100 may determine each of the processing blocks 1602 and 1612 included in the picture 1600 based on the size of the processing block, and may determine the determination order of at least one reference encoding unit included in the processing blocks 1602 and 1612.
- the determination of the reference encoding unit may include determining the size of the reference encoding unit according to an embodiment.
- According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, information on the determination order of at least one reference encoding unit included in at least one processing block, and may determine the order in which the at least one reference encoding unit is determined based on the obtained information.
- The information on the determination order may be defined as an order or a direction in which reference encoding units are determined within the processing block. That is, the order in which reference encoding units are determined may be determined independently for each processing block.
- the image decoding apparatus 100 may obtain information on a determination order of a reference encoding unit from a bitstream for each specific data unit.
- a receiving unit (not shown) may acquire information on a determination order of a reference encoding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information on the determination order of the reference encoding unit indicates the reference encoding unit determination order in the processing block, the information on the determination order can be obtained for each specific data unit including an integer number of processing blocks.
- the image decoding apparatus 100 may determine at least one reference encoding unit based on the determined order according to an embodiment.
- According to an embodiment, the receiver (not shown) may obtain, from the bitstream, information on the reference encoding unit determination order as information related to the processing blocks 1602 and 1612, and the image decoding apparatus 100 may determine the order of determining the at least one reference encoding unit included in the processing blocks 1602 and 1612 based on the obtained information and may determine at least one reference encoding unit included in the picture 1600 according to the determined order.
- the image decoding apparatus 100 may determine a determination order 1604 and 1614 of at least one reference encoding unit associated with each of the processing blocks 1602 and 1612.
- the reference encoding unit determination order associated with each processing block 1602, 1612 may be different for each processing block. If the reference encoding unit determination order 1604 related to the processing block 1602 is a raster scan order, the reference encoding unit included in the processing block 1602 can be determined according to the raster scan order. On the other hand, when the reference encoding unit determination order 1614 related to the other processing block 1612 is a reverse order of the raster scan order, the reference encoding unit included in the processing block 1612 can be determined according to the reverse order of the raster scan order.
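- The per-processing-block determination order can be sketched as follows; the block dimensions and the two order names are illustrative assumptions, not values taken from any bitstream syntax.

```python
# Sketch: enumerate reference encoding units inside a processing block in raster order
# (as assumed for processing block 1602) or in the reverse of raster order
# (as assumed for processing block 1612).

def reference_cu_order(cols: int, rows: int, order: str):
    raster = [(r, c) for r in range(rows) for c in range(cols)]
    if order == "raster":
        return raster
    if order == "reverse_raster":
        return list(reversed(raster))
    raise ValueError("unsupported order")

print(reference_cu_order(4, 2, "raster"))          # top-left to bottom-right
print(reference_cu_order(4, 2, "reverse_raster"))  # bottom-right to top-left
```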
- the image decoding apparatus 100 may decode the determined at least one reference encoding unit according to an embodiment.
- the image decoding apparatus 100 can decode an image based on the reference encoding unit determined through the above-described embodiment.
- the method of decoding the reference encoding unit may include various methods of decoding the image.
- According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, block shape information indicating the shape of the current encoding unit or split mode information indicating a method of dividing the current encoding unit.
- the split mode information may be included in a bitstream associated with various data units.
- For example, the image decoding apparatus 100 may use the split mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header.
- Furthermore, the image decoding apparatus 100 may obtain, from the bitstream, a syntax element corresponding to the block shape information or the split mode information for each maximum encoding unit, reference encoding unit, and processing block, and may use the obtained syntax element.
- the image decoding apparatus 100 can determine the division rule of the image.
- the division rule may be predetermined between the video decoding apparatus 100 and the video encoding apparatus 150.
- the image decoding apparatus 100 can determine the division rule of the image based on the information obtained from the bit stream.
- The image decoding apparatus 100 may determine the division rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, and a slice segment header.
- the video decoding apparatus 100 may determine the division rule differently according to a frame, a slice, a temporal layer, a maximum encoding unit, or an encoding unit.
- the image decoding apparatus 100 can determine the division rule based on the block type of the encoding unit.
- The block shape may include the size, shape, ratio of width to height, and direction of the encoding unit.
- the image encoding apparatus 150 and the image decoding apparatus 100 may determine in advance that the division rule is determined based on the block type of the encoding unit.
- the present invention is not limited thereto.
- the image decoding apparatus 100 can determine the segmentation rule based on the information obtained from the bit stream received from the image encoding apparatus 150.
- The shape of the encoding unit may include square and non-square. If the lengths of the width and height of the encoding unit are the same, the image decoding apparatus 100 may determine the shape of the encoding unit to be square; if the lengths of the width and height of the encoding unit are not the same, the image decoding apparatus 100 may determine the shape of the encoding unit to be non-square.
- the size of the encoding unit may include various sizes of 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ..., 256x256.
- The size of the encoding unit may be classified according to the length of the longer side of the encoding unit, the length of the shorter side, and the like.
- The image decoding apparatus 100 may apply the same division rule to encoding units classified into the same group. For example, the image decoding apparatus 100 may classify encoding units having the same long-side length as having the same size, and may apply the same division rule to encoding units having the same long-side length.
- the ratio of the width and height of the encoding unit may include 1: 2, 2: 1, 1: 4, 4: 1, 1: 8, 8: 1, 1:16 or 16: 1.
- the direction of the encoding unit may include a horizontal direction and a vertical direction.
- the horizontal direction may indicate the case where the length of the width of the encoding unit is longer than the length of the height.
- the vertical direction can indicate the case where the width of the encoding unit is shorter than the length of the height.
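- A small sketch of how the shape attributes used by the division rule (square or non-square, width-to-height ratio, and direction) could be derived from a block's width and height; the attribute names are illustrative assumptions.

```python
# Sketch: derive the block-shape attributes (square or non-square, width:height ratio,
# horizontal or vertical direction) that the division rule may depend on.
from fractions import Fraction

def block_shape(width: int, height: int):
    shape = "SQUARE" if width == height else "NON_SQUARE"
    ratio = Fraction(width, height)            # e.g. 1:2, 2:1, 1:4, 4:1, ...
    if width > height:
        direction = "HORIZONTAL"               # width longer than height
    elif width < height:
        direction = "VERTICAL"                 # width shorter than height
    else:
        direction = "NONE"
    return shape, ratio, direction

print(block_shape(8, 8))    # ('SQUARE', Fraction(1, 1), 'NONE')
print(block_shape(16, 4))   # ('NON_SQUARE', Fraction(4, 1), 'HORIZONTAL')
print(block_shape(4, 8))    # ('NON_SQUARE', Fraction(1, 2), 'VERTICAL')
```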
- the image decoding apparatus 100 may adaptively determine the segmentation rule based on the size of the encoding unit.
- the image decoding apparatus 100 may determine the allowable division mode differently based on the size of the encoding unit. For example, the image decoding apparatus 100 can determine whether division is allowed based on the size of an encoding unit.
- the image decoding apparatus 100 can determine the dividing direction according to the size of the coding unit.
- the video decoding apparatus 100 can determine an allowable division type according to the size of a coding unit.
- Determination of the division rule based on the size of the encoding unit may be a predetermined division rule between the image encoding device 150 and the image decoding device 100.
- the video decoding apparatus 100 can determine the division rule based on the information obtained from the bit stream.
- the image decoding apparatus 100 can adaptively determine the division rule based on the position of the encoding unit.
- the image decoding apparatus 100 may adaptively determine the segmentation rule based on the position occupied by the encoding unit in the image.
- the image decoding apparatus 100 can determine the division rule so that the encoding units generated by different division paths do not have the same block form.
- the present invention is not limited thereto, and coding units generated by different division paths may have the same block form.
- the coding units generated by different division paths may have different decoding processing orders. Since the decoding procedure has been described with reference to FIG. 12, a detailed description thereof will be omitted.
- Hereinafter, with reference to FIGS. 17 to 29, a method and apparatus for image encoding/decoding that determine the reference samples to be filtered and the filter weights, and that perform adaptive intra prediction based on the filtered reference samples and the filter weights, will be described in detail.
- 17 is a view for explaining intra prediction modes according to an embodiment.
- intra prediction modes may include a planar mode (mode 0) and a DC mode (mode 1).
- In addition, the intra prediction modes may include angular modes (the 2nd to 66th modes) having prediction directions.
- the angular mode may include a diagonal mode (No. 2 mode or No. 66 mode), a horizontal mode (No. 18 mode), and a vertical mode (No. 50 mode).
- However, the present invention is not limited thereto; various intra prediction modes may be provided by adding a new intra prediction mode or replacing an existing intra prediction mode, and those skilled in the art will readily understand that the mode number of each intra prediction mode may vary from case to case.
- FIG. 18 is a diagram for explaining a method of generating reconstructed reference samples using original reference samples, according to an embodiment of the present disclosure.
- The image decoding apparatus 100 may generate reconstructed reference samples 1820 using the original reference samples 1810 in order to perform intra prediction on the current block 1800.
- The image decoding apparatus 100 may generate a reconstructed reference sample 1835 corresponding to the position of the original reference sample 1830, using the original reference sample 1830.
- The image decoding apparatus 100 may generate a reconstructed reference sample 1845 corresponding to the position of the original reference sample 1840, using the original reference sample 1830 and the original reference sample 1840.
- The image decoding apparatus 100 may generate a reconstructed reference sample 1855 at the position corresponding to the original reference sample 1850 in a similar manner.
- For example, the reconstructed reference sample 1855 may be generated using the original reference sample 1850 and the original reference samples located to the left of the original reference sample 1850 among the original reference samples 1810. That is, the image decoding apparatus 100 may generate the reconstructed reference sample a'_n based on the following Equation (1).
- Here, a'_n may refer to the reconstructed reference sample that is n samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_i may refer to the original reference sample that is i samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block.
- W_i may refer to the filter weight applied to the sample a_i.
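- The body of Equation (1) is not reproduced in this text. From the definitions of a'_n, a_i, and W_i above, a weighted-sum form of the following kind is a plausible reading; the summation range and the normalization of the weights are assumptions:

```latex
a'_n = \sum_{i} W_i \, a_i , \qquad \sum_{i} W_i = 1 \ \text{(normalization assumed)}
```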
- As described above, a reconstructed reference sample may be generated by performing filtering on at least one original reference sample.
- For example, the reconstructed reference samples may be generated by performing filtering using a filter having the same weights and the same number of filter taps for all samples.
- For example, the reconstructed reference samples may be generated by performing filtering using the [1, 4] filter.
- As another example, the image decoding apparatus 100 may adaptively determine the filter weights and the number of filter taps according to the position of the original reference sample or the intra prediction mode of the current block, and may generate the reconstructed reference samples by performing filtering based on the determined filter weights and number of filter taps.
- As another example, the reconstructed reference samples may be generated by adaptively determining the filter weights and the number of filter taps according to the size of the current block and performing filtering based on the determined filter weights and number of filter taps.
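- A minimal sketch of this filtering step, assuming normalized fixed weights applied symmetrically around each position of the upper adjacent line with clamping at the line ends; the weights, tap count, and function name are placeholders for the adaptively chosen values described above.

```python
# Sketch: build filtered ("reconstructed") reference samples from the original reference
# samples of the upper adjacent line. The weights below are placeholders; the text
# describes choosing the weights and tap count adaptively (position, intra mode, block size).

def filter_reference_samples(orig, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights are assumed to be normalized"
    half = len(weights) // 2
    filtered = []
    for n in range(len(orig)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = min(max(n + k - half, 0), len(orig) - 1)   # clamp at the line ends
            acc += w * orig[idx]
        filtered.append(acc)
    return filtered

orig_refs = [100, 102, 110, 130, 131, 129]
print(filter_reference_samples(orig_refs, [0.25, 0.5, 0.25]))
```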
- FIGS. 19A and 19B are diagrams for explaining a method of generating reconstructed reference samples using original reference samples according to the prediction direction of the intra prediction mode of the current block, according to an embodiment of the present disclosure.
- When the direction of the intra prediction mode of the current block is the prediction direction 1905, the image decoding apparatus 100 may determine the original reference samples 1925 of the upper adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1920 of the upper adjacent line, based on the x-axis direction 1930 of the prediction direction 1905. For example, when the direction of the intra prediction mode of the current block is the prediction direction 1905, the image decoding apparatus 100 may generate the reconstructed reference sample a'_j according to the following Equation (2).
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_i may refer to the original reference sample that is i samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block.
- W_i may refer to the filter weight applied to the sample a_i.
- When the direction of the intra prediction mode of the current block is the prediction direction 1910, the image decoding apparatus 100 may determine the original reference samples 1925 of the upper adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1920 of the upper adjacent line, based on the x-axis direction 1935 of the prediction direction 1910. If the direction of the intra prediction mode of the current block is the prediction direction 1910, the image decoding apparatus 100 may generate the reconstructed reference sample a'_j according to the following Equation (3).
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_i may refer to the original reference sample that is i samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block, and N may refer to the distance of the farthest sample with respect to the reconstructed reference sample located at the leftmost position.
- When the direction of the intra prediction mode of the current block is the prediction direction 1915, the image decoding apparatus 100 may determine the original reference samples 1925 of the upper adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1920 of the upper adjacent line, based on the x-axis direction 1935 of the prediction direction 1915.
- The image decoding apparatus 100 may generate the reconstructed reference sample a'_j according to Equation (4) when the direction of the intra prediction mode of the current block is the prediction direction 1915.
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_i may refer to the original reference sample that is i samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block, and N may refer to the distance of the farthest sample with respect to the reconstructed reference sample located at the leftmost position.
- The image decoding apparatus 100 may generate the reconstructed reference samples of the left adjacent line in a manner similar to generating the reconstructed reference samples of the upper adjacent line.
- When the direction of the intra prediction mode of the current block is the prediction direction 1905, the image decoding apparatus 100 may determine the original reference samples 1945 of the left adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1940 of the left adjacent line, based on the y-axis direction 1950 of the prediction direction 1905.
- When the direction of the intra prediction mode of the current block is the prediction direction 1910, the image decoding apparatus 100 may determine the original reference samples 1945 of the left adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1940 of the left adjacent line, based on the y-axis direction 1950 of the prediction direction 1910.
- When the direction of the intra prediction mode of the current block is the prediction direction 1915, the image decoding apparatus 100 may determine the original reference samples 1945 of the left adjacent line on which filtering is to be performed in order to generate the reconstructed reference samples 1940 of the left adjacent line, based on the y-axis direction 1955 of the prediction direction 1915.
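- The dependence on the x-axis component of the prediction direction can be sketched as follows; the convention that a positive x-component selects samples to the left of position j and a negative one selects samples to its right is purely an illustrative assumption, not taken from the figures.

```python
# Sketch: choose which original reference samples of the upper adjacent line are filtered
# for position j, based on the sign of the x-axis component of the intra prediction
# direction. The left/right convention used here is an illustrative assumption.

def filtering_support(j: int, num_refs: int, dir_x: float):
    if dir_x > 0:
        return list(range(0, j + 1))             # samples a_0 .. a_j
    if dir_x < 0:
        return list(range(j, num_refs))          # samples a_j .. a_N
    return [j]                                   # purely vertical: only a_j

print(filtering_support(3, 8, +1.0))   # [0, 1, 2, 3]
print(filtering_support(3, 8, -1.0))   # [3, 4, 5, 6, 7]
print(filtering_support(3, 8, 0.0))    # [3]
```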
- FIG. 20 is a diagram for explaining a method of generating reconstructed reference samples using original reference samples, according to an embodiment of the present disclosure.
- The image decoding apparatus 100 may generate reconstructed reference samples a'_0, ..., a'_N using the original reference samples a_0, ..., a_N in order to perform intra prediction on the current block 2000.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2015 corresponding to the position of the original reference sample 2010, using the original reference sample 2010.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2025 corresponding to the position of the original reference sample 2020, using the original reference sample 2010 and the original reference sample 2020.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2035 at the position corresponding to the original reference sample 2030 in a similar manner.
- That is, the image decoding apparatus 100 may generate the reconstructed reference sample a'_j based on the following Equation (5).
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_j or a_{j-1} may refer to the original reference sample that is j or j-1 samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2045 corresponding to the position of the original reference sample 2040, using the original reference sample 2040.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2055 corresponding to the position of the original reference sample 2050, using the original reference sample 2040 and the original reference sample 2050.
- The image decoding apparatus 100 may generate a reconstructed reference sample 2035 at the position corresponding to the original reference sample 2030 in a similar manner. For example, the reconstructed reference sample 2035 may be generated using the original reference sample 2030 and the original reference sample 2032 immediately adjacent to it on the right. That is, the image decoding apparatus 100 may generate the reconstructed reference sample a'_j based on the following Equation (6).
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_j or a_{j+1} may refer to the original reference sample that is j or j+1 samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block.
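- Equations (5) and (6) themselves are not reproduced in this text. From the definitions above, two-tap weighted forms of the following kind are one plausible reading; the weights w and 1-w are assumptions:

```latex
\text{(5)}\quad a'_j = w\, a_{j-1} + (1-w)\, a_j , \qquad
\text{(6)}\quad a'_j = w\, a_j + (1-w)\, a_{j+1} , \qquad 0 \le w \le 1
```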
- The image decoding apparatus 100 may generate the reconstructed reference samples of the left adjacent line in a manner similar to generating the reconstructed reference samples of the upper adjacent line.
- For example, the reconstructed reference samples of the left adjacent line may be generated in a manner similar to generating the reconstructed reference samples of the upper adjacent line when the x-axis direction runs from right to left.
- Likewise, the reconstructed reference samples of the left adjacent line may be generated in a manner similar to generating the reconstructed reference samples of the upper adjacent line when the x-axis direction runs from left to right.
- Here, a'_j may refer to the reconstructed reference sample that is j samples away from the reconstructed reference sample located at the leftmost position among the reconstructed reference samples of the upper adjacent line of the current block.
- a_{j-1}, a_j, and a_{j+1} may refer to the original reference samples that are j-1, j, and j+1 samples away from the original reference sample located at the leftmost position among the original reference samples of the upper adjacent line of the current block.
- In this way, the image decoding apparatus 100 performs filtering on the original reference samples to generate the reconstructed reference samples and performs intra prediction on the current block using the reconstructed reference samples, so that, compared with performing intra prediction using only the original reference samples, the effect of performing intra prediction by referring to a greater variety of reference samples can be obtained.
- FIG. 21 is a diagram for explaining a process in which the image decoding apparatus performs intra prediction on the current block using original reference samples and reconstructed reference samples, according to an embodiment of the present disclosure.
- The image decoding apparatus 100 may generate the predicted value of the current sample in the current block using the original reference samples and the reconstructed reference samples. For example, when the intra prediction mode of the current block 2100 is the vertical mode, the image decoding apparatus 100 may generate the predicted value of the current sample 2110 using the original reference sample 2121 located above the current sample 2110 and the reconstructed reference sample 2131. For example, the image decoding apparatus 100 may generate the predicted value p_n of the current sample based on Equation (9).
- Here, w denotes a weight and may have a value between 0 and 1.
- f may denote a prediction function, such as a 4-tap filter.
- w may be a fixed value, but is not limited thereto; it may have a different value for each sample of the current block, and may be a value based on the distance from the original reference sample or the reconstructed reference sample.
- The image decoding apparatus 100 may generate the predicted value p_n of the current sample at once based on Equation (9), but is not limited thereto; the final predicted value p'_n of the current sample may also be generated by performing intra prediction using the original reference sample to generate an initial predicted value and then performing filtering using the initial predicted value and the reconstructed reference sample.
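- Equation (9) is not reproduced in this text. Based on the definitions of w and f above, a weighted combination of the two reference samples of the following kind is a plausible reading; whether f is applied to one or both terms is an assumption:

```latex
p_n = w \cdot f(a) + (1 - w) \cdot f(a') , \qquad 0 \le w \le 1
```

Here a stands for an original reference sample (for example, 2121) and a' for a reconstructed reference sample (for example, 2131).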
- In the above, the image decoding apparatus 100 performs intra prediction on the current block using both the original reference samples and the reconstructed reference samples; however, the present invention is not limited thereto, and intra prediction on the current block may be performed using only the original reference samples.
- the image decoding apparatus 100 may perform intra prediction on the current block using only the reconstructed samples.
- A first intra prediction mode of the current block, used to determine the original reference samples used for intra prediction among the original reference samples, and a second intra prediction mode, used to determine the reconstructed reference samples used for intra prediction among the reconstructed reference samples, may be determined.
- The second intra prediction mode may be determined separately from the first intra prediction mode.
- For example, the first intra prediction mode of the current block, used to determine the original reference samples used for intra prediction, may be determined for each block, while
- the second intra prediction mode may be determined on a picture-by-picture basis.
- FIG. 22 is a diagram for explaining a process of performing weighted prediction using an original reference sample and reconstructed reference samples of the left adjacent line and the upper adjacent line.
- The image decoding apparatus 100 may determine, based on the intra prediction mode of the current block 2200, the original reference sample to be used for intra prediction among the original reference samples, may determine the reconstructed reference sample of the left adjacent line of the current block 2200 and the reconstructed reference sample of the upper adjacent line to be used for intra prediction, and may generate the predicted value of the current sample 2210 by performing weighted prediction on the current sample 2210 using the original reference sample and the reconstructed reference samples.
- For example, the image decoding apparatus 100 may determine, among the original reference samples 2220 of the upper adjacent line, the original reference sample a_j located in the vertical direction of the current sample 2210.
- Also, the image decoding apparatus 100 may determine the reconstructed reference sample a'_j located in the vertical direction of the current sample 2210, and may determine the reconstructed reference sample b'_i located in the horizontal direction of the current sample 2210.
- The image decoding apparatus 100 may generate the predicted value of the current sample 2210 using the determined original reference sample a_j and the reconstructed reference samples a'_j and b'_i. That is, the image decoding apparatus 100 may generate the predicted value p_ij of the current sample based on the following Equation (10).
- Here, p_ij denotes the predicted value of the current sample at position (i, j),
- a_j denotes the original reference sample used for prediction of the current sample,
- a'_j may refer to the reconstructed reference sample of the upper adjacent line used for prediction of the current sample,
- b'_i may refer to the reconstructed reference sample of the left adjacent line used for prediction of the current sample,
- f may denote a prediction function, such as a 4-tap filter, and
- w_1, w_2, and w_3 may be determined for each sample, and may be determined based on the distance between the original reference sample or the reconstructed reference sample and the current sample.
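- Equation (10) is not reproduced in this text. From the definitions above, a weighted combination of the following kind is a plausible reading; the normalization of the weights is an assumption:

```latex
p_{ij} = w_1 \cdot f(a_j) + w_2 \cdot a'_j + w_3 \cdot b'_i , \qquad w_1 + w_2 + w_3 = 1 \ \text{(assumed)}
```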
- In the above, the image decoding apparatus 100 uses the reconstructed reference sample of the left adjacent line located in the horizontal direction of the current sample and the reconstructed reference sample of the upper adjacent line located in the vertical direction of the current sample; however, the present invention is not limited thereto, and the prediction direction for selecting the reconstructed reference samples for the current sample may be determined in consideration of the gradient change of the reference samples. That is, the image decoding apparatus 100 may determine, as the prediction direction for selecting the reconstructed reference samples for the current sample, the gradient direction of the reference samples having the same change tendency as the gradient values of the reference samples.
- FIG. 23 is a diagram for explaining a process of performing weighted prediction using a predicted value generated by performing intra prediction using the original reference samples, together with reconstructed reference samples of a left adjacent line and an upper adjacent line.
- the image decoding apparatus 100 may perform intra prediction based on the original reference samples according to the intra prediction mode of the current block 2300 to generate an intermediate predicted value for the current sample in the current block.
- the video decoding apparatus 100 may determine at least one reference sample among the reconstructed reference samples of the left adjacent line and the reconstructed reference samples of the upper adjacent line, regardless of the intra prediction mode of the current block 2300.
- the video decoding apparatus 100 may use the intermediate predicted value of the current sample in the current block 2300, the reconstructed reference sample of the left adjacent line, and the reconstructed reference sample of the upper adjacent line to generate a final predicted value of the current sample.
- the image decoding apparatus 100 may generate the final predicted value p'_ij of the current sample in the current block based on Equation (11).
- p_ij may mean the intermediate predicted value of the sample located at (i, j) in the current block, generated by intra prediction using the original reference samples according to the intra prediction mode of the current block.
- C_ij denotes a matrix including the filter coefficients applied to p_ij, a'_i, and a'_j.
- Ca'_i[i, j] may be a two-dimensional matrix containing the filter coefficients applied to a'_i.
- Ca'_j[i, j] may be a two-dimensional matrix containing the filter coefficients applied to a'_j.
- Cp_ij[i, j] may refer to a two-dimensional matrix containing the filter coefficients applied to p_ij.
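- Equation (11) is not reproduced here. A plausible sketch, assuming the final prediction is a position-dependent linear combination of the intermediate prediction and the reconstructed reference samples using the coefficient matrices defined above:

$$p'_{ij} = Cp_{ij}[i,j] \cdot p_{ij} + Ca'_i[i,j] \cdot a'_i + Ca'_j[i,j] \cdot a'_j$$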
- the image decoding apparatus 100 may use Ca'_i[i, j] 2310, Ca'_j[i, j] 2320, and Cp_ij[i, j] to generate the final predicted value of the current sample in the current block 2300.
- it will be readily understood by those skilled in the art that coefficients generated by normalizing the filter coefficients disclosed in FIG. 23 may be the ones finally used.
- the image decoding apparatus 100 may determine the final predicted value p'_ij of the current sample based on the following Equation (12).
- p_ij may mean the intermediate predicted value of the sample located at (i, j) in the current block, generated by intra prediction using the original reference samples according to the intra prediction mode of the current block.
- Ca[i, j] may be a two-dimensional matrix containing the filter coefficients applied to a'_i and a'_j.
- Cp[i, j] may refer to a two-dimensional matrix containing the filter coefficients applied to p_ij.
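- Equation (12) is likewise not reproduced; a sketch under the same assumption, with a single coefficient matrix Ca[i, j] shared by both reconstructed reference samples:

$$p'_{ij} = Cp[i,j] \cdot p_{ij} + Ca[i,j] \cdot \left( a'_i + a'_j \right)$$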
- although the weighted prediction has been described above as being performed using the predicted value generated by performing intra prediction using the original reference samples, the reconstructed reference sample of the left adjacent line located in the horizontal direction of the current sample, and the reconstructed reference sample of the upper adjacent line located in the vertical direction of the current sample, those skilled in the art will readily understand that the weighted prediction may instead be performed using, in place of the reconstructed reference samples, the original reference sample of the left adjacent line in the horizontal direction of the current sample and the original reference sample of the upper adjacent line in the vertical direction of the current sample.
- FIG. 24 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is one of a DC mode, a planar mode, and a vertical mode.
- the image decoding apparatus 100 may use the intermediate predicted value P(x, y) of the current sample 2405 in the current block 2400, the sample value R(-1, y) of the left adjacent reference sample 2415 of the current sample 2405, the sample value R(-1, -1) of the upper-left adjacent reference sample 2420 of the current block, and the sample value R(x, -1) of the upper adjacent reference sample 2410 of the current sample 2405 to determine the final predicted value P'(x, y) of the current sample 2405.
- the image decoding apparatus 100 may determine the intermediate predicted sample value P(x, y) of the current sample 2405 by performing intra prediction according to the intra prediction mode of the current block.
- the image decoding apparatus 100 may determine the final predicted sample value P'(x, y) of the current sample based on the following Equation (13).
- when the intra prediction mode of the current block is one of the DC mode, the planar mode, the horizontal mode, and the vertical mode, wT, wL, and wTL may be determined according to the following Equation (14).
- width and height may mean the width and height of the current block, respectively.
- predModeIntra indicates the intra prediction mode of the current block.
- INTRA_DC indicates the DC mode.
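- Equations (13) and (14) are not reproduced in this text. The quantities described above (an intermediate prediction P(x, y), the left, upper, and upper-left reference samples, and position-dependent weights wT, wL, and wTL with a DC-mode-specific wTL) closely match a position-dependent prediction combination; the following is a sketch under that assumption and is not necessarily the exact form used in the disclosure.

$$P'(x,y) = \big( wL \cdot R_{-1,y} + wT \cdot R_{x,-1} - wTL \cdot R_{-1,-1} + (64 - wL - wT + wTL) \cdot P(x,y) + 32 \big) \gg 6$$

$$wT = 32 \gg ((2y) \gg s), \quad wL = 32 \gg ((2x) \gg s), \quad wTL = \begin{cases} (wL \gg 4) + (wT \gg 4) & \text{if predModeIntra is INTRA\_DC} \\ 0 & \text{otherwise} \end{cases}$$

where $s = (\log_2(\text{width}) + \log_2(\text{height}) + 2) \gg 2$ and $\gg$ denotes an arithmetic right shift.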
- FIG. 25 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is the diagonal mode in the lower-left direction.
- the image decoding apparatus 100 may generate the final predicted value of the current sample 2505 using the intermediate predicted value P(x', y') of the current sample 2505, the sample value R(-1, y) of the sample 2510 located on the line extending from the current sample 2505 toward the lower-left diagonal direction, and the sample value R(x, -1) of the sample 2515 located on the line extending in the direction opposite to the lower-left diagonal direction from the current sample 2505.
- the image decoding apparatus 100 may determine the intermediate predicted sample value P(x', y') of the current sample 2505 by performing intra prediction according to the intra prediction mode of the current block (intra prediction according to the diagonal mode in the lower-left direction).
- the diagonal mode in the lower left direction may be the second mode.
- the image decoding apparatus 100 may determine the final predicted sample value P'(x', y') of the current sample based on the following Equations (15) and (16).
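- Equations (15) and (16) are also not reproduced. A sketch assuming a position-dependent combination of the same general form, applied with diagonal reference samples, is given below; the specific reference positions R(-1, x' + y' + 1) and R(x' + y' + 1, -1) are an assumption based on the lower-left diagonal geometry, not taken from the disclosure. The upper-right diagonal case (Equations (17) and (18)) would take the same form under this assumption.

$$P'(x',y') = \big( wL \cdot R_{-1,\,x'+y'+1} + wT \cdot R_{x'+y'+1,\,-1} + (64 - wL - wT) \cdot P(x',y') + 32 \big) \gg 6$$

$$wT = 16 \gg ((2y') \gg s), \qquad wL = 16 \gg ((2x') \gg s)$$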
- FIG. 26 is a diagram for explaining a process of performing position-based intra prediction of a current sample when the intra prediction mode of the current block is the diagonal mode in the upper-right direction.
- the image decoding apparatus 100 may generate the final predicted value of the current sample 2605 using the intermediate predicted value P(x', y') of the current sample 2605, the sample value R(x, -1) of the sample 2610 located on the line extending from the current sample 2605 toward the upper-right diagonal direction, and the sample value R(-1, y) of the sample 2615 located on the line extending in the direction opposite to the upper-right diagonal direction from the current sample 2605.
- the image decoding apparatus 100 may determine the intermediate predicted sample value P(x', y') of the current sample 2605 by performing intra prediction according to the intra prediction mode of the current block (intra prediction according to the diagonal mode in the upper-right direction).
- the diagonal mode in the upper right direction may be the 66th mode.
- the image decoding apparatus 100 may determine the final predicted sample value P'(x', y') of the current sample based on the following Equations (17) and (18).
- FIG. 27 is a diagram for explaining a process in which the video decoding apparatus performs position-based intra prediction of a current sample when the intra prediction mode of the current block is an angular mode adjacent to the diagonal mode in the lower-left direction.
- the image decoding apparatus 100 may generate a predicted value of the current sample 2705 using the intermediate predicted value P(x', y') of the current sample 2705 in the current block 2700 and the sample value R(x, -1) of the sample 2710 located on the line extending from the current sample 2705 in the direction opposite to the lower-left direction indicated by the angular mode.
- the angular mode adjacent to the diagonal mode in the lower left direction may be one of the modes 3 to 10.
- the video decoding apparatus 100 may determine the intermediate predicted sample value P(x', y') of the current sample 2705 by performing intra prediction according to the intra prediction mode of the current block (intra prediction according to the angular mode adjacent to the lower-left diagonal direction).
- the image decoding apparatus 100 may determine the final predicted sample value P'(x', y') of the current sample based on the following Equations (19) and (20).
- the sample value of R(x, -1) may be determined based on the sample values of the two adjacent integer samples and the distance between the coordinate x and the adjacent integer sample.
- FIG. 28 is a diagram for explaining a process in which the video decoding apparatus performs position-based intra prediction of a current sample when the intra prediction mode of the current block is an angular mode adjacent to the diagonal mode in the upper-right direction.
- the image decoding apparatus 100 may generate a predicted value of the current sample 2805 using the intermediate predicted value P(x', y') of the current sample 2805 and the sample value R(-1, y) of the sample 2810 located on the line extending from the current sample 2805 in the direction opposite to the upper-right direction indicated by the angular mode.
- the angular mode adjacent to the diagonal mode in the upper right direction may be one of the modes 58 to 65.
- the image decoding apparatus 100 may determine the intermediate predicted sample value P(x', y') of the current sample 2805 by performing intra prediction according to the intra prediction mode of the current block (intra prediction according to the angular mode adjacent to the upper-right diagonal direction).
- the image decoding apparatus 100 may determine the final predicted sample value P'(x', y') of the current sample based on the following Equations (21) and (22).
- the sample value of R(-1, y) may be determined based on the sample values of the two adjacent integer samples and the distance between the coordinate y and the adjacent integer sample.
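- As a concrete illustration of the interpolation described above for R(x, -1) and R(-1, y) at non-integer positions, the following Python sketch linearly interpolates between the two adjacent integer reference samples. The function name and the choice of a simple two-tap linear interpolation are illustrative assumptions; the disclosure does not specify the interpolation filter here.

```python
def interpolate_reference(ref_line, frac_pos):
    """Linearly interpolate a reference sample at a fractional position.

    ref_line : list of reference sample values at integer positions
    frac_pos : fractional coordinate along the reference line (e.g., x or y)
    """
    lower = int(frac_pos)                      # integer sample at or below frac_pos
    upper = min(lower + 1, len(ref_line) - 1)  # next integer sample (clamped)
    w = frac_pos - lower                       # distance from the lower integer sample
    # Weight the two adjacent integer samples by their distance to frac_pos.
    return (1 - w) * ref_line[lower] + w * ref_line[upper]


# Example: reference line [100, 104, 110, 120], fractional position 1.25
# -> 0.75 * 104 + 0.25 * 110 = 105.5
print(interpolate_reference([100, 104, 110, 120], 1.25))
```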
- it will be readily understood by those skilled in the art that, even when the current block is rectangular, the image decoding apparatus 100 may similarly determine the filtering reference samples to which a filter is to be applied and the weights of the filter, and adaptively perform intra prediction based on the filtering reference samples and the weights of the filter.
- the number of samples of the upper adjacent reference line of the current block may be 2W, and the number of samples of the left adjacent reference line may be 2H.
- alternatively, the number of reference samples of the upper adjacent reference line of the current block may be W + H, and the number of reference samples of the left adjacent reference line may be W + H (W and H denoting the width and height of the current block).
- FIG. 29 is a diagram illustrating an example in which the encoding/decoding order between coding units is determined in a forward or reverse direction based on an encoding order flag, according to an embodiment of the present disclosure, and the reference samples that can accordingly be used for intra prediction.
- the maximum coding unit 2950 is divided into a plurality of coding units 2956, 2958, 2960, 2962, 2968, 2970, 2972, 2974, 2980, 2982, 2984, 2986.
- the maximum coding unit 2950 corresponds to the top node 2900 of the tree structure.
- the plurality of coding units 2956, 2958, 2960, 2962, 2968, 2970, 2972, 2974, 2980, 2982, 2984, and 2986 respectively correspond to a plurality of nodes 2906, 2908, 2910, 2912, 2918, 2922, 2924, 2930, 2932, 2934, and 2936 of the tree structure.
- the upper encoding order flags 2902, 2914, and 2926 indicating the encoding order in the tree structure correspond to the arrows 2952, 2964, and 2976, and the lower encoding order flags 2904, 2916, and 2928 correspond to the arrows 2954, 2966, and 2978.
- the upper coding order flag indicates the coding order of the two coding units located at the upper side among four coding units of the same depth. If the upper coding order flag is 0, coding is performed in the forward direction; conversely, if the upper coding order flag is 1, coding is performed in the reverse direction.
- the lower coding order flag indicates the coding order of the two coding units located at the lower side among the four coding units of the same depth. If the lower coding order flag is 0, coding is performed in the forward direction; conversely, if the lower coding order flag is 1, coding is performed in the reverse direction.
- since the corresponding upper coding order flag is 0, the coding order between the coding units 2968 and 2970 is determined to be the forward direction, from left to right.
- since the lower coding order flag 2916 is 1, the coding order between the coding units 2972 and 2974 is determined to be the reverse direction, from right to left.
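- A minimal sketch of the forward/reverse decision described above, assuming only that a flag value of 0 means forward (left-to-right) processing and 1 means reverse (right-to-left) processing; the function and the string labels are illustrative, not taken from the disclosure.

```python
def ordered_pair(left_unit, right_unit, order_flag):
    """Return a pair of coding units in their processing order.

    order_flag == 0: forward order (left unit first).
    order_flag == 1: reverse order (right unit first).
    """
    return (left_unit, right_unit) if order_flag == 0 else (right_unit, left_unit)


# Upper pair processed in the forward direction, lower pair in the reverse direction.
print(ordered_pair("CU_2968", "CU_2970", 0))  # ('CU_2968', 'CU_2970')
print(ordered_pair("CU_2972", "CU_2974", 1))  # ('CU_2974', 'CU_2972')
```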
- the upper coding order flag and the lower coding order flag may be set to have the same value.
- for example, when the upper coding order flag 2902 is determined to be 1, the lower coding order flag 2904 corresponding to the upper coding order flag 2902 may also be determined to be 1.
- since the value of the upper coding order flag and the value of the lower coding order flag are determined by a single bit, the information amount of the coding order information decreases.
- the upper coding order flag and the lower coding order flag of the current coding unit can be determined with reference to at least one of an upper coding order flag and a lower coding order flag applied to a coding unit having a lower depth than the current coding unit.
- the upper coding order flag 2926 and the lower coding order flag 2928 applied to the coding units 2980, 2982, 2984, and 2986 may depend on the lower coding order flag 2916 applied to the coding units 2972 and 2974. Therefore, the upper coding order flag 2926 and the lower coding order flag 2928 may be determined to have the same value as the coding order flag 2916.
- since the values of the upper coding order flag and the lower coding order flag are determined from the upper coding unit of the current coding unit, the coding order information does not need to be obtained from the bitstream, and therefore the information amount of the coding order information decreases.
- when the samples included in the right adjacent coding unit 2958, which is decoded before the current coding unit 2986, and the samples included in the upper adjacent coding units 2980 and 2982 have been decoded, the video decoding apparatus 100 may perform the prediction according to the embodiments of the present disclosure using the data of the samples (right reference line) included in the right adjacent coding unit 2958 and the data of the samples (upper reference line) included in the upper adjacent coding units 2980 and 2982.
- although the method and apparatus for determining the filtering reference samples to which a filter is to be applied and the weights of the filter, and adaptively performing intra prediction based on the filtering reference samples and the weights of the filter, have been described above on the premise that encoding/decoding is performed according to an encoding/decoding order in which intra prediction is performed based on the original reference samples adjacent to the upper or left side of the current block, a person skilled in the art will readily understand that, when the encoding/decoding order of the adjacent coding units is such that the right coding unit is encoded/decoded before the left coding unit, intra prediction based on the original reference samples adjacent to the upper or right side of the current block may be used in the same manner.
- according to the embodiments described above, the accuracy of prediction is improved; furthermore, since a reference smoothing effect is applied to the reference samples in the process of reconstructing the original reference samples, the accuracy of prediction can be further improved.
- the above-described embodiments of the present disclosure can be written as a program executable by a computer, and can be implemented in a general-purpose digital computer that runs the program using a computer-readable recording medium.
- the computer-readable recording medium includes storage media such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, DVDs, etc.).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (14)
- An image decoding method comprising: obtaining information about a transform coefficient of a current block from a bitstream; generating an intra prediction value of a current sample in the current block based on a location of the current sample in the current block and an intra prediction mode of the current block; determining a sample value of at least one filtering reference sample to which a filter is to be applied, a first weight for the filtering reference sample, and a second weight for the intra prediction value of the current sample, based on the location of the current sample in the current block, and generating a filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample; generating a prediction block of the current block including the filtered prediction sample value of the current sample; obtaining a residual block of the current block based on the obtained information about the transform coefficient of the current block; and reconstructing the current block based on the prediction block of the current block and the residual block of the current block.
- The image decoding method of claim 1, wherein the generating of the intra prediction value of the current sample based on the location of the current sample in the current block and the intra prediction mode of the current block comprises: determining an original reference sample corresponding to the current sample based on the location of the current sample and the intra prediction mode of the current block; and generating the intra prediction value of the current sample based on a sample value of the original reference sample.
- The image decoding method of claim 1, wherein the first weight for the filtering reference sample is determined based on a distance between the filtering reference sample and the current sample.
- The image decoding method of claim 3, wherein the first weight for the filtering reference sample is determined based on the distance between the filtering reference sample and the current sample relative to a size of the current block.
- The image decoding method of claim 3, wherein the first weight for the filtering reference sample decreases as the distance between the filtering reference sample and the current sample increases.
- The video decoding method of claim 1, wherein the filtering reference sample comprises at least one of an original reference sample located in a horizontal direction of the current sample and an original reference sample located in a vertical direction of the current sample.
- The image decoding method of claim 1, wherein, when the intra prediction mode of the current block is an angular mode, the filtering reference sample comprises at least one of adjacent samples on a left side and an upper side of the current block located on a line passing through the current sample, and the line is oriented toward a prediction direction indicated by the angular mode and a direction opposite thereto.
- The image decoding method of claim 1, wherein the determining of the sample value of the at least one filtering reference sample to which the filter is to be applied, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample based on the location of the current sample in the current block, and the generating of the filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample comprise: determining at least one second intra prediction mode; and determining, by using the determined at least one second intra prediction mode, the sample value of the at least one filtering reference sample to which the filter is to be applied, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample, based on the location of the current sample in the current block, and generating the filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample.
- The image decoding method of claim 8, wherein the at least one second intra prediction mode is determined on a picture-by-picture basis or on a block-by-block basis.
- The image decoding method of claim 1, wherein the at least one second intra prediction mode is determined to be at least one of the intra prediction mode, an intra prediction mode indicating a direction opposite to a prediction direction indicated by the intra prediction mode, a horizontal mode, and a vertical mode.
- The image decoding method of claim 1, wherein the first weight and the second weight are normalized values.
- The image decoding method of claim 1, wherein the determining of the sample value of the at least one filtering reference sample to which the filter is to be applied, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample based on the location of the current sample in the current block, and the generating of the filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample comprise: when the intra prediction mode is a predetermined intra prediction mode, determining the sample value of the at least one filtering reference sample to which the filter is to be applied, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample, based on the location of the current sample in the current block, and generating the filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample.
- An image encoding method comprising: generating an intra prediction value of a current sample in a current block based on a location of the current sample in the current block and an intra prediction mode of the current block; determining a sample value of at least one filtering reference sample to which a filter is to be applied, a first weight for the filtering reference sample, and a second weight for the intra prediction value of the current sample, based on the location of the current sample in the current block, and generating a filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample; generating a prediction block of the current block including the filtered prediction sample value of the current sample; and encoding information about a transform coefficient of the current block based on the prediction block of the current block.
- An image decoding apparatus comprising a processor configured to: obtain information about a transform coefficient of a current block from a bitstream; generate an intra prediction value of a current sample in the current block based on a location of the current sample in the current block and an intra prediction mode of the current block; determine a sample value of at least one filtering reference sample to which a filter is to be applied, a first weight for the filtering reference sample, and a second weight for the intra prediction value of the current sample, based on the location of the current sample in the current block, and generate a filtered prediction sample value of the current sample based on the determined sample value of the filtering reference sample to which the filter is to be applied, the intra prediction value of the current sample, the first weight for the filtering reference sample, and the second weight for the intra prediction value of the current sample; generate a prediction block of the current block including the filtered prediction sample value of the current sample; obtain a residual block of the current block based on the obtained information about the transform coefficient of the current block; and reconstruct the current block based on the prediction block of the current block and the residual block of the current block.
Priority Applications (16)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020247042667A KR20250004178A (ko) | 2017-09-28 | 2018-09-27 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
| JP2020516891A JP7170718B2 (ja) | 2017-09-28 | 2018-09-27 | 映像復号方法、映像符号化方法及び映像復号装置 |
| EP18863355.6A EP3661212A4 (en) | 2017-09-28 | 2018-09-27 | IMAGE CODING PROCESS AND APPARATUS, AND IMAGE DECODING PROCESS AND APPARATUS |
| CN201880063708.8A CN111164979A (zh) | 2017-09-28 | 2018-09-27 | 图像编码方法和设备以及图像解码方法和设备 |
| KR1020227013104A KR102514436B1 (ko) | 2017-09-28 | 2018-09-27 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
| KR1020237009936A KR102747326B1 (ko) | 2017-09-28 | 2018-09-27 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
| KR1020207007089A KR102389868B1 (ko) | 2017-09-28 | 2018-09-27 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
| US16/651,809 US11695930B2 (en) | 2017-09-28 | 2018-09-27 | Image encoding method and apparatus, and image decoding method and apparatus |
| JP2022113556A JP7532450B2 (ja) | 2017-09-28 | 2022-07-14 | 映像復号方法 |
| US18/151,570 US11805258B2 (en) | 2017-09-28 | 2023-01-09 | Image encoding and decoding method and apparatus generating an angular intra prediction mode |
| US18/471,390 US12081764B2 (en) | 2017-09-28 | 2023-09-21 | Image encoding and decoding method and apparatus generating an angular intra prediction mode |
| US18/471,371 US12075057B2 (en) | 2017-09-28 | 2023-09-21 | Image encoding and decoding method and apparatus generating an angular intra prediction mode |
| US18/471,381 US12184860B2 (en) | 2017-09-28 | 2023-09-21 | Image encoding and decoding method and apparatus generating an angular intra prediction mode |
| JP2024124997A JP7762773B2 (ja) | 2017-09-28 | 2024-07-31 | 映像復号方法、映像符号化方法および装置 |
| JP2024124999A JP7762775B2 (ja) | 2017-09-28 | 2024-07-31 | 映像復号方法、映像符号化方法および装置 |
| JP2024124998A JP7762774B2 (ja) | 2017-09-28 | 2024-07-31 | 映像復号方法、映像符号化方法および装置 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762564681P | 2017-09-28 | 2017-09-28 | |
| US62/564,681 | 2017-09-28 |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/651,809 A-371-Of-International US11695930B2 (en) | 2017-09-28 | 2018-09-27 | Image encoding method and apparatus, and image decoding method and apparatus |
| US18/151,570 Continuation US11805258B2 (en) | 2017-09-28 | 2023-01-09 | Image encoding and decoding method and apparatus generating an angular intra prediction mode |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019066472A1 true WO2019066472A1 (ko) | 2019-04-04 |
Family
ID=65901780
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2018/011390 Ceased WO2019066472A1 (ko) | 2017-09-28 | 2018-09-27 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
Country Status (6)
| Country | Link |
|---|---|
| US (5) | US11695930B2 (ko) |
| EP (1) | EP3661212A4 (ko) |
| JP (5) | JP7170718B2 (ko) |
| KR (4) | KR102389868B1 (ko) |
| CN (1) | CN111164979A (ko) |
| WO (1) | WO2019066472A1 (ko) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114651441A (zh) * | 2019-09-19 | 2022-06-21 | Lg电子株式会社 | 使用参考样本滤波的图像编码/解码方法和装置及发送比特流的方法 |
| US11381814B2 (en) | 2018-03-08 | 2022-07-05 | Samsung Electronics Co., Ltd. | Video decoding method and device, and video encoding method and device |
| US20220286688A1 (en) * | 2019-06-21 | 2022-09-08 | Vid Scale, Inc. | Precision refinement for motion compensation with optical flow |
| US12143626B2 (en) | 2019-02-07 | 2024-11-12 | Interdigital Vc Holdings, Inc. | Systems, apparatus and methods for inter prediction refinement with optical flow |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102295680B1 (ko) * | 2010-12-08 | 2021-08-31 | 엘지전자 주식회사 | 인트라 예측 방법과 이를 이용한 부호화 장치 및 복호화 장치 |
| CN111164979A (zh) * | 2017-09-28 | 2020-05-15 | 三星电子株式会社 | 图像编码方法和设备以及图像解码方法和设备 |
| US11722673B2 (en) * | 2018-06-11 | 2023-08-08 | Samsung Eleotronics Co., Ltd. | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
| TWI723433B (zh) * | 2018-06-21 | 2021-04-01 | 大陸商北京字節跳動網絡技術有限公司 | 改進的邊界分割 |
| WO2020256466A1 (ko) * | 2019-06-19 | 2020-12-24 | 한국전자통신연구원 | 화면 내 예측 모드 및 엔트로피 부호화/복호화 방법 및 장치 |
| SG11202102925XA (en) * | 2019-07-10 | 2021-04-29 | Guangdong Oppo Mobile Telecommunications Corp Ltd | Image component prediction method, encoder, decoder, and storage medium |
| US12143620B2 (en) | 2020-08-21 | 2024-11-12 | Alibaba Group Holding Limited | Filtering methods for angular intra prediction |
| WO2023208063A1 (en) * | 2022-04-26 | 2023-11-02 | Mediatek Inc. | Linear model derivation for cross-component prediction by multiple reference lines |
| KR20250005892A (ko) * | 2023-07-03 | 2025-01-10 | 주식회사 케이티 | 영상 부호화/복호화 방법 및 비트스트림을 저장하는 기록 매체 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20130119494A (ko) * | 2011-01-28 | 2013-10-31 | 퀄컴 인코포레이티드 | 화소 레벨 적응 인트라-평활화 |
| KR20150140848A (ko) * | 2010-12-22 | 2015-12-16 | 엘지전자 주식회사 | 화면 내 예측 방법 및 이러한 방법을 사용하는 장치 |
| KR101587927B1 (ko) * | 2013-06-24 | 2016-01-22 | 한양대학교 산학협력단 | 인트라 예측을 이용한 비디오 부호화/복호화 방법 및 장치 |
| WO2017058635A1 (en) * | 2015-09-29 | 2017-04-06 | Qualcomm Incorporated | Improved video intra-prediction using position-dependent prediction combination for video coding |
| WO2017090993A1 (ko) * | 2015-11-24 | 2017-06-01 | 삼성전자 주식회사 | 비디오 복호화 방법 및 그 장치 및 비디오 부호화 방법 및 그 장치 |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100718135B1 (ko) | 2005-08-24 | 2007-05-14 | 삼성전자주식회사 | 멀티 포맷 코덱을 위한 영상 예측 장치 및 방법과 이를이용한 영상 부호화/복호화 장치 및 방법 |
| EP2081386A1 (en) | 2008-01-18 | 2009-07-22 | Panasonic Corporation | High precision edge prediction for intracoding |
| WO2012134046A2 (ko) | 2011-04-01 | 2012-10-04 | 주식회사 아이벡스피티홀딩스 | 동영상의 부호화 방법 |
| US10306222B2 (en) | 2011-06-20 | 2019-05-28 | Hfi Innovation Inc. | Method and apparatus of directional intra prediction |
| WO2014003421A1 (ko) | 2012-06-25 | 2014-01-03 | 한양대학교 산학협력단 | 비디오 부호화 및 복호화를 위한 방법 |
| JP2017537539A (ja) | 2014-11-05 | 2017-12-14 | サムスン エレクトロニクス カンパニー リミテッド | サンプル単位予測符号化装置及びその方法 |
| CN111164979A (zh) * | 2017-09-28 | 2020-05-15 | 三星电子株式会社 | 图像编码方法和设备以及图像解码方法和设备 |
-
2018
- 2018-09-27 CN CN201880063708.8A patent/CN111164979A/zh active Pending
- 2018-09-27 US US16/651,809 patent/US11695930B2/en active Active
- 2018-09-27 KR KR1020207007089A patent/KR102389868B1/ko active Active
- 2018-09-27 KR KR1020227013104A patent/KR102514436B1/ko active Active
- 2018-09-27 JP JP2020516891A patent/JP7170718B2/ja active Active
- 2018-09-27 WO PCT/KR2018/011390 patent/WO2019066472A1/ko not_active Ceased
- 2018-09-27 KR KR1020237009936A patent/KR102747326B1/ko active Active
- 2018-09-27 EP EP18863355.6A patent/EP3661212A4/en active Pending
- 2018-09-27 KR KR1020247042667A patent/KR20250004178A/ko active Pending
-
2022
- 2022-07-14 JP JP2022113556A patent/JP7532450B2/ja active Active
-
2023
- 2023-01-09 US US18/151,570 patent/US11805258B2/en active Active
- 2023-09-21 US US18/471,371 patent/US12075057B2/en active Active
- 2023-09-21 US US18/471,390 patent/US12081764B2/en active Active
- 2023-09-21 US US18/471,381 patent/US12184860B2/en active Active
-
2024
- 2024-07-31 JP JP2024124998A patent/JP7762774B2/ja active Active
- 2024-07-31 JP JP2024124997A patent/JP7762773B2/ja active Active
- 2024-07-31 JP JP2024124999A patent/JP7762775B2/ja active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20150140848A (ko) * | 2010-12-22 | 2015-12-16 | 엘지전자 주식회사 | 화면 내 예측 방법 및 이러한 방법을 사용하는 장치 |
| KR20130119494A (ko) * | 2011-01-28 | 2013-10-31 | 퀄컴 인코포레이티드 | 화소 레벨 적응 인트라-평활화 |
| KR101587927B1 (ko) * | 2013-06-24 | 2016-01-22 | 한양대학교 산학협력단 | 인트라 예측을 이용한 비디오 부호화/복호화 방법 및 장치 |
| WO2017058635A1 (en) * | 2015-09-29 | 2017-04-06 | Qualcomm Incorporated | Improved video intra-prediction using position-dependent prediction combination for video coding |
| WO2017090993A1 (ko) * | 2015-11-24 | 2017-06-01 | 삼성전자 주식회사 | 비디오 복호화 방법 및 그 장치 및 비디오 부호화 방법 및 그 장치 |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11381814B2 (en) | 2018-03-08 | 2022-07-05 | Samsung Electronics Co., Ltd. | Video decoding method and device, and video encoding method and device |
| US11917140B2 (en) | 2018-03-08 | 2024-02-27 | Samsung Electronics Co., Ltd. | Selection of an extended intra prediction mode |
| US12452413B2 (en) | 2018-03-08 | 2025-10-21 | Samsung Electronics Co., Ltd. | Selection of an extended intra prediction mode |
| US12143626B2 (en) | 2019-02-07 | 2024-11-12 | Interdigital Vc Holdings, Inc. | Systems, apparatus and methods for inter prediction refinement with optical flow |
| US20220286688A1 (en) * | 2019-06-21 | 2022-09-08 | Vid Scale, Inc. | Precision refinement for motion compensation with optical flow |
| US12160582B2 (en) * | 2019-06-21 | 2024-12-03 | Interdigital Vc Holdings, Inc. | Precision refinement for motion compensation with optical flow |
| CN114651441A (zh) * | 2019-09-19 | 2022-06-21 | Lg电子株式会社 | 使用参考样本滤波的图像编码/解码方法和装置及发送比特流的方法 |
| CN114651441B (zh) * | 2019-09-19 | 2023-12-19 | Lg电子株式会社 | 使用参考样本滤波的图像编码/解码方法和装置及发送比特流的方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2024147825A (ja) | 2024-10-16 |
| KR102514436B1 (ko) | 2023-03-27 |
| KR20200037384A (ko) | 2020-04-08 |
| EP3661212A4 (en) | 2020-11-25 |
| JP2024147827A (ja) | 2024-10-16 |
| US11805258B2 (en) | 2023-10-31 |
| US12081764B2 (en) | 2024-09-03 |
| JP2024147826A (ja) | 2024-10-16 |
| KR20220054705A (ko) | 2022-05-03 |
| KR102747326B1 (ko) | 2024-12-31 |
| US20240048715A1 (en) | 2024-02-08 |
| US20240015296A1 (en) | 2024-01-11 |
| JP2022137229A (ja) | 2022-09-21 |
| KR20250004178A (ko) | 2025-01-07 |
| US12075057B2 (en) | 2024-08-27 |
| JP7762775B2 (ja) | 2025-10-30 |
| US20200252614A1 (en) | 2020-08-06 |
| KR102389868B1 (ko) | 2022-04-22 |
| US20240015297A1 (en) | 2024-01-11 |
| US12184860B2 (en) | 2024-12-31 |
| EP3661212A1 (en) | 2020-06-03 |
| JP7170718B2 (ja) | 2022-11-14 |
| US20230164323A1 (en) | 2023-05-25 |
| JP7532450B2 (ja) | 2024-08-13 |
| JP7762773B2 (ja) | 2025-10-30 |
| US11695930B2 (en) | 2023-07-04 |
| KR20230045102A (ko) | 2023-04-04 |
| JP7762774B2 (ja) | 2025-10-30 |
| CN111164979A (zh) | 2020-05-15 |
| JP2020534765A (ja) | 2020-11-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019066472A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2019172676A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2020130730A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2021006692A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2020027551A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2020040619A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2019066384A1 (ko) | 크로스-성분 예측에 의한 비디오 복호화 방법 및 장치, 크로스-성분 예측에 의한 비디오 부호화 방법 및 장치 | |
| WO2020235951A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2019135558A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2019088700A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2020076130A1 (ko) | 타일 및 타일 그룹을 이용하는 비디오 부호화 및 복호화 방법, 및 타일 및 타일 그룹을 이용하는 비디오 부호화 및 복호화 장치 | |
| WO2021141451A1 (ko) | 양자화 파라미터를 획득하기 위한 비디오 복호화 방법 및 장치, 양자화 파라미터를 전송하기 위한 비디오 부호화 방법 및 장치 | |
| WO2019066514A1 (ko) | 부호화 방법 및 그 장치, 복호화 방법 및 그 장치 | |
| WO2019066574A1 (ko) | 부호화 방법 및 그 장치, 복호화 방법 및 그 장치 | |
| WO2020256521A1 (ko) | 제한된 예측 모드에서 복원후 필터링을 수행하는 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치 | |
| WO2020076047A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2020256483A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2020263022A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2021049894A1 (ko) | 툴 세트를 이용하는 영상 복호화 장치 및 이에 의한 영상 복호화 방법, 및 영상 부호화 장치 및 이에 의한 영상 부호화 방법 | |
| WO2020013627A1 (ko) | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 | |
| WO2020130712A1 (ko) | 삼각 예측 모드를 이용하는 영상 부호화 장치 및 영상 복호화 장치, 및 이에 의한 영상 부호화 방법 및 영상 복호화 방법 | |
| WO2019216718A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 | |
| WO2020256468A1 (ko) | 주변 움직임 정보를 이용하여 움직임 정보를 부호화 및 복호화하는 장치, 및 방법 | |
| WO2019135457A1 (ko) | 움직임 예측에 의한 패딩 기법을 이용한 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치 | |
| WO2019059482A1 (ko) | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18863355 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2018863355 Country of ref document: EP Effective date: 20200228 |
|
| ENP | Entry into the national phase |
Ref document number: 20207007089 Country of ref document: KR Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 2020516891 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |