WO2014042460A1 - Image encoding/decoding method and apparatus - Google Patents
Image encoding/decoding method and apparatus
- Publication number
- WO2014042460A1 (PCT/KR2013/008303)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- nal unit
- leading
- nal
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/188—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a video data packet, e.g. a network abstraction layer [NAL] unit
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2387—Stream processing in response to a playback request from an end-user, e.g. for trick-play
Definitions
- the present invention relates to video compression techniques, and more particularly, to a method and apparatus for decoding image information in a bitstream.
- As terminal devices come to support various video qualities and network environments become diversified, video of ordinary quality may be used in one environment while higher-quality video may be used in another.
- a consumer who purchases video content on a mobile terminal can view the same video content on a larger screen and at a higher resolution through a large display in the home.
- For example, as services such as ultra high definition (UHD) video emerge, it is necessary to provide scalability in the quality of the image, for example, the image quality, the resolution, the size, and the frame rate of the video. In addition, various image processing methods associated with such scalability should be discussed.
- the present invention provides an image encoding / decoding method and apparatus capable of improving encoding / decoding efficiency.
- the present invention provides a method and apparatus for extracting a bitstream capable of improving encoding / decoding efficiency.
- the present invention provides a NAL unit type capable of improving encoding / decoding efficiency.
- According to an aspect of the present invention, an image decoding method receives a bitstream including information on a NAL unit type, checks, based on the information on the NAL unit type, whether the NAL unit in the bitstream is a leading picture that is referenced, and decodes the NAL unit.
- The information on the NAL unit type is information indicating whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- According to another aspect, an image decoding apparatus includes an entropy decoding unit that receives a bitstream including information on a NAL unit type, determines, based on the information on the NAL unit type, whether the NAL unit in the bitstream is a leading picture that is referenced, and entropy decodes the NAL unit, wherein the information on the NAL unit type is information indicating whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- According to another aspect, a video encoding method includes generating a residual signal for a current picture by performing inter prediction on the current picture, generating a NAL unit based on the residual signal for the current picture, and transmitting a bitstream including the NAL unit and information on the NAL unit, wherein the information on the NAL unit includes information on a NAL unit type determined according to whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- According to another aspect, an image encoding apparatus may include a prediction unit that performs inter prediction on a current picture to generate a residual signal for the current picture, and a transmission unit that transmits a bitstream including a NAL unit generated based on the residual signal for the current picture and information on the NAL unit.
- By providing a NAL unit type that indicates whether a NAL unit is a reference picture referenced by another picture or a non-reference picture not referenced by any other picture, the NAL unit can be efficiently extracted from the bitstream. In addition, since it can be accurately derived whether a NAL unit is a non-reference picture, the NAL unit can be removed from the bitstream without affecting the decoding process.
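The extraction idea above can be sketched as a simple filter over a sequence of NAL units. This is a minimal illustration, not the claimed method: the type names (`RASL_N`, `RADL_R`, etc.) and the dictionary representation are hypothetical, and real extraction would operate on the coded NAL unit headers.

```python
# Hypothetical NAL unit types marking non-referenced leading pictures.
NON_REFERENCE_LEADING_TYPES = {"RASL_N", "RADL_N"}

def extract_bitstream(nal_units):
    """Drop NAL units carrying non-referenced leading pictures.

    Because no other picture references them, removing these units
    cannot affect the decoding of the remaining NAL units.
    """
    return [nal for nal in nal_units
            if nal["type"] not in NON_REFERENCE_LEADING_TYPES]

stream = [
    {"type": "CRA", "poc": 0},
    {"type": "RASL_N", "poc": -2},   # non-referenced leading picture: drop
    {"type": "RADL_R", "poc": -1},   # referenced leading picture: keep
    {"type": "TRAIL_R", "poc": 1},
]
kept = extract_bitstream(stream)
```

The non-referenced leading unit is removed while every unit another picture may depend on survives.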
- FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating a hierarchical structure of coded images processed by a decoding apparatus.
- FIG. 4 illustrates a temporal layer structure for NAL units in a bitstream supporting temporal scalability.
- FIG. 5 is a diagram illustrating a temporal layer structure for NAL units in a bitstream supporting temporal scalability to which the present invention can be applied.
- FIG. 6 is a diagram for describing a picture that can be randomly accessed.
- FIG. 7 is a diagram for explaining an IDR picture.
- FIG. 8 is a diagram for explaining a CRA picture.
- FIG. 9 is a diagram illustrating a temporal layer structure for NAL units including leading pictures in a bitstream supporting temporal scalability.
- FIG. 10 is a diagram illustrating that NAL units including a leading picture are removed from a bitstream according to an embodiment of the present invention.
- FIG. 11 is a flowchart schematically illustrating a method of decoding image information according to an embodiment of the present invention.
- FIG. 12 is a block diagram schematically illustrating a decoding apparatus according to an embodiment of the present invention.
- Each of the components in the drawings described herein is shown independently for convenience of description of the different characteristic functions in the video encoding/decoding apparatus; this does not mean that each component is implemented as separate hardware or separate software.
- Two or more of the components may be combined into one component, or one component may be divided into a plurality of components.
- Embodiments in which the components are integrated and/or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
- FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
- The scalable video encoding/decoding method or apparatus may be implemented as an extension of a general video encoding/decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 1 illustrates an embodiment of a video encoding apparatus on which a scalable video encoding apparatus may be based.
- The encoding apparatus 100 may include a picture dividing unit 105, a prediction unit 110, a transform unit 115, a quantization unit 120, a reordering unit 125, an entropy encoding unit 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
- the picture dividing unit 105 may divide the input picture into at least one processing unit block.
- The block as the processing unit may be a prediction unit (hereinafter, "PU"), a transform unit (hereinafter, "TU"), or a coding unit (hereinafter, "CU").
- the processing unit blocks divided by the picture divider 105 may have a quad-tree structure.
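The quad-tree structure mentioned above can be sketched as a recursive four-way split. This is an illustrative sketch only: in a real encoder the split decision comes from a rate-distortion search, which is replaced here by a caller-supplied predicate, and the sizes are arbitrary.

```python
def quad_tree_split(x, y, size, min_size, should_split):
    """Return the list of leaf blocks (x, y, size) of a quad-tree.

    A block splits into four half-size sub-blocks whenever the
    predicate says so and the minimum size has not been reached.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quad_tree_split(x + dx, y + dy, half,
                                      min_size, should_split)
    return leaves

# Split a 64x64 block; only the top-left 32x32 splits one level further.
leaves = quad_tree_split(
    0, 0, 64, 8,
    lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0))
```

The result is seven leaf blocks: four 16×16 leaves inside the top-left quadrant and three remaining 32×32 quadrants.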
- the predictor 110 includes an inter predictor for performing inter prediction and an intra predictor for performing intra prediction, as described below.
- The prediction unit 110 generates a prediction block by performing prediction on the processing unit of the picture divided by the picture dividing unit 105.
- the processing unit of the picture in the prediction unit 110 may be a CU, a TU, or a PU.
- the prediction unit 110 may determine whether the prediction performed on the processing unit is inter prediction or intra prediction, and determine specific contents (eg, prediction mode, etc.) of each prediction method.
- The processing unit on which prediction is performed may differ from the processing unit for which the prediction method and its specific contents are determined.
- the method of prediction and the prediction mode may be determined in units of PUs, and the prediction may be performed in units of TUs.
- In the case of inter prediction, a prediction block may be generated by performing prediction based on information of at least one picture among pictures preceding and/or following the current picture.
- In the case of intra prediction, a prediction block may be generated by performing prediction based on pixel information in the current picture.
- As the inter prediction method, a skip mode, a merge mode, motion vector prediction (MVP), and the like may be used.
- a reference picture may be selected for a PU and a reference block corresponding to the PU may be selected.
- the reference block may be selected in integer pixel units.
- a prediction block is generated in which a residual signal with the current PU is minimized and the size of the motion vector is also minimized.
- the prediction block may be generated in integer sample units, or may be generated in sub-pixel units such as 1/2 pixel unit or 1/4 pixel unit.
- the motion vector may also be expressed in units of integer pixels or less.
- When the skip mode is applied, the prediction block may be used as the reconstructed block, and thus the residual may not be generated, transformed, quantized, or transmitted.
- When performing intra prediction, a prediction mode may be determined in units of PUs and prediction performed in units of PUs; alternatively, a prediction mode may be determined in units of PUs and intra prediction performed in units of TUs.
- the prediction mode may have 33 directional prediction modes and at least two non-directional modes.
- The non-directional modes may include a DC prediction mode and a planar mode.
- a prediction block may be generated after applying a filter to a reference sample.
- whether to apply the filter to the reference sample may be determined according to the intra prediction mode and / or the size of the current block.
- The PU may be a block of various sizes and shapes; for example, in the case of inter prediction, the PU may be a 2N×2N block, a 2N×N block, an N×2N block, or an N×N block (where N is an integer).
- In the case of intra prediction, the PU may be a 2N×2N block or an N×N block (where N is an integer).
- the PU of the N ⁇ N block size may be set to apply only in a specific case.
- the NxN block size PU may be used only for the minimum size CU or only for intra prediction.
- PUs such as N ⁇ mN blocks, mN ⁇ N blocks, 2N ⁇ mN blocks, or mN ⁇ 2N blocks (m ⁇ 1) may be further defined and used.
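The symmetric partition shapes listed above can be enumerated directly. This is a toy illustration: the mode names mirror the text, and only the symmetric shapes are modeled.

```python
def pu_partitions(n, mode):
    """Return the (width, height) of each PU for a 2Nx2N CU under a
    given symmetric partition mode (illustrative names only)."""
    two_n = 2 * n
    shapes = {
        "2Nx2N": [(two_n, two_n)],       # one PU covering the CU
        "2NxN":  [(two_n, n)] * 2,       # two horizontal halves
        "Nx2N":  [(n, two_n)] * 2,       # two vertical halves
        "NxN":   [(n, n)] * 4,           # four quadrants
    }
    return shapes[mode]

parts = pu_partitions(8, "2NxN")         # a 16x16 CU split into two 16x8 PUs
```

Whatever the mode, the PU areas always sum to the CU area, which is a quick sanity check on any partition table.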
- the residual value (the residual block or the residual signal) between the generated prediction block and the original block is input to the converter 115.
- the prediction mode information, the motion vector information, etc. used for the prediction are encoded by the entropy encoding unit 130 together with the residual value and transmitted to the decoding apparatus.
- the transform unit 115 performs transform on the residual block in units of transform blocks and generates transform coefficients.
- the transform block is a rectangular block of samples to which the same transform is applied.
- the transform block can be a transform unit (TU) and can have a quad tree structure.
- the transformer 115 may perform the transformation according to the prediction mode applied to the residual block and the size of the block.
- For example, depending on the prediction mode and the block size, a small residual block to which intra prediction has been applied may be transformed using a discrete sine transform (DST), and otherwise the residual block may be transformed using a discrete cosine transform (DCT).
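The kernel-selection rule and the transform itself can be sketched as follows. The selection rule shown (DST for 4×4 intra-predicted blocks, DCT otherwise) is one common convention used purely as an illustration, and the 1-D DCT-II below is a slow reference implementation, not an encoder-grade transform.

```python
import math

def pick_transform(block_size, is_intra):
    """Illustrative kernel choice: DST for small intra blocks."""
    return "DST" if (is_intra and block_size == 4) else "DCT"

def dct_ii_1d(x):
    """Reference (slow) 1-D DCT-II with orthonormal scaling."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

choice = pick_transform(4, is_intra=True)
coeffs = dct_ii_1d([1.0, 1.0, 1.0, 1.0])   # constant input -> DC-only output
```

A constant residual row produces energy only in the DC coefficient, which is exactly why the transform compacts smooth residuals well.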
- the transform unit 115 may generate a transform block of transform coefficients by the transform.
- the quantization unit 120 may generate quantized transform coefficients by quantizing the residual values transformed by the transform unit 115, that is, the transform coefficients.
- the value calculated by the quantization unit 120 is provided to the inverse quantization unit 135 and the reordering unit 125.
- the reordering unit 125 rearranges the quantized transform coefficients provided from the quantization unit 120. By rearranging the quantized transform coefficients, the encoding efficiency of the entropy encoding unit 130 may be increased.
- the reordering unit 125 may rearrange the quantized transform coefficients in the form of a 2D block into a 1D vector form through a coefficient scanning method.
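The 2-D-to-1-D rearrangement can be sketched with a diagonal scan, one of several possible scan orders; the scan pattern here is illustrative rather than the specific order used by any codec.

```python
def diagonal_scan(block):
    """Read an n x n block along anti-diagonals into a 1-D vector,
    so that large low-frequency coefficients come first."""
    n = len(block)
    order = []
    for d in range(2 * n - 1):          # anti-diagonal index
        for y in range(n):
            x = d - y
            if 0 <= x < n:
                order.append(block[y][x])
    return order

block = [[9, 5, 1, 0],
         [4, 2, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
vector = diagonal_scan(block)
```

Because quantized coefficients cluster in the top-left corner, the scan front-loads the non-zero values and leaves a long run of trailing zeros, which is what makes the subsequent entropy coding efficient.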
- the entropy encoding unit 130 may perform entropy encoding on the quantized transform coefficients rearranged by the reordering unit 125.
- Entropy encoding may include, for example, encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
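Of the methods named above, 0-th order Exponential-Golomb is the simplest to sketch: a non-negative code number maps to a zero prefix followed by the binary representation of the number plus one.

```python
def exp_golomb_encode(code_num):
    """0th-order Exp-Golomb: codeNum -> zero prefix + binary(codeNum+1)."""
    bits = bin(code_num + 1)[2:]            # binary without the '0b' prefix
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(bitstring):
    """Inverse mapping: count leading zeros, read that many+1 bits."""
    leading_zeros = len(bitstring) - len(bitstring.lstrip("0"))
    return int(bitstring[leading_zeros:], 2) - 1

codes = [exp_golomb_encode(n) for n in range(5)]
```

Small values get short codewords ("1", "010", "011", ...), so syntax elements that are usually small are cheap to transmit.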
- The entropy encoding unit 130 may encode various information received from the reordering unit 125 and the prediction unit 110, such as quantized transform coefficient information of the CUs, block type information, prediction mode information, partition unit information, PU information, transmission unit information, motion vector information, reference picture information, interpolation information of a block, and filtering information.
- The entropy encoding unit 130 may also make certain changes to a parameter set or syntax to be transmitted, as necessary.
- the inverse quantizer 135 inversely quantizes the quantized values (quantized transform coefficients) in the quantizer 120, and the inverse transformer 140 inversely transforms the inverse quantized values in the inverse quantizer 135.
- the reconstructed block may be generated by combining the residual values generated by the inverse quantizer 135 and the inverse transform unit 140 and the prediction blocks predicted by the prediction unit 110.
- a reconstructed block is generated by adding a residual block and a prediction block through an adder.
- The adder may be viewed as a separate unit (a reconstructed block generation unit) that generates a reconstructed block.
- the filter unit 145 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
- the deblocking filter may remove distortion generated at the boundary between blocks in the reconstructed picture.
- the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter. ALF may be performed only when high efficiency is applied.
- The SAO restores, on a pixel basis, the offset difference from the original image for the block to which the deblocking filter has been applied, and is applied in the form of a band offset, an edge offset, and the like.
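The band offset variant can be sketched as follows. This is a simplified illustration: sample values are classified into 32 equal bands by their upper bits, and a per-band offset is added; here all 32 offsets are supplied directly, whereas a real codec signals only a few of them.

```python
def sao_band_offset(samples, offsets, bit_depth=8):
    """Add a per-band offset to each sample and clip to the valid range.

    With 32 bands, the band index is simply the top 5 bits of the sample.
    """
    shift = bit_depth - 5                 # 8-bit samples: band = value >> 3
    max_val = (1 << bit_depth) - 1
    out = []
    for s in samples:
        band = s >> shift
        out.append(min(max(s + offsets[band], 0), max_val))
    return out

offsets = [0] * 32
offsets[2] = 3                            # +3 for samples in band 2 (16..23)
filtered = sao_band_offset([10, 20, 200], offsets)
```

Only the sample falling in band 2 is adjusted; the others pass through unchanged.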
- the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
- the memory 150 may store the reconstructed block or the picture calculated by the filter unit 145.
- the reconstructed block or picture stored in the memory 150 may be provided to the predictor 110 that performs inter prediction.
- FIG. 2 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.
- The scalable video encoding/decoding method or apparatus may be implemented as an extension of a general video encoding/decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 2 illustrates an embodiment of a video decoding apparatus on which a scalable video decoding apparatus may be based.
- The video decoding apparatus 200 may include an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, a prediction unit 230, a filter unit 235, and a memory 240.
- The input bitstream may be decoded according to the procedure by which image information was processed in the video encoding apparatus. For example, when entropy encoding such as variable length coding (VLC) or CABAC was performed in the video encoding apparatus, the entropy decoding unit 210 may perform entropy decoding correspondingly.
- Among the information decoded by the entropy decoding unit 210, information for generating a prediction block is provided to the prediction unit 230, and the residual value on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients, may be input to the reordering unit 215.
- the reordering unit 215 may reorder the information of the bitstream entropy decoded by the entropy decoding unit 210, that is, the quantized transform coefficients, based on the reordering method in the encoding apparatus.
- the reordering unit 215 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring the coefficients in the form of a two-dimensional block.
- the reordering unit 215 may generate an array of coefficients (quantized transform coefficients) in the form of a 2D block by scanning coefficients based on the prediction mode applied to the current block (transform block) and the size of the transform block.
- the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block.
- The inverse transform unit 225 may perform an inverse DCT and/or inverse DST, corresponding to the DCT and/or DST performed by the transform unit of the encoding apparatus, on the inverse quantization result.
- the inverse transformation may be performed based on a transmission unit determined by the encoding apparatus or a division unit of an image.
- The DCT and/or DST in the transform unit of the encoding apparatus may be selectively performed according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform unit 225 of the decoding apparatus may perform inverse transformation based on the transformation information used by the encoding apparatus.
- the prediction unit 230 may generate the prediction block based on the prediction block generation related information provided by the entropy decoding unit 210 and the previously decoded block and / or picture information provided by the memory 240.
- intra prediction for generating a prediction block based on pixel information in the current picture may be performed.
- inter prediction on the current PU may be performed based on information included in at least one of a previous picture or a subsequent picture of the current picture.
- Motion information required for inter prediction of the current PU provided by the video encoding apparatus, for example a motion vector and a reference picture index, may be derived by checking a skip flag, a merge flag, and the like received from the encoding apparatus.
- the reconstruction block may be generated using the prediction block generated by the predictor 230 and the residual block provided by the inverse transform unit 225.
- In FIG. 2, it is described that the reconstructed block is generated by combining the prediction block and the residual block in the adder.
- The adder may be viewed as a separate unit (a reconstructed block generation unit) that generates a reconstructed block.
- the residual is not transmitted and the prediction block may be a reconstruction block.
- the reconstructed block and / or picture may be provided to the filter unit 235.
- the filter unit 235 may apply deblocking filtering, sample adaptive offset (SAO), and / or ALF to the reconstructed block and / or picture.
- the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block and provide the reconstructed picture to the output unit.
- Components directly related to the decoding of an image, for example the entropy decoding unit 210, the reordering unit 215, the inverse quantization unit 220, the inverse transform unit 225, the prediction unit 230, and the filter unit 235, may be collectively referred to as a decoder or a decoding unit, distinguished from the other components.
- the decoding apparatus 200 may further include a parsing unit (not shown) which parses information related to an encoded image included in the bitstream.
- the parsing unit may include the entropy decoding unit 210 or may be included in the entropy decoding unit 210. Such a parser may also be implemented as one component of the decoder.
- FIG. 3 is a diagram illustrating a hierarchical structure of coded images processed by a decoding apparatus.
- A coded image is divided into a video coding layer (VCL) that handles the decoding process of the image itself, a subsystem that transmits and stores the coded information, and a network abstraction layer (NAL) that exists between the VCL and the subsystem and is in charge of network adaptation functions.
- In the VCL, VCL data including compressed image data (slice data) may be generated, or a parameter set containing information such as a picture parameter set (PPS), a sequence parameter set (SPS), and a video parameter set (VPS), or a Supplemental Enhancement Information (SEI) message additionally required for the image decoding process may be generated.
- a NAL unit can be generated by adding header information (NAL unit header) to a raw byte sequence payload (RBSP) generated in a VCL.
- the RBSP refers to slice data, parameter set, SEI message, etc. generated in the VCL.
- the NAL unit header may include NAL unit type information specified according to RBSP data included in the corresponding NAL unit.
- the NAL unit may be divided into a VCL NAL unit and a Non-VCL NAL unit according to the RBSP generated in the VCL.
- A VCL NAL unit means a NAL unit that contains information about the image (slice data), and a Non-VCL NAL unit means a NAL unit that contains information (a parameter set or an SEI message) necessary to decode the image.
- The VCL NAL unit and the Non-VCL NAL unit may be transmitted through a network with header information attached according to the data standard of the subsystem.
- the NAL unit may be transformed into data of a predetermined standard such as an H.264 / AVC file format, a real-time transport protocol (RTP), a transport stream (TS), etc., and transmitted through various networks.
- the NAL unit may be specified with the NAL unit type according to the RBSP data structure included in the corresponding NAL unit, and information on the NAL unit type may be stored and signaled in the NAL unit header.
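Signalling the type in the NAL unit header can be sketched with the two-byte HEVC-style header layout: a forbidden zero bit, six bits of `nal_unit_type`, six bits of `nuh_layer_id`, and three bits of `nuh_temporal_id_plus1`. The parsing below follows that published layout; the example byte values are chosen for illustration.

```python
def parse_nal_header(b0, b1):
    """Unpack a two-byte HEVC-style NAL unit header."""
    return {
        "forbidden_zero_bit": b0 >> 7,
        "nal_unit_type": (b0 >> 1) & 0x3F,                 # 6 bits
        "nuh_layer_id": ((b0 & 0x01) << 5) | (b1 >> 3),    # 6 bits
        "temporal_id": (b1 & 0x07) - 1,                    # plus1 encoding
    }

# 0x2A, 0x01 -> nal_unit_type 21 (a CRA picture in HEVC), layer 0, TID 0
header = parse_nal_header(0x2A, 0x01)
```

Because the type sits in the fixed-position header rather than the payload, a network element can classify or discard NAL units without parsing any slice data.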
- the NAL unit may be classified into a VCL NAL unit type and a Non-VCL NAL unit type according to whether or not the NAL unit includes information (slice data) about an image.
- the VCL NAL unit type may be classified according to the nature and type of pictures included in the VCL NAL unit, and the non-VCL NAL unit type may be classified according to the type of the parameter set.
- The following are examples of NAL unit types specified according to the nature and type of the picture included in the VCL NAL unit.
- TSA (Temporal Sub-layer Access): A type for a NAL unit that contains a coded slice segment of a TSA picture.
- The TSA picture is a picture at which switching between temporal sub-layers is possible in a bitstream supporting temporal scalability, and it indicates a position where up-switching from a lower sub-layer to a higher sub-layer is possible.
- STSA (Step-wise Temporal Sub-layer Access): A type for a NAL unit that contains coded slice segments of an STSA picture.
- the STSA picture is a picture that can switch between temporal sublayers in a bitstream that supports temporal scalability, and a picture indicating a position where up-switching is possible in the lower sublayer to a higher sublayer higher than the lower sublayer. to be.
- TRAIL: A type for a NAL unit that contains a coded slice segment of a non-TSA, non-STSA trailing picture.
- The trailing picture refers to a picture that follows a random-accessible picture in both output order and decoding order.
- IDR (Instantaneous Decoding Refresh): A type for a NAL unit that contains a coded slice segment of an IDR picture.
- the IDR picture is a random-accessible picture and may be the first picture in decoding order in the bitstream or may appear in the middle of the bitstream. Also, an IDR picture contains only I slices. Each IDR picture is the first picture of a coded video sequence (CVS) in decoding order.
- If the IDR picture is associated with decodable leading pictures, the NAL unit type of the IDR picture may be represented as IDR_W_RADL, and if the IDR picture is not associated with any leading picture, the NAL unit type of the IDR picture may be represented as IDR_N_LP. The IDR picture is not associated with the non-decodable leading pictures described below.
- CRA (Clean Random Access): A type for a NAL unit that contains a coded slice segment of a CRA picture.
- the CRA picture is a random-accessible picture, which may be the first picture in decoding order in the bitstream or may appear in the middle of the bitstream.
- a CRA picture contains only I slices.
- The CRA picture may be associated with decodable leading pictures or with leading pictures whose decoding may be skipped. Since a leading picture whose decoding may be skipped can use, as a reference picture, a picture that does not exist in the bitstream, the decoder may not output such a leading picture.
- BLA (Broken Link Access): A type for a NAL unit that contains a coded slice segment of a BLA picture.
- the BLA picture is a random-accessible picture and may be the first picture in decoding order in the bitstream or may appear in the middle of the bitstream.
- a BLA picture contains only I slices. Each BLA picture starts a new coded video sequence (CVS), and the same decoding process as the IDR picture may be performed.
- If the BLA picture is associated with leading pictures, the NAL unit type of the BLA picture may be represented as BLA_W_LP. If the BLA picture is associated only with decodable leading pictures, the NAL unit type of the BLA picture may be represented as BLA_W_RADL, and if the BLA picture is not associated with any leading picture, the NAL unit type of the BLA picture may be represented as BLA_N_LP.
- The following NAL unit types are specified according to the type of parameter set included in the Non-VCL NAL unit.
- VPS (Video Parameter Set): A type for the NAL unit containing the VPS.
- SPS (Sequence Parameter Set): A type for the NAL unit containing the SPS.
- PPS (Picture Parameter Set): A type for the NAL unit containing the PPS.
- the aforementioned NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored and signaled in a NAL unit header.
- the syntax information may be nal_unit_type, and NAL unit types may be specified by a nal_unit_type value.
- A bitstream that supports temporal scalability (a temporal scalable bitstream) includes information about temporal layers, which are the units of temporal scaling.
- the information on the temporal layer may be identification information of the temporal layer specified according to the temporal scalability of the NAL unit.
- The identification information of the temporal layer may use the temporal_id syntax information, and the temporal_id syntax information may be stored in the NAL unit header by the encoding apparatus and signaled to the decoding apparatus.
- a temporal layer may be referred to as a sub-layer, a temporal sub-layer, a temporal scalable layer, or the like.
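As the text notes, both nal_unit_type and temporal_id are carried in the NAL unit header. A minimal sketch of reading them, assuming the two-byte HEVC-style header layout (1-bit forbidden_zero_bit, 6-bit nal_unit_type, 6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1); the helper itself is illustrative, not a normative parser:

```python
def parse_nal_header(header: bytes):
    """Parse a two-byte HEVC-style NAL unit header (a sketch).

    Returns (nal_unit_type, nuh_layer_id, temporal_id), where temporal_id
    is nuh_temporal_id_plus1 - 1, matching the temporal_id described above.
    """
    b0, b1 = header[0], header[1]
    forbidden_zero_bit = b0 >> 7
    nal_unit_type = (b0 >> 1) & 0x3F           # 6-bit type code
    nuh_layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)
    temporal_id = (b1 & 0x07) - 1              # nuh_temporal_id_plus1 - 1
    assert forbidden_zero_bit == 0
    return nal_unit_type, nuh_layer_id, temporal_id
```

For example, a header with a 6-bit type code of 19 in layer 0 and the lowest temporal layer is packed as the bytes `0x26 0x01`.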
- FIG. 4 illustrates a temporal layer structure for NAL units in a bitstream supporting temporal scalability.
- The NAL units included in the bitstream have identification information (e.g., temporal_id) of the temporal layer.
- A temporal layer composed of NAL units with a temporal_id value of 0 may provide the lowest temporal scalability, and a temporal layer composed of NAL units with a temporal_id value of 2 may provide the highest temporal scalability.
- the box marked I refers to the I picture
- the box marked B refers to the B picture.
- the arrow indicates a reference relationship as to whether the picture refers to another picture.
- NAL units of a temporal layer having a temporal_id value of 0 are reference pictures that NAL units of a temporal layer having a temporal_id value of 0, 1, or 2 may refer to.
- NAL units of a temporal layer with a temporal_id value of 1 are reference pictures that can be referenced by NAL units of a temporal layer with a temporal_id value of 1 or 2.
- NAL units of a temporal layer with a temporal_id value of 2 may be reference pictures referenced by NAL units of the same temporal layer, that is, the temporal layer with a temporal_id value of 2, or may be non-reference pictures not referenced by any other picture.
- If NAL units with a temporal_id value of 2, that is, NAL units of the highest temporal layer, are non-reference pictures as shown in FIG. 4, these NAL units can be extracted (or removed) from the bitstream without affecting other pictures in the decoding process.
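This removal can be sketched as a filter over a simplified bitstream model; the tuple layout (temporal_id, is_reference, payload) is an assumption made for illustration, not a real bitstream format:

```python
def prune_highest_layer_nonref(nal_units, highest_tid):
    """Remove NAL units of the highest temporal layer that are
    non-reference pictures; per the description above, removing them does
    not affect the decoding of other pictures.

    nal_units: list of (temporal_id, is_reference, payload) tuples
    (a simplified model of the bitstream).
    """
    return [u for u in nal_units
            if not (u[0] == highest_tid and not u[1])]
```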
- information indicating whether the NAL unit is a reference picture or a non-reference picture may be provided. Such information may be provided at the NAL unit level.
- the NAL unit type according to an embodiment of the present invention may be classified according to whether the NAL unit is a reference picture referred to by another picture or a non-reference picture not referred to by another picture.
- the NAL unit type of the TSA picture may be represented by TSA_R. If the TSA picture is a non-reference picture, the NAL unit type of the TSA picture may be represented by TSA_N. If the STSA picture is a reference picture, the NAL unit type of the STSA picture may be represented by STSA_R. If the STSA picture is a non-reference picture, the NAL unit type of the STSA picture may be represented by STSA_N.
- If the non-TSA, non-STSA trailing picture is a reference picture, its NAL unit type may be represented as TRAIL_R, and if the non-TSA, non-STSA trailing picture is a non-reference picture, its NAL unit type may be represented as TRAIL_N.
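Under this naming convention, whether a picture is a reference picture can be read directly off the type name. A small helper, assuming only the _R/_N suffix convention described above:

```python
def is_reference_nal(nal_unit_type_name: str) -> bool:
    """True if the NAL unit type names a reference picture.

    Assumes the suffix convention from the text: _R marks a picture
    referenced by other pictures, _N a non-reference picture.
    """
    return not nal_unit_type_name.endswith("_N")
```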
- FIG. 5 is a diagram illustrating a temporal layer structure for NAL units in a bitstream supporting temporal scalability to which the present invention can be applied.
- The NAL units included in the bitstream have identification information (e.g., temporal_id) of the temporal layer.
- Referring to FIG. 5, the bitstream is layered into a temporal layer 500 composed of NAL units having a temporal_id value of 0, a temporal layer 510 composed of NAL units having a temporal_id value of 1, and a temporal layer 520 composed of NAL units having a temporal_id value of 2.
- The temporal layer 500 composed of NAL units having a temporal_id value of 0 may provide the lowest temporal scalability, and the temporal layer 520 composed of NAL units having a temporal_id value of 2 may provide the highest temporal scalability.
- a box labeled I refers to an I picture and a box labeled B refers to a B picture.
- the arrow indicates a reference relationship as to whether the picture refers to another picture.
- The temporal layer 520 with a temporal_id value of 2 is composed of pictures of the TRAIL_N type.
- The TRAIL_N type indicates a NAL unit whose trailing picture is a non-reference picture. Since non-reference pictures are not referenced by other pictures during inter prediction, they can be removed from the bitstream during bitstream extraction without affecting the decoding process of other pictures. Accordingly, removing the TRAIL_N pictures of the temporal layer 520 with a temporal_id value of 2 from the bitstream does not affect the decoding of the remaining pictures.
- The IDR, CRA, and BLA types are for pictures at which random access (or splicing) is possible, that is, random access point (RAP) or intra random access point (IRAP) pictures serving as random access points.
- the RAP picture may be an IDR, CRA, or BLA picture, and may include only I slices.
- the first picture in decoding order in the bitstream becomes a RAP picture.
- When a RAP picture (an IDR, CRA, or BLA picture) is included in the bitstream, there may be pictures that precede the RAP picture in output order but follow it in decoding order. Such pictures are called leading pictures (LPs).
- FIG. 6 is a diagram for describing a picture that can be randomly accessed.
- A random-accessible picture, that is, a RAP (or IRAP) picture serving as a random access point, is the first picture in decoding order in the bitstream upon random access, and contains only I slices.
- FIG. 6 illustrates an output order or display order and a decoding order of a picture. As shown, the output order and decoding order of the pictures may be different. For convenience, the pictures are divided into predetermined groups and described.
- Pictures belonging to the first group (I) precede the IRAP picture in both output order and decoding order. Pictures belonging to the second group (II) precede the IRAP picture in output order but follow it in decoding order. Pictures of the third group (III) follow the IRAP picture in both output order and decoding order.
- the pictures of the first group I may be decoded and output regardless of the IRAP picture.
- Pictures belonging to the second group (II) that are output before the IRAP picture are called leading pictures, and the leading pictures may be problematic in decoding when the IRAP picture is used as a random access point.
- a picture belonging to the third group (III) whose output and decoding order follows the IRAP picture is called a normal picture.
- the normal picture is not used as the reference picture of the leading picture.
- The random access point at which random access occurs in the bitstream is an IRAP picture, and random access starts as the first picture of the second group (II) is output.
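The three groups can be expressed as a small classifier over abstract output-order and decoding-order counters (a simplified model for illustration; a real decoder derives these from POC values and decoding order in the bitstream):

```python
def classify_relative_to_irap(out_order, dec_order, irap_out, irap_dec):
    """Classify a picture relative to an IRAP picture, following the
    three groups described above (simplified model)."""
    if dec_order < irap_dec and out_order < irap_out:
        return "group I"            # precedes the IRAP in both orders
    if out_order < irap_out:
        return "leading picture"    # group II: output first, decoded later
    return "normal picture"         # group III: follows in both orders
```

With the numbering used in FIG. 7 (IRAP at 32), a picture output at 25 but decoded after the IRAP is a leading picture.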
- FIG. 7 is a diagram for explaining an IDR picture.
- An IDR picture is a picture that becomes a random access point when a group of pictures has a closed structure. Since the IDR picture is an IRAP picture as described above, it includes only an I slice, and may be the first picture in decoding order in the bitstream, or may be in the middle of the bitstream. When an IDR picture is decoded, all reference pictures stored in a decoded picture buffer (DPB) are marked as “unused for reference”.
- Bars shown in FIG. 7 indicate pictures, and arrows indicate reference relationships as to whether a picture can use another picture as a reference picture.
- the x mark displayed on the arrow indicates that the picture (s) cannot refer to the picture to which the arrow points.
- a picture with a POC of 32 is an IDR picture.
- the pictures that have a POC of 25 to 31 and are output before the IDR picture are the leading pictures 710.
- Pictures having a POC of 33 or more correspond to the normal picture 720.
- The leading pictures 710, which precede the IDR picture in output order, may use the IDR picture or another leading picture as a reference picture, but may not use the past pictures 730, which precede the leading pictures 710 in output order and decoding order, as reference pictures.
- the normal pictures 720 following the IDR picture in output order and decoding order may be decoded with reference to the IDR picture, leading picture and other normal pictures.
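The DPB behavior described for IDR pictures (FIG. 7) can be sketched as follows; the dict-based DPB is an illustrative model, not a real decoder structure:

```python
def on_idr_decoded(dpb):
    """When an IDR picture is decoded, mark every picture currently held
    in the DPB as 'unused for reference', as described above.

    dpb: list of dicts with at least a 'marking' key (simplified model).
    """
    for pic in dpb:
        pic["marking"] = "unused for reference"
    return dpb
```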
- FIG. 8 is a diagram for explaining a CRA picture.
- a CRA picture is a picture that becomes a random access point when a group of pictures has an open structure. As described above, since the CRA picture is also an IRAP picture, it includes only an I slice, and may be the first picture in decoding order in the bitstream, or may be in the middle of the bitstream for normal play.
- Bars shown in FIG. 8 represent pictures and arrows represent reference relationships as to whether a picture can use another picture as a reference picture.
- the x mark displayed on the arrow indicates that the picture or pictures cannot refer to the picture indicated by the arrow.
- The leading pictures 810, which precede the CRA picture in output order, may use the CRA picture, other leading pictures, and the past pictures 830, which precede the leading pictures 810 in output order and decoding order, as reference pictures.
- The normal pictures 820, which follow the CRA picture in output order and decoding order, may be decoded with reference to the CRA picture or other normal pictures.
- the normal pictures 820 may not use the leading pictures 810 as a reference picture.
- a BLA picture has similar functions and properties to a CRA picture, and refers to a picture existing in the middle of the bitstream as a random access point when the coded picture is spliced or the bitstream is broken in the middle.
- When random access occurs, since the BLA picture is regarded as the start of a new sequence, all parameter information about the image may be received again when the decoder receives the BLA picture.
- The BLA picture may be determined by the encoding device, or a CRA picture may be changed into a BLA picture in a system that receives the bitstream from the encoding device. For example, when the bitstream is spliced, the system converts the CRA picture into a BLA picture and provides it to the decoder so that the decoder decodes the image. In this case, parameter information about the image is also newly provided from the system to the decoder.
- the decoder refers to a device including an image processor that decodes an image.
- the decoder may be implemented by the decoding apparatus of FIG. 2 or may mean a decoding module which is a core module for processing an image.
- leading pictures are output before the CRA picture, but decoded after the CRA picture.
- Leading pictures may refer to at least one of the preceding pictures.
- When random access occurs at the CRA picture, the pictures preceding the CRA picture in decoding order may not be available. That is, since the preceding pictures that could serve as reference pictures for the leading pictures are unavailable, a leading picture that refers to an unavailable picture may not be decoded normally.
- Here, the unavailable case refers to the case where the leading picture refers to a picture that does not exist in the bitstream, where the picture that the leading picture refers to does not exist in the DPB (decoded picture buffer), or where that picture is marked as "unused for reference" in the DPB.
- The following is an embodiment of NAL unit types for the leading picture.
- DLP_NUT: A NAL unit type (NUT) for a NAL unit that contains a coded slice segment of a decodable leading picture (DLP).
- The decodable leading picture refers to a leading picture that can be decoded upon random access. All decodable leading pictures for random access are leading pictures. Decodable leading pictures for random access are not used as reference pictures in the decoding process of trailing pictures associated with the same RAP (or IRAP) picture. When present, decodable leading pictures for random access precede, in decoding order, the trailing pictures associated with the same RAP (or IRAP) picture.
- TFD_NUT: A type for a NAL unit that contains a coded slice segment of a leading picture that may not be decoded normally and can be discarded (Tagged For Discard: TFD) when the pictures preceding the RAP picture are not available.
- a TFD leading picture is a picture that can be discarded without decoding.
- the TFD leading picture may be referred to as a skipped leading picture for random access.
- the leading pictures that are skipped for random access are the leading pictures associated with the BLA or CRA picture. Since skipped leading pictures for random access may refer to pictures that do not exist in the bitstream, skipped leading pictures for random access are not output and cannot be decoded correctly.
- the skipped leading pictures for random access are not used as reference pictures in the decoding process of pictures that are not skipped leading pictures for random access. If there are skipped leading pictures for random access, the skipped leading pictures for random access precede the trailing pictures associated with the same RAP (or IRAP) picture in decoding order.
- the DLP and TFD leading pictures may be processed in the same manner as trailing pictures in the normal decoding process when random access or splicing does not occur.
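A sketch of this decision: in normal playback both kinds of leading pictures are handled like trailing pictures, while after random access at the associated RAP picture the non-decodable (TFD/RASL) leading pictures are discarded and not output. The type-name prefixes are taken from the text; the helper is illustrative:

```python
def handle_leading_picture(nal_type: str, random_access_here: bool) -> str:
    """Decide how to treat a leading picture. TFD (RASL) pictures may
    reference pictures missing from the bitstream after random access,
    so they are discarded then; DLP (RADL) pictures are always decodable.
    """
    non_decodable = nal_type.startswith(("TFD", "RASL"))
    if non_decodable and random_access_here:
        return "discard"    # cannot be decoded correctly; not output
    return "decode"         # decodable leading picture, or normal playback
```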
- FIG. 9 is a diagram illustrating a temporal layer structure for NAL units including leading pictures in a bitstream supporting temporal scalability.
- The NAL units included in the bitstream have identification information (e.g., temporal_id) of the temporal layer.
- Referring to FIG. 9, the bitstream is layered into a temporal layer 900 composed of NAL units having a temporal_id value of 0, a temporal layer 910 composed of NAL units having a temporal_id value of 1, and a temporal layer 920 composed of NAL units having a temporal_id value of 2.
- The temporal layer 900 composed of NAL units having a temporal_id value of 0 may provide the lowest temporal scalability, and the temporal layer 920 composed of NAL units having a temporal_id value of 2 may provide the highest temporal scalability.
- a box labeled I refers to an I picture
- a box labeled B refers to a B picture.
- the arrow indicates a reference relationship as to whether the picture refers to another picture.
- The TRAIL_R picture with a temporal_id value of 1 uses the IDR_N_LP picture and the TRAIL_R picture with a temporal_id value of 0 as reference pictures, and is in turn used as a reference picture by the TRAIL_N pictures with a temporal_id value of 2.
- The TRAIL_N pictures with a temporal_id value of 2 are pictures that are not referenced by any other picture, and such non-referenced TRAIL_N pictures may be removed from the bitstream during bitstream extraction.
- However, the leading pictures of the DLP_NUT and TFD_NUT types in the temporal layer 920 with a temporal_id value of 2 do not carry information distinguishing whether the picture is referenced by another picture. Therefore, it is difficult to determine whether removing such a leading picture from the bitstream during bitstream extraction would affect the decoding process.
- Accordingly, in an embodiment of the present invention, NAL unit types for the leading picture are defined as follows.
- DLP_R (referenced decodable leading picture): A type for a NAL unit that contains a coded slice segment of a decodable leading picture, that is, a random access decodable leading (RADL) picture, that is referenced by another picture.
- DLP_N (non-referenced decodable leading picture): A type for a NAL unit that contains a coded slice segment of a decodable leading (RADL) picture that is not referenced by another picture.
- TFD_R (referenced TFD picture): A type for a NAL unit that contains a coded slice segment of a leading picture that may not be decoded normally when the pictures preceding the RAP picture are not available, and that is referenced by another picture.
- TFD_N (non-referenced TFD picture): A type for a NAL unit that contains a coded slice segment of a leading picture that may not be decoded normally when the pictures preceding the RAP picture are not available, and that is not referenced by another picture.
- A TFD leading picture is a picture that can be discarded (skipped) without being decoded, and may also be referred to as a random access skipped leading (RASL) picture.
- The NAL unit types DLP_R, DLP_N, TFD_R, and TFD_N for the leading picture according to the embodiment of the present invention described above may be defined using reserved NAL unit types that are not yet used by other types.
- the NAL unit types DLP_R, DLP_N, TFD_R, and TFD_N for the leading picture according to an embodiment of the present invention may be stored and signaled in syntax information (eg, nal_unit_type) for the NAL unit type of the NAL unit header.
- A decoded picture that is not referenced by another picture is not included in the reference picture set (RPS) of pictures having the same temporal layer identification information (e.g., temporal_id).
- the reference picture set refers to a set of reference pictures of the current picture and may be composed of reference pictures that precede the current picture in decoding order.
- the reference picture may be used in inter prediction of the current picture.
- The reference picture set may consist of a short-term reference picture set (e.g., RefPicSetStCurrBefore, RefPicSetStCurrAfter), composed of reference pictures that precede or follow the current picture in Picture Order Count (POC) order, and a long-term reference picture set (e.g., RefPicSetLtCurr).
- A coded picture whose NAL unit type (e.g., nal_unit_type) is TFD_N or DLP_N can be discarded without affecting the decoding process of other pictures having the same temporal layer identification information (e.g., temporal_id). This is because the NAL unit type makes it possible to know that a TFD_N or DLP_N coded picture is not referenced by other pictures, and since it is not used as a reference picture during decoding, it can be extracted from the bitstream.
- A coded picture whose NAL unit type (e.g., nal_unit_type) is TFD_N or DLP_N can be processed similarly to the TRAIL_N, TSA_N, or STSA_N pictures described above, unless decoding starts from a random access point associated with the leading picture.
- A coded picture whose NAL unit type (e.g., nal_unit_type) is TFD_R or DLP_R can be processed similarly to the TRAIL_R, TSA_R, or STSA_R pictures described above, unless decoding starts from a random access point associated with the leading picture.
- FIG. 10 is a diagram illustrating NAL units including leading pictures being removed from a bitstream according to an embodiment of the present invention.
- The NAL units included in the bitstream have identification information (e.g., temporal_id) of the temporal layer.
- Referring to FIG. 10, the bitstream is layered into a temporal layer 1000 composed of NAL units having a temporal_id value of 0, a temporal layer 1010 composed of NAL units having a temporal_id value of 1, and a temporal layer 1020 composed of NAL units having a temporal_id value of 2.
- The temporal layer 1000 composed of NAL units having a temporal_id value of 0 may provide the lowest temporal scalability, and the temporal layer 1020 composed of NAL units having a temporal_id value of 2 may provide the highest temporal scalability.
- a box labeled I refers to an I picture and a box labeled B refers to a B picture.
- the arrow indicates a reference relationship as to whether the picture refers to another picture.
- The temporal layer 1020 having a temporal_id value of 2 includes TRAIL_N pictures and TFD_N and DLP_N leading pictures.
- Since a TRAIL_N picture is a trailing picture that is not referenced by another picture, it can be removed from the bitstream without affecting the decoding process of other pictures.
- TFD_N and DLP_N leading pictures are likewise leading pictures that are not referenced by other pictures, and can be removed from the bitstream without affecting the decoding process of other pictures. That is, since information on whether a leading picture is a reference picture or a non-reference picture can be derived from the NAL unit type, the bitstream extraction process for leading pictures can be performed in the same way as the bitstream extraction process for trailing pictures. Accordingly, since the pictures of the temporal layer 1020 with a temporal_id value of 2 are all non-reference pictures, they may be extracted from the bitstream during decoding.
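Because the new types make the reference status of leading pictures explicit, the trailing-picture extraction rule extends to them unchanged. A sketch over a simplified (temporal_id, type-name) model; note that the suffix test matches only a trailing "_N", so names such as IDR_N_LP are left untouched:

```python
def extract_nonref_pictures(pictures):
    """Drop every non-referenced picture (types ending in _N, e.g.
    TRAIL_N, TFD_N, DLP_N) from a simplified bitstream model of
    (temporal_id, nal_unit_type_name) pairs."""
    return [p for p in pictures if not p[1].endswith("_N")]
```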
- FIG. 11 is a flowchart schematically illustrating a method of encoding image information according to an embodiment of the present invention. The method of FIG. 11 may be performed by the encoding apparatus described above.
- The encoding apparatus determines a NAL unit type according to whether the NAL unit is a reference picture (S1100).
- the NAL unit may be a NAL unit including a residual signal for the current picture generated by performing inter prediction on the basis of the current picture.
- The encoding apparatus may determine the NAL unit type according to the information included in the NAL unit (the residual signal for the current picture). For example, the NAL unit type may be determined according to whether the NAL unit is a leading picture referenced by another picture or a leading picture not referenced by another picture. As described above, a leading picture is a picture that precedes the random access point picture in output order but follows it in decoding order, and may include a decodable first leading picture and a non-decodable second leading picture.
- If the NAL unit is a first leading picture referenced by another picture, the encoding apparatus may determine DLP_R or RADL_R as the NAL unit type, and if the NAL unit is a first leading picture not referenced by another picture, the encoding apparatus may determine DLP_N or RADL_N as the NAL unit type.
- If the NAL unit is a second leading picture referenced by another picture, the encoding apparatus may determine TFD_R or RASL_R as the NAL unit type, and if the NAL unit is a second leading picture not referenced by another picture, the encoding apparatus may determine TFD_N or RASL_N as the NAL unit type.
- the encoding apparatus encodes and transmits a bitstream including information on the NAL unit and the NAL unit type (S1110).
- the encoding apparatus may encode information about the NAL unit type in the nal_unit_type syntax and store the information in the NAL unit header.
- the encoding apparatus may generate a bitstream further including identification information of a temporal layer for identifying a temporal scalable layer of a NAL unit. Identification information of the temporal layer may be encoded with a temporal_id syntax and stored in the NAL unit header.
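The encoder-side packing of nal_unit_type and temporal_id into the NAL unit header can be sketched as follows, assuming the two-byte HEVC-style layout (an illustration, not the normative bitstream writer):

```python
def write_nal_header(nal_unit_type: int, temporal_id: int,
                     layer_id: int = 0) -> bytes:
    """Pack nal_unit_type and temporal_id into a two-byte HEVC-style
    NAL unit header; temporal_id is stored as nuh_temporal_id_plus1."""
    b0 = ((nal_unit_type & 0x3F) << 1) | ((layer_id >> 5) & 0x01)
    b1 = ((layer_id & 0x1F) << 3) | ((temporal_id + 1) & 0x07)
    return bytes([b0, b1])
```

For example, a type code of 19 at temporal_id 0 in layer 0 yields the bytes `0x26 0x01`, the inverse of the parsing layout shown earlier in this document.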
- FIG. 12 is a flowchart schematically illustrating a method of decoding image information according to an embodiment of the present invention. The method of FIG. 12 may be performed by the decoding apparatus of FIG. 2 described above.
- the decoding apparatus receives a bitstream including information about a NAL unit (S1200).
- the information on the NAL unit includes information on the NAL unit type determined according to the nature and type of the picture included in the NAL unit.
- From the NAL unit type, information on whether the picture included in the NAL unit is a reference picture may be derived, in addition to the nature and type of the picture included in the corresponding NAL unit.
- the information on the NAL unit type may be stored in the NAL unit header with the nal_unit_type syntax and included in the bitstream. Since the detailed description of the NAL unit type has been described above, the description thereof will be omitted.
- The information on the NAL unit may further include identification information of a temporal layer supporting temporal scalability.
- the identification information of the temporal layer may be layer identification information for identifying a temporal scalable layer of the corresponding NAL unit.
- the identification information of the temporal layer may be stored in the NAL unit header with the temporal_id syntax and included in the bitstream.
- the decoding apparatus determines whether the NAL unit in the bitstream is a reference picture and decodes the NAL unit based on the information on the NAL unit type (S1210).
- From the information on the NAL unit type, it may be determined whether the NAL unit is a reference picture referenced by another picture or a non-reference picture not referenced by another picture. If the NAL unit is a non-reference picture not referenced by another picture, the NAL unit may be extracted and removed from the bitstream during decoding.
- the information on the NAL unit type may be information indicating whether the NAL unit is a leading picture referenced by another picture or whether the NAL unit is a leading picture not referenced by another picture.
- The leading picture is a picture that precedes the random access point picture in output order but follows it in decoding order, and may include a decodable first leading picture and a non-decodable second leading picture.
- If the NAL unit type included in the bitstream is DLP_R or RADL_R, the decoding apparatus may know that the NAL unit is a first leading picture referenced by another picture, and if the NAL unit type is DLP_N or RADL_N, the decoding apparatus may know that the NAL unit is a first leading picture not referenced by another picture.
- If the NAL unit type included in the bitstream is TFD_R or RASL_R, the decoding apparatus may know that the NAL unit is a second leading picture referenced by another picture, and if the NAL unit type is TFD_N or RASL_N, the decoding apparatus may know that the NAL unit is a second leading picture not referenced by another picture.
- the decoding apparatus may extract and decode the NAL unit corresponding to the NAL unit type from the bitstream.
- The decoding apparatus may derive the temporal layer of a NAL unit from the identification information of the temporal layer. If the NAL units of the same temporal layer are all pictures not referenced by another picture (e.g., DLP_N or RADL_N pictures and TFD_N or RASL_N pictures), the NAL units of that temporal layer may be removed from the bitstream.
- NAL units of the same temporal layer are NAL units having the same temporal layer identification value.
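The layer-wide removal just described can be sketched as follows; the (temporal_id, type-name) list is a simplified model, and the set of non-reference type names follows those used in the text:

```python
NON_REF_TYPES = {"TRAIL_N", "TSA_N", "STSA_N",
                 "DLP_N", "RADL_N", "TFD_N", "RASL_N"}

def remove_nonref_layer(nal_units, target_tid):
    """Remove all NAL units of a temporal layer when every picture in
    that layer is a non-referenced type; otherwise leave the bitstream
    unchanged. nal_units: list of (temporal_id, type_name) pairs."""
    layer = [u for u in nal_units if u[0] == target_tid]
    if layer and all(u[1] in NON_REF_TYPES for u in layer):
        return [u for u in nal_units if u[0] != target_tid]
    return nal_units
```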
Claims (12)
- A video decoding method comprising: receiving a bitstream including information on a NAL unit type; and decoding a NAL unit in the bitstream by checking, based on the information on the NAL unit type, whether the NAL unit is a reference picture, wherein the information on the NAL unit type indicates whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- The video decoding method of claim 1, further comprising, when there is random access, removing the NAL unit from the bitstream if the NAL unit type of a leading picture for the random access indicates the leading picture that is not referenced.
- The video decoding method of claim 2, further comprising receiving identification information of a temporal layer supporting temporal scalability, wherein the removing of the NAL unit comprises removing, from the bitstream and based on the identification information of the temporal layer, NAL units of the same temporal layer in the bitstream when their NAL unit type indicates the leading picture that is not referenced, the NAL units of the same temporal layer being NAL units having the same identification value of the temporal layer.
- The video decoding method of claim 1, wherein the leading picture precedes a random access point picture in output order and follows it in decoding order, and the leading picture includes a first leading picture that is decodable and a second leading picture that is not decodable.
- A video decoding apparatus comprising an entropy decoding unit configured to receive a bitstream including information on a NAL unit type and to entropy-decode a NAL unit in the bitstream by checking, based on the information on the NAL unit type, whether the NAL unit is a reference picture, wherein the information on the NAL unit type indicates whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- The video decoding apparatus of claim 5, wherein, when there is random access, the entropy decoding unit removes the NAL unit from the bitstream if the NAL unit type of a leading picture for the random access indicates the leading picture that is not referenced.
- The video decoding apparatus of claim 6, wherein the entropy decoding unit further receives identification information of a temporal layer supporting temporal scalability and removes, from the bitstream and based on the identification information of the temporal layer, NAL units of the same temporal layer in the bitstream when their NAL unit type indicates the leading picture that is not referenced, the NAL units of the same temporal layer being NAL units having the same identification value of the temporal layer.
- The video decoding apparatus of claim 5, wherein the leading picture precedes a random access point picture in output order and follows it in decoding order, and the leading picture includes a first leading picture that is decodable and a second leading picture that is not decodable.
- A video encoding method comprising: generating a residual signal for a current picture by performing inter prediction based on the current picture; and transmitting a bitstream including a NAL unit generated based on the residual signal for the current picture and information on the NAL unit, wherein the information on the NAL unit includes information on a NAL unit type determined according to whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- The video encoding method of claim 9, wherein the leading picture precedes a random access point picture in output order and follows it in decoding order, and the leading picture includes a first leading picture that is decodable and a second leading picture that is not decodable.
- A video encoding apparatus comprising: a prediction unit configured to generate a residual signal for a current picture by performing inter prediction based on the current picture; and an entropy encoding unit configured to entropy-encode a NAL unit generated based on the residual signal for the current picture and information on the NAL unit and to output a bitstream, wherein the information on the NAL unit includes information on a NAL unit type determined according to whether the NAL unit is a leading picture that is referenced or a leading picture that is not referenced.
- The video encoding apparatus of claim 11, wherein the leading picture precedes a random access point picture in output order and follows it in decoding order, and the leading picture includes a first leading picture that is decodable and a second leading picture that is not decodable.
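The removal step recited in claims 2 and 3 can be sketched as follows: when decoding starts from a random access point, leading pictures whose NAL unit type marks them as not referenced (and hence not decodable from that point) are dropped from the bitstream before decoding. The type codes below are hypothetical placeholders for this sketch, not the actual HEVC `nal_unit_type` values:

```python
# Illustrative NAL unit type codes (assumptions, not HEVC's numbering).
REFERENCED_LEADING = 0      # decodable leading picture
NON_REFERENCED_LEADING = 1  # non-decodable leading picture
TRAILING = 2

def remove_non_referenced_leading(nal_units, random_access=True):
    """Drop non-referenced leading pictures when random access occurs;
    otherwise pass the stream through unchanged."""
    if not random_access:
        return list(nal_units)
    return [u for u in nal_units if u["type"] != NON_REFERENCED_LEADING]

stream = [
    {"type": TRAILING, "poc": 8},               # random access point
    {"type": NON_REFERENCED_LEADING, "poc": 5}, # refers to pictures before it
    {"type": REFERENCED_LEADING, "poc": 6},     # decodable from the RAP
]
decodable = remove_non_referenced_leading(stream)
print([u["poc"] for u in decodable])  # → [8, 6]
```

Because the NAL unit type alone signals decodability, this filtering can be done by a stream extractor without parsing picture payloads, which is the practical benefit the claims describe.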
Priority Applications (14)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020217003703A KR102259794B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020227042534A KR102515017B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020227000377A KR102397259B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020207029208A KR102215438B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020227029148A KR102477476B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020217016212A KR102349338B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| US14/427,815 US9794594B2 (en) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020157006323A KR102167096B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| KR1020227015441A KR102444264B1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| US15/710,985 US10075736B2 (en) | 2012-09-13 | 2017-09-21 | Method and apparatus for encoding/decoding images |
| US16/056,087 US10602189B2 (en) | 2012-09-13 | 2018-08-06 | Method and apparatus for encoding/decoding images |
| US16/785,114 US10972757B2 (en) | 2012-09-13 | 2020-02-07 | Method and apparatus for encoding/decoding images |
| US17/189,582 US11477488B2 (en) | 2012-09-13 | 2021-03-02 | Method and apparatus for encoding/decoding images |
| US17/896,359 US11831922B2 (en) | 2012-09-13 | 2022-08-26 | Method and apparatus for encoding/decoding images |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261700335P | 2012-09-13 | 2012-09-13 | |
| US61/700,335 | 2012-09-13 |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/427,815 A-371-Of-International US9794594B2 (en) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
| US15/710,985 Continuation US10075736B2 (en) | 2012-09-13 | 2017-09-21 | Method and apparatus for encoding/decoding images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014042460A1 (ko) | 2014-03-20 |
Family
ID=50278476
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2013/008303 Ceased WO2014042460A1 (ko) | 2012-09-13 | 2013-09-13 | Method and apparatus for encoding/decoding images |
Country Status (3)
| Country | Link |
|---|---|
| US (6) | US9794594B2 (ko) |
| KR (8) | KR102259794B1 (ko) |
| WO (1) | WO2014042460A1 (ko) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113994704A (zh) * | 2019-06-18 | 2022-01-28 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
| CN115104315A (zh) * | 2019-12-23 | 2022-09-23 | LG Electronics | Image or video coding based on NAL-unit-related information |
| CN115104316A (zh) * | 2019-12-23 | 2022-09-23 | LG Electronics | NAL unit type based image or video coding for slices or pictures |
| CN115244936A (zh) * | 2020-03-05 | 2022-10-25 | LG Electronics | Image encoding/decoding method and apparatus based on mixed NAL unit type, and method of transmitting a bitstream |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106416250B (zh) | 2013-12-02 | 2020-12-04 | Nokia Technologies | Video encoding and decoding |
| US10547834B2 (en) | 2014-01-08 | 2020-01-28 | Qualcomm Incorporated | Support of non-HEVC base layer in HEVC multi-layer extensions |
| KR102191878B1 (ko) * | 2014-07-04 | 2020-12-16 | Samsung Electronics | Method and apparatus for receiving media packets in a multimedia system |
| CN104768011B (zh) * | 2015-03-31 | 2018-03-06 | Zhejiang University | Image encoding/decoding method and related device |
| KR102477964B1 (ko) * | 2015-10-12 | 2022-12-16 | Samsung Electronics | Technique for enabling random access and playback of a video bitstream in a media transport system |
| CN107592540B (zh) * | 2016-07-07 | 2020-02-11 | Tencent Technology (Shenzhen) Co., Ltd. | Video data processing method and device |
| US11265580B2 (en) * | 2019-03-22 | 2022-03-01 | Tencent America LLC | Supplemental enhancement information messages for neural network based video post processing |
| KR102874357B1 (ko) * | 2019-07-03 | 2025-10-20 | Huawei Technologies Co., Ltd. | Types of reference pictures in a reference picture list |
| US11265357B2 (ic) * | 2019-10-10 | 2022-03-01 | Microsoft Technology Licensing, Llc | AV1 codec for real-time video communication |
| KR102838856B1 (ko) * | 2019-12-23 | 2025-07-24 | LG Electronics | NAL unit type based image or video coding |
| KR20220141794A (ko) * | 2020-03-05 | 2022-10-20 | LG Electronics | Image encoding/decoding method and apparatus based on mixed NAL unit type, and method of transmitting a bitstream |
| WO2021194208A1 (ko) * | 2020-03-23 | 2021-09-30 | LG Electronics | Image encoding/decoding method and apparatus based on mixed NAL unit type, and recording medium storing a bitstream |
| WO2021194211A1 (ko) * | 2020-03-23 | 2021-09-30 | LG Electronics | Image encoding/decoding method and apparatus based on mixed NAL unit type, and recording medium storing a bitstream |
| US12489945B2 (en) * | 2023-07-26 | 2025-12-02 | Adeia Guides Inc. | Client-side decoding and playout at channel changes |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20050019864A (ko) * | 2002-07-16 | 2005-03-03 | Nokia Corporation | Method for random access and gradual picture refresh in video coding |
| KR20070016659A (ko) * | 2005-08-04 | 2007-02-08 | Samsung Electronics | Picture skip method and apparatus |
| US20100027964A1 (en) * | 2004-04-28 | 2010-02-04 | Tadamasa Toma | Stream generation apparatus, stream generation method, coding apparatus, coding method, recording medium and program thereof |
| WO2010123198A2 (ko) * | 2009-04-21 | 2010-10-28 | LG Electronics | Multi-view video signal processing method and apparatus |
| KR20110106465A (ko) * | 2009-01-28 | 2011-09-28 | Nokia Corporation | Method and apparatus for video coding and decoding |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101778235B (zh) * | 2004-04-28 | 2013-06-19 | Panasonic Corporation | Moving picture generation device, encoding device, decoding device, and multiplexing device |
| US9674525B2 (en) * | 2011-07-28 | 2017-06-06 | Qualcomm Incorporated | Multiview video coding |
| US9338474B2 (en) * | 2011-09-23 | 2016-05-10 | Qualcomm Incorporated | Reference picture list construction for video coding |
| US9264717B2 (en) * | 2011-10-31 | 2016-02-16 | Qualcomm Incorporated | Random access with advanced decoded picture buffer (DPB) management in video coding |
| US20130272619A1 (en) * | 2012-04-13 | 2013-10-17 | Sharp Laboratories Of America, Inc. | Devices for identifying a leading picture |
| US9351016B2 (en) * | 2012-04-13 | 2016-05-24 | Sharp Kabushiki Kaisha | Devices for identifying a leading picture |
| US9532055B2 (en) * | 2012-04-16 | 2016-12-27 | Microsoft Technology Licensing, Llc | Constraints and unit types to simplify video random access |
| US20140003520A1 (en) * | 2012-07-02 | 2014-01-02 | Cisco Technology, Inc. | Differentiating Decodable and Non-Decodable Pictures After RAP Pictures |
- 2013
- 2013-09-13 KR KR1020217003703A patent/KR102259794B1/ko active Active
- 2013-09-13 KR KR1020227000377A patent/KR102397259B1/ko active Active
- 2013-09-13 KR KR1020217016212A patent/KR102349338B1/ko active Active
- 2013-09-13 KR KR1020227042534A patent/KR102515017B1/ko active Active
- 2013-09-13 US US14/427,815 patent/US9794594B2/en active Active
- 2013-09-13 KR KR1020157006323A patent/KR102167096B1/ko active Active
- 2013-09-13 KR KR1020207029208A patent/KR102215438B1/ko active Active
- 2013-09-13 KR KR1020227015441A patent/KR102444264B1/ko active Active
- 2013-09-13 WO PCT/KR2013/008303 patent/WO2014042460A1/ko not_active Ceased
- 2013-09-13 KR KR1020227029148A patent/KR102477476B1/ko active Active
- 2017
- 2017-09-21 US US15/710,985 patent/US10075736B2/en active Active
- 2018
- 2018-08-06 US US16/056,087 patent/US10602189B2/en active Active
- 2020
- 2020-02-07 US US16/785,114 patent/US10972757B2/en active Active
- 2021
- 2021-03-02 US US17/189,582 patent/US11477488B2/en active Active
- 2022
- 2022-08-26 US US17/896,359 patent/US11831922B2/en active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20050019864A (ko) * | 2002-07-16 | 2005-03-03 | Nokia Corporation | Method for random access and gradual picture refresh in video coding |
| US20100027964A1 (en) * | 2004-04-28 | 2010-02-04 | Tadamasa Toma | Stream generation apparatus, stream generation method, coding apparatus, coding method, recording medium and program thereof |
| KR20070016659A (ko) * | 2005-08-04 | 2007-02-08 | Samsung Electronics | Picture skip method and apparatus |
| KR20110106465A (ko) * | 2009-01-28 | 2011-09-28 | Nokia Corporation | Method and apparatus for video coding and decoding |
| WO2010123198A2 (ko) * | 2009-04-21 | 2010-10-28 | LG Electronics | Multi-view video signal processing method and apparatus |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113994704A (zh) * | 2019-06-18 | 2022-01-28 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
| CN113994704B (zh) * | 2019-06-18 | 2024-05-17 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
| CN115104315A (zh) * | 2019-12-23 | 2022-09-23 | LG Electronics | Image or video coding based on NAL-unit-related information |
| CN115104316A (zh) * | 2019-12-23 | 2022-09-23 | LG Electronics | NAL unit type based image or video coding for slices or pictures |
| CN115244936A (zh) * | 2020-03-05 | 2022-10-25 | LG Electronics | Image encoding/decoding method and apparatus based on mixed NAL unit type, and method of transmitting a bitstream |
Also Published As
| Publication number | Publication date |
|---|---|
| KR20150054834A (ko) | 2015-05-20 |
| US20210185360A1 (en) | 2021-06-17 |
| US10075736B2 (en) | 2018-09-11 |
| KR20220165289A (ko) | 2022-12-14 |
| KR102515017B1 (ko) | 2023-03-29 |
| US20220408117A1 (en) | 2022-12-22 |
| US9794594B2 (en) | 2017-10-17 |
| KR102167096B1 (ko) | 2020-10-16 |
| US20200177924A1 (en) | 2020-06-04 |
| US11831922B2 (en) | 2023-11-28 |
| US10972757B2 (en) | 2021-04-06 |
| US20180054631A1 (en) | 2018-02-22 |
| KR102444264B1 (ko) | 2022-09-16 |
| KR102397259B1 (ko) | 2022-05-12 |
| KR20220121917A (ko) | 2022-09-01 |
| KR20220008385A (ko) | 2022-01-20 |
| US20180352261A1 (en) | 2018-12-06 |
| KR20210064438A (ko) | 2021-06-02 |
| US20150237377A1 (en) | 2015-08-20 |
| KR20200121374A (ko) | 2020-10-23 |
| US11477488B2 (en) | 2022-10-18 |
| KR20210018535A (ko) | 2021-02-17 |
| US10602189B2 (en) | 2020-03-24 |
| KR102477476B1 (ko) | 2022-12-14 |
| KR20220062694A (ko) | 2022-05-17 |
| KR102215438B1 (ko) | 2021-02-15 |
| KR102259794B1 (ko) | 2021-06-02 |
| KR102349338B1 (ko) | 2022-01-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2014042460A1 (ko) | Method and apparatus for encoding/decoding images | |
| WO2020071830A1 (ko) | Image coding method using history-based motion information and apparatus therefor | |
| WO2013162249A1 (ko) | Video encoding method, video decoding method, and apparatus using the same | |
| WO2013157826A1 (ko) | Image information decoding method, image decoding method, and apparatus using the same | |
| WO2014092407A1 (ko) | Image decoding method and apparatus using the same | |
| WO2014163241A1 (ko) | Video processing method and apparatus | |
| WO2014003379A1 (ko) | Image decoding method and apparatus using the same | |
| WO2014084656A1 (ko) | Method and apparatus for encoding/decoding images supporting a plurality of layers | |
| WO2015009036A1 (ko) | Method and apparatus for inter-layer prediction based on temporal sub-layer information | |
| WO2017061671A1 (ko) | Method and apparatus for image coding based on adaptive transform in an image coding system | |
| WO2015056941A1 (ko) | Multilayer-based image encoding/decoding method and apparatus | |
| WO2014007515A1 (ko) | Image information coding method and apparatus using the same | |
| WO2021066618A1 (ko) | Image or video coding based on signaling of transform skip and palette coding related information | |
| WO2018128222A1 (ko) | Image decoding method and apparatus in an image coding system | |
| WO2021177791A1 (ko) | Image encoding/decoding method and apparatus based on mixed NAL unit type, and method of transmitting a bitstream | |
| WO2014051372A1 (ko) | Image decoding method and apparatus using the same | |
| WO2014038905A2 (ko) | Image decoding method and apparatus using the same | |
| WO2021132964A1 (ko) | Image or video coding based on NAL-unit-related information | |
| WO2021066609A1 (ko) | Image or video coding based on high-level syntax elements related to transform skip and palette coding | |
| WO2021201617A1 (ko) | Image encoding/decoding method and apparatus based on subpicture information aligned between layers, and recording medium storing a bitstream | |
| WO2021066610A1 (ko) | Image or video coding based on transform skip and palette coding related information | |
| WO2021235895A1 (ko) | Image coding method and apparatus therefor | |
| WO2021246791A1 (ko) | Method and apparatus for processing high-level syntax in an image/video coding system | |
| WO2021132963A1 (ko) | NAL unit type based image or video coding for slices or pictures | |
| WO2016204372A1 (ko) | Image filtering method and apparatus using a filter bank in an image coding system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13837648; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 20157006323; Country of ref document: KR; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 14427815; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13837648; Country of ref document: EP; Kind code of ref document: A1 |