WO2008004816A1 - Scalable video encoding/decoding method and apparatus thereof - Google Patents
- Publication number
- WO2008004816A1 (application PCT/KR2007/003256)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- weight value
- enhancement layer
- current frame
- scalable video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/34—Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- FIG. 4 is a flowchart illustrating a scalable video encoding method according to another exemplary embodiment of the present invention. In the following description of the scalable video encoding method of FIG. 4, description similar to that of the method of FIG. 2 will be omitted.
- the scalable video encoding apparatus determines whether to set a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, in operation S410.
- the scalable video encoding apparatus determines whether a block mode of a base layer of the current frame is a skip mode in operation S420. If so, the scalable video encoding apparatus overrides a previous weight value with the skip-mode weight value in operation S430.
- the scalable video encoding apparatus generates a reference block for a block of an enhancement layer of the current frame using a weighted sum in operation S440. More specifically, if the overriding flag is set to '1' and the block of the base layer is determined to be in the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of a counterpart block of an enhancement layer of a reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied.
- If the overriding flag is not set to '1' or the block mode of the base layer of the current frame is not the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
- the scalable video encoding apparatus performs AR-FGS block encoding on the enhancement layer of the current frame based on the generated reference block in operation S450.
- FIG. 5 is a flowchart illustrating a scalable video decoding method according to another exemplary embodiment of the present invention.
- description similar to that of the method of FIG. 3 will be omitted.
- a scalable video decoding apparatus receives a bitstream including a block that has been encoded in the skip mode from a scalable video encoding apparatus in operation S510.
- the scalable video decoding apparatus determines whether a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, has been set in operation S520.
- the received bitstream may include the block that has been encoded in the skip mode, information indicating whether the skip mode has been implemented, skip-mode information, and a skip-mode weight value for reference block generation.
- the scalable video decoding apparatus determines whether a mode of a block of a base layer of the current frame is a skip mode in operation S530. If the block of the base layer is in the skip mode, the scalable video decoding apparatus overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with the skip-mode weight value in operation S540. The scalable video decoding apparatus generates a reference block for a block of an enhancement layer of the current frame to be decoded using a weighted sum in operation S550.
- the scalable video decoding apparatus generates the reference block by means of a weighted sum of a counterpart block of an enhancement layer of the reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the overriding flag is not set to '1' or the block mode of the base layer of the current frame is not the skip mode, the scalable video decoding apparatus generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
- the scalable video decoding apparatus performs AR-FGS block decoding on the enhancement layer of the current frame based on the generated reference block in operation S560.
- Although the block mode of the base layer is the skip mode in FIGS. 2 through 5, it can be easily understood by those of ordinary skill in the art that a previous weight value may be overridden with a new weight value when a block of a base layer of the current frame is within a specific range from a value that is predicted from reference pictures, i.e., blocks located to the left of, to the left of and above, and above the block of the base layer of the current frame according to the H.264 standard, as well as when the block of the base layer of the current frame is in the skip mode.
- the skip-mode weight value used for overriding can be coded into a slice header using n-bit fixed-length coding or variable length coding.
- FIG. 6 illustrates a syntax for expressing a scalable video encoding method according to a first exemplary embodiment of the present invention.
- scalable video coding is performed in the syntax of a slice header in scalable extension.
- A flag "override_max_diff_ref_scale_for_zero_base_block_flag" indicating whether to override with a skip-mode weight value is coded. If the flag is '1', skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using 2 bits. If the flag is '0', "max_diff_ref_scale_for_skipped_base_block" is not coded.
- max_diff_ref_scale_for_skipped_base_block ranges between 0 and 3.
- a weight value for an enhancement layer is set to 32/32; for 1, the weight value is set to 31/32; the weight value is set to
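As a rough illustration of this first embodiment, an encoder could write the flag and the 2-bit value into the slice header as sketched below. The BitWriter class and function names are hypothetical and are not taken from the SVC reference software; only the syntax element names and the 1-bit/2-bit layout come from the description above.

```python
class BitWriter:
    """Minimal MSB-first bit writer, used only for this illustration."""
    def __init__(self):
        self.bits = []

    def u(self, value, n):
        # n-bit fixed-length code, most significant bit first
        for i in reversed(range(n)):
            self.bits.append((value >> i) & 1)


def write_skip_override_syntax(bw, override_flag, max_diff_ref_scale_for_skipped_base_block=0):
    """First embodiment: a 1-bit flag followed by a 2-bit value only when the flag is '1'."""
    # override_max_diff_ref_scale_for_zero_base_block_flag
    bw.u(1 if override_flag else 0, 1)
    if override_flag:
        # max_diff_ref_scale_for_skipped_base_block is restricted to 0..3, hence 2 bits.
        bw.u(max_diff_ref_scale_for_skipped_base_block & 0x3, 2)


# Example: flag set and a value of 3 produce the bits [1, 1, 1].
bw = BitWriter()
write_skip_override_syntax(bw, True, 3)
print(bw.bits)
```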
- FIG. 7 illustrates a syntax for expressing a scalable video encoding method according to a second exemplary embodiment of the present invention.
- scalable video coding is performed in the syntax of a slice header in scalable extension and skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using 5 bits.
- FIG. 8 illustrates a syntax for expressing a scalable video encoding method according to a third exemplary embodiment of the present invention.
- scalable video coding is performed in the syntax of a slice header in scalable extension and skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using a variable-length code, e.g., an Exp-Golomb code used in H.264.
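For the third embodiment, the same value could be written with the unsigned Exp-Golomb code used for ue(v) syntax elements in H.264. The sketch below is a generic ue(v) encoder written for illustration, not code extracted from the standard.

```python
def exp_golomb_ue(value):
    """Return the unsigned Exp-Golomb (ue(v)) codeword of a non-negative integer as a bit string."""
    code_num = value + 1
    num_bits = code_num.bit_length()
    # (num_bits - 1) leading zeros followed by the binary representation of value + 1.
    return "0" * (num_bits - 1) + format(code_num, "b")


# The values 0..3 allowed for max_diff_ref_scale_for_skipped_base_block map to
# '1', '010', '011' and '00100'.
print([exp_golomb_ue(v) for v in range(4)])
```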
- a pseudo code that is applied in scalable video coding standardization is as follows:
- FIGS. 9A to 9C illustrate the syntax of a slice header in scalable extension including a syntax for expressing a scalable video encoding method according to an exemplary embodiment of the present invention.
- the pseudo code is used as a syntax according to the scalable video coding international standard, and the semantics of the parameters used in FIGS. 9A to 9C are as follows: override_max_diff_ref_scale_for_zero_base_block_flag equal to 1 specifies that max_diff_ref_scale_for_skipped_base_block is present in the progressive slice of a key picture.
- max_diff_ref_scale_for_skipped_base_block specifies the maximum scaling factor to be used for scaling the differential reference signal in constructing the inter prediction samples used in decoding the progressive slice of a key picture, when the transform block in the base layer is skipped.
- the value of max_diff_ref_scale_for_skipped_base_block shall be in the range of 0 to 3, inclusive.
- MaxDiffRefScaleSkippedBaseBlock is derived as follows.
- MaxDiffRefScaleSkippedBaseBlock is set equal to max_diff_ref_scale_for_skipped_base_block.
- the following shows embodiments of a decoding process with respect to the pseudo code, i.e., a scaling process for differential inter-prediction samples for 4x4 luma blocks, a scaling process for differential inter-prediction samples for 8x8 luma blocks, and a scaling process for differential inter-prediction samples for chroma blocks.
- a scaling factor sF is derived as follows.
- sF is set equal to MaxDiffRefScaleSkippedBlock.
- sF is set equal to MaxDiffRefScaleZeroBaseBlock. Otherwise (ctx4x4ld is not equal to 0), sF is set equal to max(0, MaxDiffRefScaleZeroBaseBlock - 4).
- Let numBaseSig be the number of values equal to 1 inside the 8x8 array sBC. Depending on numBaseSig, the following applies. If numBaseSig is equal to 0, the following applies.
- Let numBaseSigAC be the number of values equal to 1 inside the 4x4 array sBC[chroma4x4BlkIdx].
- a scaling factor sF is derived as follows.
- sF is set equal to MaxDiffRefScaleSkippedBlock. Otherwise, sF is set equal to MaxDiffRefScaleZeroBaseBlock.
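The scaling-factor selection quoted above can be paraphrased roughly as follows. This is only a reading of the fragments reproduced here, with the surrounding context (whether the base-layer block is skipped, the number of significant base-layer coefficients, the 4x4 context index) passed in as plain arguments; it should not be taken as the normative decoding process.

```python
def derive_scaling_factor(base_block_skipped, num_base_sig, ctx4x4_is_zero,
                          max_scale_skipped_base_block, max_scale_zero_base_block):
    """Rough, non-normative reading of the sF derivation fragments above.

    max_scale_skipped_base_block corresponds to MaxDiffRefScaleSkippedBaseBlock and
    max_scale_zero_base_block to MaxDiffRefScaleZeroBaseBlock.
    """
    if base_block_skipped:
        # The co-located base-layer transform block is skipped.
        return max_scale_skipped_base_block
    if num_base_sig == 0:
        # No significant base-layer coefficients in the block.
        if ctx4x4_is_zero:
            return max_scale_zero_base_block
        return max(0, max_scale_zero_base_block - 4)
    # The remaining cases of the decoding process are not covered by the fragments quoted here.
    return None
```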
- FIG. 10 is a block diagram schematically illustrating the internal structure of a scalable video encoding apparatus according to an exemplary embodiment of the present invention. In the following description of FIG. 10, description similar to that of previous embodiments will be omitted.
- the scalable video encoding apparatus includes a mode determination unit 1010, a weight value overriding unit 1020, a reference block generation unit 1030, and an encoding unit 1040.
- the mode determination unit 1010 determines whether a counterpart block of a base layer of a current frame to be encoded, which corresponds to a block of an enhancement layer of the current frame, is in a skip mode.
- the mode determination unit 1010 also determines whether to set a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag.
- the mode determination unit 1010 determines whether the counterpart block of the base layer is in the skip mode if it sets the overriding flag to '1', and does not determine whether the counterpart block of the base layer is in the skip mode if it does not set the overriding flag.
- the weight value overriding unit 1020 overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with a skip-mode weight value set greater than the previous weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame.
- the reference block generation unit 1030 generates a reference block based on a weight value set for the block of the enhancement layer of the reference frame. If the mode determination unit 1010 sets the overriding flag to '1' and determines that the block of the base layer of the current frame is in the skip mode, the reference block generation unit 1030 generates the reference block by means of a weighted sum of a block of the enhancement layer of the reference frame to which a new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied.
- If the mode determination unit 1010 determines that the block of the base layer of the current frame is not in the skip mode, the reference block generation unit 1030 generates the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value. If the mode determination unit 1010 does not set the overriding flag to '1', the reference block generation unit 1030 generates the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
- the encoding unit 1040 performs AR-FGS encoding on a block of the enhancement layer of the current frame using the generated reference block, thereby generating a bitstream.
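The unit decomposition of FIG. 10 might be mirrored in code roughly as below. The class, its methods, and the residual computation are illustrative assumptions only; the actual AR-FGS encoder performs transform-domain refinement coding that is omitted here.

```python
class ScalableVideoEncoderSketch:
    """Illustrative wiring of the FIG. 10 units: mode determination,
    weight value overriding, reference block generation, and AR-FGS encoding."""

    def __init__(self, previous_weight, skip_mode_weight, override_flag):
        self.previous_weight = previous_weight      # per-slice weight for the enhancement layer
        self.skip_mode_weight = skip_mode_weight    # overriding weight, greater than previous_weight
        self.override_flag = override_flag          # True when overriding is enabled for skip-mode blocks

    def encode_block(self, enh_cur_block, enh_ref_block, base_cur_block, base_is_skip):
        # Mode determination unit: the base-layer mode is only checked when the flag is set.
        use_skip_weight = self.override_flag and base_is_skip
        # Weight value overriding unit: override the per-slice weight for skip-mode blocks.
        w = self.skip_mode_weight if use_skip_weight else self.previous_weight
        # Reference block generation unit: weighted sum of the enhancement-layer reference
        # block and the base-layer block of the current frame (complementary weight assumed).
        reference = w * enh_ref_block + (1.0 - w) * base_cur_block
        # Encoding unit: code the enhancement-layer block against the generated reference
        # (entropy and transform coding omitted in this sketch).
        return enh_cur_block - reference
```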
- FIG. 11 is a block diagram schematically illustrating the internal structure of a scalable video decoding apparatus according to an exemplary embodiment of the present invention. In the following description of FIG. 11 , description similar to that of previous embodiments will be omitted.
- the scalable video decoding apparatus includes a reception unit 1110, a mode determination unit 1120, a weight value overriding unit 1130, a reference block generation unit 1140, and a decoding unit 1150.
- the reception unit 1110 receives a bitstream including a block that has been encoded in a skip mode.
- the mode determination unit 1120 determines whether a block of a base layer of a current frame, which corresponds to a block of an enhancement layer of the current frame to be decoded, is in the skip mode. The mode determination unit 1120 also determines whether a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, has been set in the received bitstream. The mode determination unit 1120 determines whether the block of the base layer is in the skip mode if it confirms that the overriding flag is set to '1', and does not determine whether the block of the base layer is in the skip mode if the overriding flag is not set to '1'.
- the weight value overriding unit 1130 extracts the skip-mode weight value from the bitstream and overrides a previous weight value set for a counterpart block of an enhancement layer of a reference frame corresponding to the block of the enhancement layer of the current frame with the extracted skip-mode weight value.
- the reference block generation unit 1140 generates a reference block based on a weight value set for the block of the enhancement layer of the reference frame. If the mode determination unit 1120 confirms that the overriding flag is set to '1' and the block of the base layer of the current frame is in the skip mode, the reference block generation unit 1140 generates the reference block by means of a weighted sum of a counterpart block of the enhancement layer of the reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied.
- If the mode determination unit 1120 determines that the block of the base layer of the current frame is not in the skip mode, the reference block generation unit 1140 generates the reference block based on a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value. If the mode determination unit 1120 confirms that the overriding flag is not set to '1', the reference block generation unit 1140 generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
- the decoding unit 1150 performs AR-FGS block decoding on the block of the enhancement layer of the current frame using the generated reference block and reconstructs the block.
- FIGS. 12 through 15 are graphs for comparing peak signal-to-noise ratio (PSNR) versus bitrate performance of a method for scalable video coding according to exemplary embodiments of the present invention with PSNR versus bitrate performance of a method suggested in JSVM 5.10.
- Coding is performed using the syntax applied according to the scalable video coding international standard as illustrated in FIGS. 9A through 9C, the semantics, and the decoding process, and a previous weight value parameter "max_diff_ref_scale_for_zero_base_coeff" is fixed to 18/32 for an upper layer.
- a skip-mode weight value "max_diff_ref_scale_for_skipped_base_block" for a base layer is set to 28/32 (the graph marked with circles), 16/32 (the graph marked with triangles), and 8/32 (the graph marked with diamond shapes), for an upper layer.
- scalable video coding efficiency can be improved by an encoding/decoding method and apparatus to which a method of generating a reference block according to the present invention is applied. While the scalable video encoding/decoding method has been described as being implemented in units of a macroblock or a block, it can be easily predicted by those of ordinary skill in the art that the present invention can also be applied to a scalable video encoding/decoding method implemented in units of a slice or a frame.
- Although an FGS layer is a single layer in the foregoing description, it can also be easily predicted by those of ordinary skill in the art that the present invention can also be applied to a case where there are two FGS layers or more.
- the present invention can be embodied as code that is readable by a computer on a computer-readable recording medium.
- the computer-readable recording medium includes all kinds of recording devices storing data that is readable by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves such as transmission over the Internet.
- the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Also, functional programs, code, and code segments for implementing the present invention can be easily construed by programmers skilled in the art.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Provided is a scalable video encoding method and apparatus, in which, in adaptive reference fine grain scalability (AR-FGS) of scalable video coding, a weight value that is greater than a previous weight value provided for each slice overrides the previous weight value in order to generate a reference block for an enhancement layer when a macroblock mode of a base layer is a skip mode.
Description
SCALABLE VIDEO ENCODING/DECODING METHOD AND APPARATUS THEREOF
TECHNICAL FIELD
The present invention relates to a scalable video encoding/decoding method and apparatus, and more particularly, to a scalable video encoding/decoding method and apparatus, in which, in adaptive reference fine grain scalability (AR-FGS), when a macroblock mode of a base layer is a skip mode, a weight value of a macroblock in an enhancement layer is overridden by a skip-mode weight value that is greater than a previous weight value in order to generate a reference block, thereby improving coding efficiency.
BACKGROUND ART
In scalable video coding (SVC) that has been standardized by the Joint Video Team (JVT) of the Moving Picture Experts Group (MPEG) and the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T), adaptive reference fine grain scalability (AR-FGS) is a technique for improving coding efficiency by performing temporal prediction in fine grain scalability (FGS) coding of signal-to-noise ratio (SNR) scalability.
SNR scalable techniques improve display quality in proportion to the received bitrate under variable network conditions. FGS is a representative SNR scalable technique: the bitstream can be truncated according to network conditions, and display quality improves in proportion to the amount of bitstream received. However, an FGS encoder cannot know the bitrate at which the bitstream will actually be received, and thus cannot employ a temporal prediction scheme that yields the large coding efficiency improvement seen in a conventional video codec. If a temporal prediction scheme is used in FGS with no regard for this characteristic, drift occurs due to a mismatch between the reference images used for motion compensation in the encoder and the decoder, resulting in sharp degradation of both the reproduced image and coding efficiency.
Adaptive reference fine grain scalability (AR-FGS) provides both efficient drift control and improved temporal prediction performance. AR-FGS generates a reference block or a reference macroblock for motion compensation using a weighted sum of reference blocks obtained from the partially decoded upper layer and the lower layer. Using an AR-FGS method implemented in this way, FGS coding performance can be improved and drift can be controlled.
FIG. 1 is a conceptual diagram illustrating generation of a reference block in AR-FGS according to the prior art.
Referring to FIG. 1, the size of a block is M×N and X^n is the signal of a block to be coded in an FGS layer (enhancement layer). R^n is the signal of a motion compensation reference block generated by a weighted sum of the base layer and the enhancement layer. The signal of a reference block in the enhancement layer is indicated by R^(n-1), a quantized coefficient of the base layer is indicated by Q_B^n, and the transformation is indicated by F_X = f(X). A quantized transformation coefficient of the base layer is indicated by Q_B^n(u,v). In AR-FGS, a reference block is generated in the following two ways.
1. If the quantized coefficients in the base layer are all 0, a reference block is generated by a weighted sum of the counterpart block in the base layer and the counterpart block in the enhancement layer, using α as the weight value for the enhancement layer and 1 - α as the weight value for the base layer, as follows:

R^n = (1 - α) · X^n + α · R^(n-1), if Q_B^n = 0    (1)
2. In the other cases in which at least one quantized coefficient in the base layer is not 0, a reference block is generated in a transformation coefficient domain. If a transformation coefficient of the transformation coefficient domain in a position corresponding to the base layer is 0, a transformation coefficient corresponding to the base layer is multiplied by 1 - β and a transformation coefficient corresponding to the enhancement layer is multiplied by β in the transformation coefficient domain, thereby obtaining a sum of the multiplication results as a transformation coefficient as in Equation 2. If a transformation coefficient of the transformation coefficient domain in a position corresponding to the base layer is not 0, a signal of the base layer is used as in Equation 3. A reference block is generated by inverse transformation with respect to the obtained transformation coefficient.
F_R^n(u,v) = (1 - β) · F_X^n(u,v) + β · F_R^(n-1)(u,v), if Q_B^n(u,v) = 0    (2)
F_R^n(u,v) = F_X^n(u,v), if Q_B^n(u,v) ≠ 0    (3)
Weight values are provided for each slice: a weight value α for the case where the residue values of all pixels in a block of the base layer are all '0', and a weight value β for the case where some residue values of pixels in a block of the base layer are not '0' and thus some transformation coefficients obtained by transformation into the discrete cosine transformation (DCT) domain are not '0', are transmitted separately. The weight values (α, β) are weight values of the upper layer and range between 0 and 1. The weight values of the lower layer are (1 - α, 1 - β).
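The two cases above can be summarized in a short sketch. The following snippet only restates Equations (1) to (3) and is not the JSVM implementation; the array names and the externally supplied inverse_transform callable are assumptions made for illustration.

```python
import numpy as np

def arfgs_reference_block(x_n, r_prev, q_base, f_x_n, f_r_prev,
                          alpha, beta, inverse_transform):
    """Build an AR-FGS reference block following Equations (1) to (3).

    x_n      : block X^n of the current frame (pixel domain), weighted by (1 - alpha)
    r_prev   : enhancement-layer reference block R^(n-1) (pixel domain)
    q_base   : quantized base-layer transform coefficients Q_B^n(u, v)
    f_x_n    : transform coefficients F_X^n(u, v)
    f_r_prev : transform coefficients F_R^(n-1)(u, v)
    alpha, beta : per-slice weight values for the enhancement layer (between 0 and 1)
    inverse_transform : callable performing the inverse transform back to the pixel domain
    """
    if not np.any(q_base):
        # Equation (1): all base-layer coefficients are zero -> pixel-domain weighted sum.
        return (1.0 - alpha) * x_n + alpha * r_prev

    # Otherwise mix in the transform-coefficient domain.
    f_ref = np.where(q_base == 0,
                     (1.0 - beta) * f_x_n + beta * f_r_prev,  # Equation (2)
                     f_x_n)                                   # Equation (3)
    return inverse_transform(f_ref)
```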
FGS coding is performed using the generated reference block, thereby exploiting the advantage of a temporal prediction scheme. When compared to conventional FGS coding, such FGS coding exhibits improved performance in real-time video coding as well as general video coding.
Video coding techniques such as the MPEG-4 standard and the H.264 standard use various prediction schemes. Among these prediction schemes, a skip mode is a mode in which block data of a base layer does not exist and data of a reference picture is used, i.e., there is no temporal data change. Thus, performance improvement may be expected by using data of a reference picture on the assumption that there may also be no data change in an enhancement layer. Even if the enhancement layer is not transmitted, drift due to an incorrect reference is not likely to occur in a skip-mode block.
DESCRIPTION OF THE DRAWINGS
FIG. 1 is a conceptual diagram illustrating a method of generating a reference block in adaptive reference-fine grain scalability (AR-FGS) according to the prior art.
FIG. 2 is a flowchart illustrating a scalable video encoding method according to an exemplary embodiment of the present invention.
FIG. 3 is a flowchart illustrating a scalable video decoding method according to an exemplary embodiment of the present invention.
FIG. 4 is a flowchart illustrating a scalable video encoding method according to another exemplary embodiment of the present invention.
FIG. 5 is a flowchart illustrating a scalable video decoding method according to another exemplary embodiment of the present invention.
FIG. 6 illustrates a syntax for expressing a scalable video encoding method according to a first exemplary embodiment of the present invention.
FIG. 7 illustrates a syntax for expressing a scalable video encoding method according to a second exemplary embodiment of the present invention.
FIG. 8 illustrates a syntax for expressing a scalable video encoding method according to a third exemplary embodiment of the present invention.
FIGS. 9A to 9C illustrate the syntax of a slice header in scalable extension including a syntax for expressing a scalable video encoding method according to an exemplary embodiment of the present invention.
FIG. 10 is a block diagram schematically illustrating the internal structure of a scalable video encoding apparatus according to an exemplary embodiment of the present invention.
FIG. 11 is a block diagram schematically illustrating the internal structure of a scalable video decoding apparatus according to an exemplary embodiment of the present invention.
FIG. 12 is a graph for comparing peak signal-to-noise ratio (PSNR) versus bitrate performance of a scalable video encoding method according to a first exemplary embodiment of the present invention with PSNR versus bitrate performance of a method suggested in JSVM 5.10.
FIG. 13 is a graph for comparing PSNR versus bitrate performance of a scalable video encoding method according to a second exemplary embodiment of the present invention with PSNR versus bitrate performance of the method suggested in JSVM 5.10.
FIG. 14 is a graph for comparing PSNR versus bitrate performance of a scalable video encoding method according to a third exemplary embodiment of the present invention with PSNR versus bitrate performance of the method suggested in JSVM 5.10.
FIG. 15 is a graph for comparing PSNR versus bitrate performance of a scalable video encoding method according to a fourth exemplary embodiment of the present invention with PSNR versus bitrate performance of the method suggested in JSVM 5.10.
DETAILED DESCRIPTION OF THE INVENTION
TECHNICAL PROBLEM
The present invention provides a scalable video coding method and apparatus to improve coding performance and reduce the probability of drift when video data of a macroblock of a base layer is in a skip mode.
TECHNICAL SOLUTION
According to one aspect of the present invention, there is provided a scalable video encoding method including determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be encoded, is in a skip mode, overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode, and generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
ADVANTAGEOUS EFFECTS
According to the present invention, when video data of a macroblock of a base layer of the current frame is in a skip mode, a skip-mode weight value that is greater than a previous weight value provided for each slice in a counterpart block of an enhancement layer of a reference frame overrides the previous weight value when a reference block for an enhancement layer of the current frame is generated, thereby improving scalable video coding efficiency.
Moreover, it is possible to reduce the probability of drift due to a mismatch between reference images for motion compensation in an encoder and a decoder, when compared to the use of a temporal prediction scheme irrespective of whether the macroblock of the base layer is in the skip mode.
BEST MODE
According to one aspect of the present invention, there is provided a scalable video encoding method including determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be encoded, is in a skip mode, overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode, and generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
According to another aspect of the present invention, there is provided a scalable video decoding method including determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be decoded, is in a skip mode, overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode, and generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
According to another aspect of the present invention, there is provided a scalable video encoding apparatus including a mode determination unit, a weight value overriding unit, and a reference block generation unit. The mode determination unit determines whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be encoded, is in a skip mode. The weight value overriding unit overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode. The reference block generation unit generates a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
According to another aspect of the present invention, there is provided a scalable video decoding apparatus including a mode determination unit, a weight value overriding unit, and a reference block generation unit. The mode determination unit determines whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be decoded, is in a skip mode; the weight value overriding unit overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode; and the reference block generation unit generates a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
According to another aspect of the present invention, there is provided a computer-readable recording medium having embodied thereon a program for executing the scalable video encoding method and the scalable video decoding method.
MODE OF THE INVENTION
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be noted that like reference numerals refer to like elements illustrated in one or more of the drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted for conciseness and clarity. In the following description, the terms "picture" and "frame" indicate video data in a video sequence and are interchangeable.
FIG. 2 is a flowchart illustrating a scalable video encoding method according to an exemplary embodiment of the present invention.
Referring to FIG. 2, a scalable video encoding apparatus according to an exemplary embodiment of the present invention determines whether a mode of a block of a base layer of a current frame to be encoded, which corresponds to a block of an enhancement layer of the current frame to be encoded, is a skip mode in operation S210. The skip mode is a mode in which a block of a base layer of the current frame uses block data of a base layer of a reference frame without transmission of additional data of the base layer and there is no temporal data change. Thus, the scalable video encoding apparatus can determine whether the mode of the counterpart block of the base layer of the current frame is the skip mode by comparing the counterpart block of the base layer of the current frame with a counterpart block of the base layer of the reference frame and determining whether block data of the current frame is the same as block data of the reference frame in the temporal direction. If the mode of the block of the base layer of the current frame is the skip mode, the scalable video encoding apparatus overrides a previous weight value that has been set for a block of the enhancement layer of the reference frame with a new weight value, which will hereinafter be referred to as a 'skip-mode weight value', in operation S220.
By setting the skip-mode weight value greater than a previous weight value set for each slice, the rate of the use of data of an enhancement layer can increase, leading to improvement in coding efficiency. The skip-mode weight value can be transmitted with the previous weight value after being coded in a slice header. A decoder then checks a mode for each block, uses the skip-mode weight value only for a skip-mode block, and uses the previous weight value for blocks other than the skip-mode block, thereby generating a reference block.
The scalable video encoding apparatus generates a reference block for the block of the enhancement layer of the current frame to be encoded using a weighted sum in operation S230. If a block mode of the base layer of the current frame is the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of a counterpart block of an enhancement layer of the reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the block mode of the base layer of the current frame is not the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame by using the previous weight value.
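As an illustration of the weighted sum used in operation S230, the following C sketch builds a reference block from the enhancement-layer block of the reference frame and the co-located base-layer block of the current frame. It is a minimal sketch rather than the normative process: the weights are assumed to be expressed over a denominator of 32 as in the embodiments described later, the complementary base-layer weight (32 - w) is one plausible reading of "a weight value calculated from the new weight value", and the function and parameter names are illustrative.

```c
#include <stdint.h>

#define BLK 4            /* 4x4 block; larger block sizes work the same way   */
#define WEIGHT_DENOM 32  /* weights are expressed as w/32, as in FIGS. 6-8    */

/* Builds a reference block as a weighted sum of the enhancement-layer block
 * of the reference frame (enh_ref) and the co-located base-layer block of
 * the current frame (base_cur).  w_enh is either the skip-mode weight value
 * or the previous (per-slice) weight value, already chosen per block.        */
static void build_reference_block(const uint8_t enh_ref[BLK][BLK],
                                  const uint8_t base_cur[BLK][BLK],
                                  int w_enh,
                                  uint8_t ref_out[BLK][BLK])
{
    int w_base = WEIGHT_DENOM - w_enh;   /* assumption: complementary weight */
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++)
            ref_out[y][x] = (uint8_t)((w_enh * enh_ref[y][x] +
                                       w_base * base_cur[y][x] +
                                       WEIGHT_DENOM / 2) / WEIGHT_DENOM);
}
```

With a skip-mode weight value close to 32/32, the reference block is taken almost entirely from the enhancement layer of the reference frame, which is how the skip-mode override increases the use of enhancement-layer data.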
In operation S240, AR-FGS block encoding is performed on the block of the enhancement layer of the current frame based on the generated reference block.
FIG. 3 is a flowchart illustrating a scalable video decoding method according to an exemplary embodiment of the present invention.
Referring to FIG. 3, a scalable video decoding apparatus according to an exemplary embodiment of the present invention receives an encoded bitstream from a scalable video encoding apparatus in operation S310. The received bitstream may include a block that has been encoded in the skip mode, skip-mode information, and skip-mode weight value information for reference block generation.
The scalable video decoding apparatus determines whether a block mode of a base layer corresponding to a block of an enhancement layer of the current frame to be decoded in the received bitstream is a skip mode in operation S320. The determination of whether the block mode is the skip mode can be performed by referring to the skip-mode information included in the received bitstream, e.g., information indicating that a block has no data, information, such as a specific syntax element like a skip flag, indicating a block is in a skip mode, and the like.
If the block mode of the base layer of the current frame is the skip mode, the scalable video decoding apparatus overrides a previous weight value that has been set for a block of an enhancement layer of the reference frame with a skip-mode weight value in operation S330. More specifically, if the block mode of the base layer is the skip mode, the scalable video decoding apparatus extracts the skip-mode weight value included in the received bitstream and overrides the previous weight value set for the block of the enhancement layer of the reference frame with the extracted skip-mode weight value. The skip-mode weight value may be extracted from a slice header included in the bitstream. The scalable video decoding apparatus generates a reference block for the block of the enhancement layer of the current frame to be decoded using a weighted sum in operation S340. If the scalable video decoding apparatus determines that the block of the base layer of the current frame is in the skip mode, it generates the reference block by means of a weighted sum of a counterpart block of the enhancement layer of the reference frame to which the skip-mode weight value is applied and the counterpart block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the scalable video decoding apparatus determines that the block of the base layer of the current frame is not in the skip mode, it generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the counterpart block of the base layer of the current frame using the previous weight value.
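The decoder-side choice between the skip-mode weight value and the previous weight value, followed by reference block generation, can be sketched per block as below, reusing build_reference_block and BLK from the earlier sketch; base_block_is_skip stands in for whatever skip-mode information the bitstream carries, and the structure and names are illustrative rather than part of the described apparatus.

```c
/* Per-block illustration of operations S320 to S340: choose the weight and
 * build the reference block.  AR-FGS decoding of the enhancement-layer block
 * (operation S350) would then proceed from ref_out.                          */
struct slice_weights {
    int w_prev;   /* previous weight value signalled for the slice           */
    int w_skip;   /* skip-mode weight value carried in the slice header      */
};

static void decode_block_reference(const struct slice_weights *sw,
                                   int base_block_is_skip,
                                   const uint8_t enh_ref[BLK][BLK],
                                   const uint8_t base_cur[BLK][BLK],
                                   uint8_t ref_out[BLK][BLK])
{
    int w_enh = base_block_is_skip ? sw->w_skip : sw->w_prev;
    build_reference_block(enh_ref, base_cur, w_enh, ref_out);
}
```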
The scalable video decoding apparatus performs AR-FGS block decoding on the block of the enhancement layer of the current frame based on the generated reference block in operation S350. FIG. 4 is a flowchart illustrating a scalable video encoding method according to another exemplary embodiment of the present invention. In the following description of the scalable video encoding method of FIG. 4, description similar to that of the method of FIG. 2 will be omitted.
Referring to FIG. 4, the scalable video encoding apparatus determines whether to set a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, in operation S410.
If the overriding flag is set to '1', the scalable video encoding apparatus determines whether a block mode of a base layer of the current frame is a skip mode in operation S420.
If so, the scalable video encoding apparatus overrides a previous weight value with the skip-mode weight value in operation S430.
The scalable video encoding apparatus generates a reference block for a block of an enhancement layer of the current frame using a weighted sum in operation S440. More specifically, if the overriding flag is set to '1' and the block of the base layer is determined to be in the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of a counterpart block of an enhancement layer of a reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the overriding flag is not set to '1' or the block mode of the base layer of the current frame is not the skip mode, the scalable video encoding apparatus generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value. The scalable video encoding apparatus performs AR-FGS block encoding on the enhancement layer of the current frame based on the generated reference block in operation S450.
FIG. 5 is a flowchart illustrating a scalable video decoding method according to another exemplary embodiment of the present invention. In the following description of the scalable video decoding method of FIG. 5, description similar to that of the method of FIG. 3 will be omitted.
Referring to FIG. 5, a scalable video decoding apparatus receives a bitstream including a block that has been encoded in the skip mode from a scalable video encoding apparatus in operation S510. The scalable video decoding apparatus determines whether a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, has been set in operation S520. The received bitstream may include the block that has been encoded in the skip mode, information indicating whether the skip mode has been implemented, skip-mode information, and a skip-mode weight value for reference block generation.
If the overriding flag is set to '1', the scalable video decoding apparatus determines whether a mode of a block of a base layer of the current frame is a skip mode in operation S530.
If the block of the base layer is in the skip mode, the scalable video decoding apparatus overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with the skip-mode weight value in operation S540. The scalable video decoding apparatus generates a reference block for a block of an enhancement layer of the current frame to be decoded using a weighted sum in operation S550. More specifically, if the overriding flag is set to '1' and the block of the base layer is in the skip mode, the scalable video decoding apparatus generates the reference block by means of a weighted sum of a counterpart block of an enhancement layer of the reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the overriding flag is not set to '1' or the block mode of the base layer of the current frame is not the skip mode, the scalable video decoding apparatus generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
The scalable video decoding apparatus performs AR-FGS block decoding on the enhancement layer of the current frame based on the generated reference block in operation S560. While FIGS. 2 through 5 describe the case where the block mode of the base layer is the skip mode, it can be easily understood by those of ordinary skill in the art that a previous weight value may also be overridden with a new weight value when a block of a base layer of the current frame is within a specific range from a value that is predicted from reference pictures, i.e., blocks located to the left of, above and to the left of, and above the block of the base layer of the current frame according to the H.264 standard, as well as when the block of the base layer of the current frame is in the skip mode.
The skip-mode weight value used for overriding can be coded into a slice header using n-bit fixed-length coding or variable length coding.
FIG. 6 illustrates a syntax for expressing a scalable video encoding method according to a first exemplary embodiment of the present invention.
Referring to FIG. 6, scalable video coding is performed in the syntax of a slice header in scalable extension. Thus, a flag
"override_max_diff_ref_scala_for_zero_base_block_flag" indicating whether to override with a skip-mode weight value is coded. If the flag is T1 skip-mode weight value
overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using 2 bits. If the flag is 1O', "max_diff_ref_scale_for_skipped_base_block" is not coded.
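A minimal sketch of how a decoder might read this part of the slice header in scalable extension follows; read_bits(n) is an assumed bitstream-reader helper, not part of the syntax, and only the FIG. 6 (2-bit) embodiment is shown.

```c
/* Reads the 1-bit override flag and, only when it is 1, the 2-bit
 * skip-mode weight value overriding information (0..3), as in FIG. 6.        */
extern unsigned read_bits(int n);   /* assumed to return the next n bits */

struct skip_override {
    int flag;        /* override_max_diff_ref_scale_for_zero_base_block_flag */
    int weight_info; /* max_diff_ref_scale_for_skipped_base_block (0..3)     */
};

static struct skip_override parse_skip_override_2bit(void)
{
    struct skip_override so = { 0, 0 };
    so.flag = (int)read_bits(1);
    if (so.flag == 1)
        so.weight_info = (int)read_bits(2);   /* u(2) in this embodiment */
    return so;
}
```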
The skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" ranges from 0 to 3. For a value of 0, the weight value for the enhancement layer is set to 32/32; for 1, it is set to 31/32; for 2, to 30/32; and for 3, to 29/32. In order to code a block of the enhancement layer with respect to a skip-mode block of the base layer, if "override_max_diff_ref_scale_for_zero_base_block_flag" is 1, the skip-mode weight value "max_diff_ref_scale_for_skipped_base_block" overrides "max_diff_ref_scale_for_zero_base_block".
FIG. 7 illustrates a syntax for expressing a scalable video encoding method according to a second exemplary embodiment of the present invention.
Referring to FIG. 7, scalable video coding is performed in the syntax of a slice header in scalable extension and skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using 5 bits.
FIG. 8 illustrates a syntax for expressing a scalable video encoding method according to a third exemplary embodiment of the present invention.
Referring to FIG. 8, scalable video coding is performed in the syntax of a slice header in scalable extension and skip-mode weight value overriding information "max_diff_ref_scale_for_skipped_base_block" is coded using a variable-length code, e.g., an Exp-Golomb code used in H.264.
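For the third embodiment, the overriding information can be written with an unsigned Exp-Golomb code, ue(v), as in H.264. The following is a minimal encoder-side sketch under the assumption that write_bits(value, n) appends the low n bits of value to the slice header and that writing zero bits is a no-op; it is illustrative, not the normative bitstream writer.

```c
/* Writes code_num as an unsigned Exp-Golomb code ue(v): (b - 1) leading
 * zeros followed by the b-bit binary representation of (code_num + 1),
 * where b is the number of bits needed to represent (code_num + 1).          */
extern void write_bits(unsigned value, int n);   /* assumed bit writer */

static void write_ue(unsigned code_num)
{
    unsigned value = code_num + 1;
    int b = 0;
    for (unsigned v = value; v != 0; v >>= 1)
        b++;                      /* b = number of bits in (code_num + 1) */
    write_bits(0, b - 1);         /* prefix of leading zeros              */
    write_bits(value, b);         /* '1' followed by the info bits        */
}
```

Coding the value 30, for example, emits 000011111 (four leading zeros followed by the five-bit binary of 31); for the fixed-length embodiments of FIGS. 6 and 7, a single write_bits(value, 2) or write_bits(value, 5) call would be used instead.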
A pseudo code that is applied in scalable video coding standardization is as follows:
if( mb_type == P_Skip && override_max_diff_ref_scale_for_zero_base_block_flag == 1 ) {
    max_diff_ref_scale_for_zero_base_block = max_diff_ref_scale_for_skipped_base_block
}
FIGS. 9A to 9C illustrate the syntax of a slice header in scalable extension including a syntax for expressing a scalable video encoding method according to an exemplary embodiment of the present invention.
Referring to FIGS. 9A to 9C, the pseudo code is used as a syntax according to the scalable video coding international standard and semantics of parameters used in FIGS. 9A to 9C are as follows:
override_max_diff_ref_scale_for_zero_base_block_flag equal to 1 specifies that max_diff_ref_scale_for_skipped_base_block is present in the progressive slice of a key picture.
max_diff_ref_scale_for_skipped_base_block specifies the maximum scaling factor to be used for scaling the differential reference signal in constructing the inter prediction samples used in decoding the progressive slice of a key picture, when the transform block in the base layer is skipped. The value of max_diff_ref_scale_for_skipped_base_block shall be in the range of 0 to 3, inclusive.
A variable MaxDiffRefScaleSkippedBaseBlock is derived as follows.
The variable MaxDiffRefScaleSkippedBaseBlock is set equal to max_diff_ref_scale_for_skipped_base_block.
The following shows embodiments of a decoding process with respect to the pseudo code, i.e., a scaling process for differential interprediction samples of 4x4 luma blocks, a scaling process for differential interprediction samples for 8x8 luma blocks, and a scaling process for differential interprediction samples for chroma blocks.
Scaling process for differential interprediction samples of 4x4 luma blocks
- If numBaseSig is equal to 0, the following applies. - A scaling factor sF is derived as follows.
- If mb_type is equal to P_Skip and override_max_diff_ref_scale_for_zero_base_block_flag is equal to 1, sF is set equal to MaxDiffRefScaleSkippedBaseBlock.
- Otherwise, if ctx4x4Id is equal to 0, sF is set equal to MaxDiffRefScaleZeroBaseBlock. Otherwise (ctx4x4Id is not equal to 0), sF is set equal to max( 0, MaxDiffRefScaleZeroBaseBlock - 4 ).
- The 4x4 array diffPred4x4 of differential luma prediction samples is modified by diffPred4x4[ x, y ] = ( sF * diffPred4x4[ x, y ] + 16 ) >> 5 with x, y = 0..3
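A hedged C rendering of this 4x4 scaling step is given below. It follows the conditions quoted above; the argument names mirror the variables of the description, the mb_is_p_skip and override_flag inputs stand in for the mb_type and flag tests, and the sketch is illustrative rather than the normative decoding process.

```c
/* Scales the 4x4 array of differential luma prediction samples when
 * numBaseSig is equal to 0, deriving sF as in the description above.
 * diffPred4x4[x][y] mirrors the diffPred4x4[ x, y ] notation of the text.    */
static void scale_diff_pred_4x4(int diffPred4x4[4][4],
                                int mb_is_p_skip,
                                int override_flag,
                                int ctx4x4Id,
                                int MaxDiffRefScaleSkippedBaseBlock,
                                int MaxDiffRefScaleZeroBaseBlock)
{
    int sF;
    if (mb_is_p_skip && override_flag == 1)
        sF = MaxDiffRefScaleSkippedBaseBlock;      /* skip-mode override     */
    else if (ctx4x4Id == 0)
        sF = MaxDiffRefScaleZeroBaseBlock;
    else
        sF = (MaxDiffRefScaleZeroBaseBlock > 4)    /* max(0, value - 4)      */
             ? MaxDiffRefScaleZeroBaseBlock - 4 : 0;

    for (int x = 0; x < 4; x++)
        for (int y = 0; y < 4; y++)
            diffPred4x4[x][y] = (sF * diffPred4x4[x][y] + 16) >> 5;
}
```

The 8x8 luma and chroma cases that follow differ only in the array dimensions and in the absence of the ctx4x4Id branch.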
Scaling process for differential interprediction samples of 8x8 luma blocks Let numBaseSig be the number of values equal to 1 inside the 8x8 array sBC. Depending on numBaseSig the following applies. - If numBaseSig is equal to 0, the following applies.
- A scaling factor sF is derived as follows.
If mb_type is equal to P_Skip and override_max_diff_ref_scale_for_zero_base_block_flag is equal to 1, sF is set equal to MaxDiffRefScaleSkippedBaseBlock. Otherwise, sF is set equal to MaxDiffRefScaleZeroBaseBlock. - The 8x8 array diffPred8x8 of differential luma prediction samples is modified by diffPred8x8[ x, y ] = ( sF * diffPred8x8[ x, y ] + 16 ) >> 5 with x, y = 0..7
Scaling process for differential interprediction samples of chroma blocks Let numBaseSigDC be the number of values diffPred4x4[ chroma4x4BlkIdx ][ 0, 0 ] that are equal to 1 for chroma4x4BlkIdx = 0..numChroma4x4Blks - 1. Depending on numBaseSigDC the following applies. - If numBaseSigDC is equal to 0, for each 4x4 chroma block with chroma4x4BlkIdx = 0..numChroma4x4Blks - 1 the following applies.
- Let numBaseSigAC be the number of values equal to 1 inside the 4x4 array sBC[ chroma4x4BlkIdx ].
- Depending on numBaseSigAC the following applies.
- If numBaseSigAC is equal to 0, the following applies. - A scaling factor sF is derived as follows.
If mb_type is equal to P_Skip and override_max_diff_ref_scale_for_zero_base_block_flag is equal to 1, sF is set equal to MaxDiffRefScaleSkippedBaseBlock. Otherwise, sF is set equal to MaxDiffRefScaleZeroBaseBlock. The 4x4 array diffPred4x4[ chroma4x4BlkIdx ] is modified by diffPred4x4[ chroma4x4BlkIdx ][ x, y ] = ( sF * diffPred4x4[ chroma4x4BlkIdx ][ x, y ] + 16 ) >> 5 with x, y = 0..3
FIG. 10 is a block diagram schematically illustrating the internal structure of a scalable video encoding apparatus according to an exemplary embodiment of the present invention. In the following description of FIG. 10, description similar to that of previous embodiments will be omitted.
Referring to FIG. 10, the scalable video encoding apparatus according to the current embodiment of the present invention includes a mode determination unit 1010, a weight value overriding unit 1020, a reference block generation unit 1030, and an encoding unit 1040. The mode determination unit 1010 determines whether a counterpart block of a base layer of a current frame to be encoded, which corresponds to a block of an
enhancement layer of the current frame, is in a skip mode. The mode determination unit 1010 also determines whether to set a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag. The mode determination unit 1010 determines whether the counterpart block of the base layer is in the skip mode if it sets the overriding flag to '1', and does not determine whether the counterpart block of the base layer is in the skip mode if it does not set the overriding flag.
If the block of the base layer of the current frame is in the skip mode, the weight value overriding unit 1020 overrides a previous weight value that has been set for a block of an enhancement layer of a reference frame with a skip-mode weight value set greater than the previous weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame.
The reference block generation unit 1030 generates a reference block based on a weight value set for the block of the enhancement layer of the reference frame. If the mode determination unit 1010 sets the overriding flag to '1' and determines that the block of the base layer of the current frame is in the skip mode, the reference block generation unit 1030 generates the reference block by means of a weighted sum of a block of the enhancement layer of the reference frame to which a new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied. If the mode determination unit 1010 determines that the block of the base layer of the current frame is not in the skip mode, the reference block generation unit 1030 generates the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value. If the mode determination unit 1010 does not set the overriding flag to '1', the reference block generation unit 1030 generates the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value. The encoding unit 1040 performs AR-FGS encoding on a block of the enhancement layer of the current frame using the generated reference block, thereby generating a bitstream.
FIG. 11 is a block diagram schematically illustrating the internal structure of a scalable video decoding apparatus according to an exemplary embodiment of the
present invention. In the following description of FIG. 11, description similar to that of previous embodiments will be omitted.
Referring to FIG. 11, the scalable video decoding apparatus according to the current embodiment of the present invention includes a reception unit 1110, a mode determination unit 1120, a weight value overriding unit 1130, a reference block generation unit 1140, and a decoding unit 1150.
The reception unit 1110 receives a bitstream including a block that has been encoded in a skip mode.
The mode determination unit 1120 determines whether a block of a base layer of a current frame, which corresponds to a block of an enhancement layer of the current frame to be decoded, is in the skip mode. The mode determination unit 1120 also determines whether a flag indicating overriding of a previous weight value with a skip-mode weight value, which will hereinafter be referred to as an overriding flag, has been set in the received bitstream. The mode determination unit 1120 determines whether the block of the base layer is in the skip mode if it confirms that the overriding flag is set to '1', and does not determine whether the block of the base layer is in the skip mode if the overriding flag is not set to '1'.
If the block of the base layer of the current frame is in the skip mode, the weight value overriding unit 1130 extracts the skip-mode weight value from the bitstream and overrides a previous weight value set for a counterpart block of an enhancement layer of a reference frame corresponding to the block of the enhancement layer of the current frame with the extracted skip-mode weight value.
The reference block generation unit 1140 generates a reference block based on a weight value set for the block of the enhancement layer of the reference frame. If the mode determination unit 1120 confirms that the overriding flag is set to '1' and the block of the base layer of the current frame is in the skip mode, the reference block generation unit 1140 generates the reference block by means of a weighted sum of a counterpart block of the enhancement layer of the reference frame to which the skip-mode weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the skip-mode weight value is applied. If the mode determination unit 1120 determines that the block of the base layer of the current frame is not in the skip mode, the reference block generation unit 1140 generates the reference block based on a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using
the previous weight value. If the mode determination unit 1120 confirms that the overriding flag is not set to '1', the reference block generation unit 1140 generates the reference block by means of a weighted sum of the counterpart block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value.
The decoding unit 1150 performs AR-FGS block decoding on the block of the enhancement layer of the current frame using the generated reference block and reconstructs the block.
FIGS. 12 through 15 are graphs for comparing peak signal-to-noise ratio (PSNR) versus bitrate performance of a method for scalable video coding according to exemplary embodiments of the present invention with PSNR versus bitrate performance of a method suggested in JSVM 5.10.
Coding is performed using the syntax applied according to the scalable video coding international standard as illustrated in FIGS. 9A through 9C, together with the semantics and the decoding process described above, and the previous weight value parameter "max_diff_ref_scale_for_zero_base_coeff" is fixed to 18/32 for the upper layer. The weight value "max_diff_ref_scale_for_zero_base_block", which applies to zero base-layer blocks, is set for the upper layer to 28/32 for the graph marked with circles, to 16/32 for the graph marked with triangles, and to 8/32 for the graph marked with diamond shapes.
Referring to FIG. 12, after a Foreman CIF 15Hz sequence is coded as "IPPP...", performance comparison is carried out up to 2 FGS layers. When "max_diff_ref_scale_for_zero_base_block" is 28/32,
"max_diff_ref_scale_for_skipped_base_block" is set to 30/32. In the other cases, "max_diff_ref_scale_for_skipped_base_block" is set to 32/32. As can be seen from FIG. 12, a performance improvement of up to 0.15dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 28/32, a performance improvement of up to 1dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 16/32, and a performance improvement of up to 1.35dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 8/32.
Referring to FIG. 13, after a Bus CIF 15Hz sequence is coded as "IPPP...", performance comparison is carried out up to 2 FGS layers. When "max_diff_ref_scale_for_zero_base_block" is 28/32,
"max_diff_ref_scale_for_skipped_base_block" is set to 30/32. In the other cases,
"max_diff_ref_scale_for_skipped_base_block" is set to 32/32. As can be seen from FIG. 13, a performance improvement of up to 0.1dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 28/32, a performance improvement of up to 0.43dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 16/32, and a performance improvement of up to 0.73dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 8/32.
Referring to FIG. 14, after a Mobile CIF 15Hz sequence is coded as "IPPP...", performance comparison is carried out up to 2 FGS layers. When "max_diff_ref_scale_for_zero_base_block" is 28/32, "max_diff_ref_scale_for_skipped_base_block" is set to 29/32. In the other cases, "max_diff_ref_scale_for_skipped_base_block" is set to 32/32. As can be seen from FIG. 14, a performance improvement of up to 0.09dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 28/32, a performance improvement of up to 0.84dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 16/32, and a performance improvement of up to 2.07dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 8/32.
Referring to FIG. 15, after a Football CIF 15Hz sequence is coded as "IPPP...", performance comparison is carried out up to 2 FGS layers. When "max_diff_ref_scale_for_zero_base_block" is 28/32, "max_diff_ref_scale_for_skipped_base_block" is set to 29/32. In the other cases, "max_diff_ref_scale_for_skipped_base_block" is set to 32/32. As can be seen from FIG. 15, a performance improvement of up to 0.01dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 28/32, a performance improvement of up to 0.39dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 16/32, and a performance improvement of up to 0.47dB is obtained for "max_diff_ref_scale_for_zero_base_block" = 8/32.
As described above, scalable video coding efficiency can be improved by an encoding/decoding method and apparatus to which a method of generating a reference block according to the present invention is applied. While the scalable video encoding/decoding method has been described as being implemented in units of a macroblock or a block, it can be easily predicted by those of ordinary skill in the art that the present invention can also be applied to a scalable video encoding/decoding method implemented in units of a slice or a frame.
Although the foregoing description assumes a single FGS layer, it can also
be easily predicted by those of ordinary skill in the art that the present invention can also be applied to a case where there are two FGS layers or more.
Meanwhile, the present invention can be embodied as code that is readable by a computer on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording devices storing data that is readable by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves such as transmission over the Internet. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Also, functional programs, code, and code segments for implementing the present invention can be easily construed by programmers skilled in the art.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Claims
1. A scalable video encoding method, comprising:
(a) determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be encoded, is in a skip mode;
(b) overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode; and
(c) generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
2. The scalable video encoding method of claim 1, further comprising performing adaptive reference fine grain scalability (AR-FGS) encoding on the block of the enhancement layer of the current frame based on the generated reference block.
3. The scalable video encoding method of claim 1, wherein (a) comprises determining that the block of the base layer is in the skip mode if block data of the base layer of the current frame is the same as block data of a base layer of the reference frame in the temporal direction.
4. The scalable video encoding method of claim 1, wherein the new weight value is set greater than the previous weight value in order to improve the rate of the use of block data of the enhancement layer of the reference frame.
5. The scalable video encoding method of claim 1, wherein (c) comprises generating the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame to which the new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied.
6. The scalable video encoding method of claim 1, further comprising (d) generating the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the block of the base layer of the current frame is not in the skip mode.
7. The scalable video encoding method of claim 1, wherein (a) comprises: (a1) determining whether to set a flag indicating overriding of a previous weight value with a new weight value; and
(a2) determining whether the block of the base layer of the current frame is in the skip mode if the flag is set.
8. The scalable video encoding method of claim 7, further comprising (e) generating the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the flag is not set.
9. The scalable video encoding method of claim 1, wherein the new weight value is set to have a fixed length of predetermined bits or a variable length, together with the previous weight value, in a slice header.
10. A scalable video decoding method, comprising:
(a) determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be decoded, is in a skip mode;
(b) overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode; and
(c) generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
11. The scalable video decoding method of claim 10, further comprising performing adaptive reference fine grain scalability (AR-FGS) decoding on the block of the enhancement layer of the current frame based on the generated reference block.
12. The scalable video decoding method of claim 10, wherein (a) comprises determining whether the block of the base layer is in the skip mode based on skip-mode information included in a bitstream.
13. The scalable video decoding method of claim 10, wherein the new weight value is set greater than the previous weight value in order to improve the rate of the use of block data of the enhancement layer of the reference frame.
14. The scalable video decoding method of claim 10, wherein (b) comprises: (b1) extracting the new weight value included in the bitstream; and (b2) overriding the previous weight value set for the block of the enhancement layer of the reference frame with the new weight value.
15. The scalable video decoding method of claim 14, wherein the new weight value is extracted from a slice header included in the bitstream.
16. The scalable video decoding method of claim 10, wherein (c) comprises generating the reference block for the block of the enhancement layer of the current frame by means of a weighted sum of the block of the enhancement layer of the reference frame to which the new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied.
17. The scalable video decoding method of claim 10, further comprising (d) generating the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the block of the base layer of the current frame is not in the skip mode.
18. The scalable video decoding method of claim 10, wherein (a) comprises: (a1) determining whether a flag indicating overriding of a previous weight value with a new weight value has been set; and
(a2) determining whether the block of the base layer of the current frame is in the skip mode if the flag has been set.
19. The scalable video decoding method of claim 18, further comprising (f) generating the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the flag has not been set.
20. A scalable video encoding apparatus, comprising: a mode determination unit determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be encoded, is in a skip mode; a weight value overriding unit overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode; and a reference block generation unit generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
21. The scalable video encoding apparatus of claim 20, further comprising an encoding unit performing adaptive reference fine grain scalability (AR-FGS) encoding on the block of the enhancement layer of the current frame based on the generated reference block.
22. The scalable video encoding apparatus of claim 20, wherein the mode determination unit determines that the block of the base layer is in the skip mode if block data of the base layer of the current frame is the same as block data of a base layer of the reference frame in the temporal direction.
23. The scalable video encoding apparatus of claim 20, wherein the new weight value is set greater than the previous weight value in order to improve the rate of the use of block data of the enhancement layer of the reference frame.
24. The scalable video encoding apparatus of claim 20, wherein the reference block generation unit generates the reference block by means of a weighted sum of the block of the enhancement layer of the reference frame to which the new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied.
25. The scalable video encoding apparatus of claim 20, wherein the reference block generation unit generates the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the block of the base layer of the current frame is not in the skip mode.
26. The scalable video encoding apparatus of claim 20, wherein the mode determination unit determines whether to set a flag indicating overriding of a previous weight value with a new weight value and determines whether the block of the base layer of the current frame is in the skip mode if the flag has been set.
27. The scalable video encoding apparatus of claim 26, wherein the reference block generation unit generates the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the flag has not been set.
28. A scalable video decoding apparatus comprising: a mode determination unit determining whether a block of a base layer, which corresponds to a block of an enhancement layer of a current frame to be decoded, is in a skip mode; a weight value overriding unit overriding a previous weight value that has been set for a block of an enhancement layer of a reference frame with a new weight value, the block of the enhancement layer of the reference frame corresponding to the block of the enhancement layer of the current frame, if the block of the base layer is in the skip mode; and a reference block generation unit generating a reference block for the block of the enhancement layer of the current frame based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the new weight value.
29. The scalable video decoding apparatus of claim 28, further comprising a decoding unit performing adaptive reference fine grain scalability (AR-FGS) decoding on the block of the enhancement layer of the current frame based on the generated reference block.
30. The scalable video decoding apparatus of claim 28, wherein the mode determination unit determines whether the block of the base layer is in the skip mode based on skip-mode information included in a bitstream.
31. The scalable video decoding apparatus of claim 28, wherein the new weight value is greater than the previous weight value in order to improve the rate of the use of block data of the enhancement layer of the reference frame.
32. The scalable video decoding apparatus of claim 28, wherein the weight value overriding unit extracts the new weight value included in the bitstream and overrides the previous weight value of the block of the enhancement layer of the reference frame with the new weight value.
33. The scalable video decoding apparatus of claim 28, wherein the reference block generation unit generates the reference block for the block of the enhancement layer of the current frame by means of a weighted sum of the block of the enhancement layer of the reference frame to which the new weight value is applied and the block of the base layer of the current frame to which a weight value calculated from the new weight value is applied.
34. The scalable video decoding apparatus of claim 28, wherein the reference block generation unit generates the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the block of the base layer of the current frame is not in the skip mode.
35. The scalable video decoding apparatus of claim 28, wherein the mode determination unit determines whether a flag indicating overriding of a previous weight value with a new weight value has been set and determines whether the block of the base layer of the current frame is in the skip mode if the flag has been set.
36. The scalable video decoding apparatus of claim 35, wherein the reference block generation unit generates the reference block based on the block of the enhancement layer of the reference frame and the block of the base layer of the current frame using the previous weight value if the flag has not been set.
37. A computer-readable recording medium having embodied thereon a program for executing the method of claims 1 through 19.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/305,420 US8630352B2 (en) | 2006-07-04 | 2007-07-04 | Scalable video encoding/decoding method and apparatus thereof with overriding weight value in base layer skip mode |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20060062611 | 2006-07-04 | ||
| KR10-2006-0062611 | 2006-07-04 | ||
| KR10-2007-0040969 | 2007-04-26 | ||
| KR1020070040969A KR20080004340A (en) | 2006-07-04 | 2007-04-26 | Scalable coding method of video data and apparatus therefor |
| KR10-2007-0067031 | 2007-07-04 | ||
| KR1020070067031A KR101352979B1 (en) | 2006-07-04 | 2007-07-04 | Scalable video encoding/decoding method and apparatus thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008004816A1 true WO2008004816A1 (en) | 2008-01-10 |
Family
ID=38894744
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2007/003256 Ceased WO2008004816A1 (en) | 2006-07-04 | 2007-07-04 | Scalable video encoding/decoding method and apparatus thereof |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2008004816A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030215011A1 (en) * | 2002-05-17 | 2003-11-20 | General Instrument Corporation | Method and apparatus for transcoding compressed video bitstreams |
| WO2006006777A1 (en) * | 2004-07-15 | 2006-01-19 | Samsung Electronics Co., Ltd. | Method and apparatus for predecoding and decoding bitstream including base layer |
| US20060062299A1 (en) * | 2004-09-23 | 2006-03-23 | Park Seung W | Method and device for encoding/decoding video signals using temporal and spatial correlations between macroblocks |
| US20060078053A1 (en) * | 2004-10-07 | 2006-04-13 | Park Seung W | Method for encoding and decoding video signals |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8340177B2 (en) | 2004-07-12 | 2012-12-25 | Microsoft Corporation | Embedded base layer codec for 3D sub-band coding |
| US8442108B2 (en) | 2004-07-12 | 2013-05-14 | Microsoft Corporation | Adaptive updates in motion-compensated temporal filtering |
| US8374238B2 (en) | 2004-07-13 | 2013-02-12 | Microsoft Corporation | Spatial scalability in 3D sub-band decoding of SDMCTF-encoded video |
| US7956930B2 (en) | 2006-01-06 | 2011-06-07 | Microsoft Corporation | Resampling and picture resizing operations for multi-resolution video coding and decoding |
| US9319729B2 (en) | 2006-01-06 | 2016-04-19 | Microsoft Technology Licensing, Llc | Resampling and picture resizing operations for multi-resolution video coding and decoding |
| US8953673B2 (en) | 2008-02-29 | 2015-02-10 | Microsoft Corporation | Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers |
| US8964854B2 (en) | 2008-03-21 | 2015-02-24 | Microsoft Corporation | Motion-compensated prediction of inter-layer residuals |
| US9571856B2 (en) | 2008-08-25 | 2017-02-14 | Microsoft Technology Licensing, Llc | Conversion operations in scalable video encoding and decoding |
| US10250905B2 (en) | 2008-08-25 | 2019-04-02 | Microsoft Technology Licensing, Llc | Conversion operations in scalable video encoding and decoding |
| US8213503B2 (en) | 2008-09-05 | 2012-07-03 | Microsoft Corporation | Skip modes for inter-layer residual video coding and decoding |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR101106856B1 (en) | Video encoding method and video encoding system | |
| US9578343B2 (en) | Scalable video encoder/decoder with drift control | |
| KR100679035B1 (en) | Deblock filtering method considering intra bit mode, and multi-layer video encoder / decoder using the method | |
| KR101454495B1 (en) | Method and apparatus for encoding and/or decoding video data using adaptive prediction order for spatial and bit depth prediction | |
| EP2428042B1 (en) | Scalable video coding method, encoder and computer program | |
| US8331453B2 (en) | Method for modeling coding information of a video signal to compress/decompress the information | |
| US7245662B2 (en) | DCT-based scalable video compression | |
| WO2008004816A1 (en) | Scalable video encoding/decoding method and apparatus thereof | |
| KR20110056388A (en) | Video Encoding System and Method Using Adaptive Segmentation | |
| US10931945B2 (en) | Method and device for processing prediction information for encoding or decoding an image | |
| KR20010080644A (en) | System and Method for encoding and decoding enhancement layer data using base layer quantization data | |
| EP2001242A9 (en) | Error control system, method, encoder and decoder for video coding | |
| Nguyen et al. | Adaptive downsampling/upsampling for better video compression at low bit rate | |
| US20040179606A1 (en) | Method for transcoding fine-granular-scalability enhancement layer of video to minimized spatial variations | |
| US8184702B2 (en) | Method for encoding/decoding a video sequence based on hierarchical B-picture using adaptively-adjusted GOP structure | |
| EP1811785A2 (en) | Method and apparatus for video encoding/decoding based on orthogonal transform and vector quantization | |
| US8630352B2 (en) | Scalable video encoding/decoding method and apparatus thereof with overriding weight value in base layer skip mode | |
| KR101352979B1 (en) | Scalable video encoding/decoding method and apparatus thereof | |
| US8422810B2 (en) | Method of redundant picture coding using polyphase downsampling and the codec using the method | |
| US20030118099A1 (en) | Fine-grain scalable video encoder with conditional replacement | |
| EP1720356A1 (en) | A frequency selective video compression | |
| US20030118113A1 (en) | Fine-grain scalable video decoder with conditional replacement | |
| JPH09149420A (en) | Method and device for compressing dynamic image | |
| Benierbah et al. | Generalized hybrid intra and Wyner-Ziv video coding | |
| Wang | Robust image and video coding with adaptive rate control |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07793191; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 12305420; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | NENP | Non-entry into the national phase | Ref country code: RU |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 07793191; Country of ref document: EP; Kind code of ref document: A1 |