HK1210556B - Image encoding device, image encoding method, and image decoding device - Google Patents
Image encoding device, image encoding method, and image decoding device
- Publication number
- HK1210556B (application HK15111166.8A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- filter
- image
- decoded image
- variable length
- processing
Description
The present invention is a divisional application of the application entitled "image encoding apparatus, image decoding apparatus, image encoding method, and image decoding method", application No. 201080027052.8, filed on May 25, 2010.
Technical Field
The present invention relates to an image encoding device and an image encoding method for compression-encoding and transmitting an image, and an image decoding device and an image decoding method for decoding an image from encoded data transmitted from the image encoding device.
Background
Conventionally, in international standard video coding systems such as MPEG and ITU-T H.26x, an input video frame is divided into macroblocks of 16 × 16 pixels each, motion-compensated prediction is performed, and the prediction error signal is then orthogonally transformed and quantized in units of blocks to achieve information compression.
However, when the compression ratio becomes high, the quality of the prediction reference image used for motion-compensated prediction degrades, and there is a problem that compression efficiency is impaired.
Therefore, in the MPEG-4 AVC/H.264 coding scheme (see non-patent document 1), the block distortion of the prediction reference image caused by quantization of the orthogonal transform coefficients is removed by applying an in-loop deblocking filter.
Here, fig. 17 is a configuration diagram showing the image coding apparatus disclosed in non-patent document 1.
In this image encoding device, if an image signal to be encoded is input to the block dividing unit 101, the image signal is divided into macroblock units, and the macroblock unit image signal is output to the prediction unit 102 as a divided image signal.
Upon receiving the divided image signal from the block dividing unit 101, the prediction unit 102 predicts the image signal of each color component in the intra frame or the inter frame and calculates a prediction error signal.
In particular, when motion-compensated prediction is performed between frames, a motion vector is searched for in units of a macroblock itself or subblocks into which the macroblock is divided more finely.
Then, by using the motion vector, motion compensation prediction is performed on the reference image signal stored in the memory 107 to generate a motion compensation predicted image, and a difference between a prediction signal representing the motion compensation predicted image and the divided image signal is obtained to calculate a prediction error signal.
The prediction unit 102 outputs the parameter for generating the prediction signal determined when the prediction signal is obtained to the variable length coding unit 108.
The parameters for generating the prediction signal include, for example, information such as an intra prediction mode indicating how to perform spatial prediction in a frame, and a motion vector indicating an amount of motion between frames.
Upon receiving the prediction error signal from the prediction unit 102, the compression unit 103 performs DCT (discrete cosine transform) processing on the prediction error signal to remove signal correlation, and then performs quantization to obtain compressed data.
Upon receiving the compressed data from the compression unit 103, the local decoding unit 104 performs inverse quantization on the compressed data and performs inverse DCT processing, thereby calculating a prediction error signal corresponding to the prediction error signal output from the prediction unit 102.
Upon receiving the prediction error signal from the local decoding unit 104, the adder 105 adds the prediction error signal and the prediction signal output from the prediction unit 102 to generate a local decoded image.
The loop filter 106 removes block distortion superimposed on the local decoded image signal representing the local decoded image generated by the adder 105, and stores the local decoded image signal from which the distortion has been removed in the memory 107 as a reference image signal.
Upon receiving the compressed data from the compression unit 103, the variable length coding unit 108 entropy-codes the compressed data and outputs a bit stream as a result of the coding.
When outputting a bit stream, the variable length coding unit 108 multiplexes the prediction signal generation parameter output from the prediction unit 102 into the bit stream and outputs the multiplexed parameter.
Here, in the scheme disclosed in non-patent document 1, the loop filter 106 determines the smoothing intensity for the pixels around DCT block boundaries based on information such as the quantization coarseness, the coding mode, and the degree of motion-vector variation, and reduces the distortion generated at block boundaries.
This improves the quality of the reference image signal, and can improve the efficiency of motion compensated prediction in subsequent encoding.
On the other hand, the scheme disclosed in non-patent document 1 has the problem that, as coding is performed at higher compression ratios, high-frequency components of the signal are lost and the entire screen is excessively smoothed, blurring the video.
In order to solve this problem, non-patent document 2 proposes a technique of applying a Wiener filter as the loop filter 106, configuring the loop filter 106 so that the squared-error distortion between the image signal to be encoded (the original image signal) and the corresponding reference image signal is minimized.
Fig. 18 is an explanatory diagram illustrating a principle of improving the quality of a reference image signal by a wiener filter in the image encoding device disclosed in non-patent document 2.
In fig. 18, a signal s corresponds to an image signal to be encoded which is input to the block dividing unit 101 in fig. 17, and a signal s' corresponds to a local decoded image signal which is output from the adder 105 in fig. 17 or a local decoded image signal in which distortion generated at a block boundary by the loop filter 106 in non-patent document 1 is reduced.
That is, the signal s' is a signal in which coding distortion (noise) e is superimposed on the signal s.
A Wiener filter is defined as a filter applied to the signal s' so that the coding distortion (noise) e is minimized in the sense of squared-error distortion. In general, the filter coefficient w can be obtained from the autocorrelation matrix R_{s's'} of the signal s' and the cross-correlation matrix R_{ss'} of the signals s and s' by the following equation (1):

w = R_{s's'}^(-1) · R_{ss'} … (1)

The size of the matrices R_{s's'} and R_{ss'} corresponds to the number of taps of the filter to be found.
By performing Wiener filtering with the filter coefficient w, a signal of improved quality is obtained as a signal equivalent to the reference image signal.
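As an illustrative sketch (not part of the patent itself), the coefficients of equation (1) can be estimated from sample signals; the function name, the windowing, and the use of NumPy below are assumptions made for this example:

```python
import numpy as np

def wiener_coefficients(s, s_prime, taps):
    """Estimate w = R_{s's'}^{-1} . R_{ss'} (equation (1)) from samples.

    s       : original signal samples
    s_prime : degraded signal s' = s + coding noise e
    taps    : number of filter taps to design
    """
    s = np.asarray(s, dtype=float)
    s_prime = np.asarray(s_prime, dtype=float)
    n = len(s)
    # sliding windows of s' serve as the filter input vectors
    X = np.array([s_prime[i:i + taps] for i in range(n - taps + 1)])
    # target: the original sample aligned with each window's centre
    y = s[taps // 2: taps // 2 + len(X)]
    R = X.T @ X   # sample autocorrelation matrix R_{s's'} (taps x taps)
    p = X.T @ y   # sample cross-correlation vector R_{ss'}
    return np.linalg.solve(R, p)
```

A quick sanity check: if s' equals s, the solution collapses to a unit impulse at the centre tap, confirming the alignment of windows and targets.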
In non-patent document 2, filter coefficients w are obtained for several tap numbers over the entire frame of the image to be encoded, and the filter with the optimal number of taps is determined under a rate-distortion criterion from the code amount of the filter coefficients w and the distortion after filtering. The signal s' is then further divided into blocks of several sizes, whether or not to apply the Wiener filter with the optimal tap number obtained above is selected for each block, and filter ON/OFF information is transmitted for each block.
This can suppress the amount of additional code required for the wiener filter processing, and improve the quality of a predicted image.
[ non-patent document 1 ]
MPEG-4 AVC (ISO/IEC 14496-10)/ITU-T H.264 standard
[ non-patent document 2 ]
T. Chujoh, G. Yasuda, N. Wada, T. Watanabe, T. Yamakage, "Block-based Adaptive Loop Filter", VCEG-AI18, ITU-T SG16/Q.6 meeting, July 2008
Disclosure of Invention
Since the conventional image encoding apparatus is configured as described above, one Wiener filter is designed for the entire frame of the image to be encoded, and whether or not to apply the Wiener filter processing is selected for each of the plurality of blocks constituting the frame. However, since the same Wiener filter is applied to every block, that filter is not necessarily optimal for a given block, and the image quality cannot always be sufficiently improved.
The present invention has been made to solve the above-described problems, and an object of the present invention is to provide an image encoding device, an image decoding device, an image encoding method, and an image decoding method that can improve the accuracy of improvement in image quality.
In the image encoding device of the present invention, the filter operation means includes: a region classification unit that extracts feature values of a plurality of regions constituting the local decoded image obtained by the local decoding unit, and classifies a category to which each region belongs based on the feature values; and a filter design processing unit that generates, for each class to which 1 or more regions out of a plurality of regions constituting the local decoded image belong, a filter that minimizes an error between the local decoded image and the input image in the 1 or more regions belonging to the class, and compensates for distortion superimposed on the region using the filter.
According to the present invention, the filter action unit includes: a region classification unit that extracts feature values of a plurality of regions constituting the local decoded image obtained by the local decoding unit, and classifies a category to which each region belongs based on the feature values; and a filter design processing unit that generates a filter that minimizes an error between the input image and the local decoded image in 1 or more regions belonging to the category, for each category to which 1 or more regions among the plurality of regions constituting the local decoded image belong, and compensates distortion superimposed on the region using the filter.
Drawings
Fig. 1 is a configuration diagram showing an image coding apparatus according to embodiment 1 of the present invention.
Fig. 2 is a block diagram showing the loop filter 6 of the image encoding device according to embodiment 1 of the present invention.
Fig. 3 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 1 of the present invention.
Fig. 4 is an explanatory diagram showing an example of a category to which 4 regions (region a, region B, region C, and region D) constituting a local decoded image are assigned.
Fig. 5 is an explanatory diagram showing 16 blocks (K) constituting a local decoded image.
Fig. 6 is an explanatory diagram showing an example of the bit stream generated by the variable length coding unit 8.
Fig. 7 is a block diagram showing an image decoding device according to embodiment 1 of the present invention.
Fig. 8 is a block diagram showing the loop filter 25 of the image decoding apparatus according to embodiment 1 of the present invention.
Fig. 9 is a block diagram showing the loop filter 25 of the image decoding apparatus according to embodiment 1 of the present invention.
Fig. 10 is a flowchart showing the processing contents of the loop filter 25 of the image decoding apparatus according to embodiment 1 of the present invention.
Fig. 11 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 2 of the present invention.
Fig. 12 is an explanatory diagram showing an example of selection of wiener filters in a plurality of blocks (K) constituting a local decoded image.
Fig. 13 is a flowchart showing the processing contents of the loop filter 25 of the image decoding apparatus according to embodiment 2 of the present invention.
Fig. 14 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 3 of the present invention.
Fig. 15 is a flowchart showing the processing content of the loop filter 6 in the 1 st frame.
Fig. 16 is a flowchart showing the processing contents of the loop filter 6 after the 2 nd frame.
Fig. 17 is a block diagram showing an image coding apparatus disclosed in non-patent document 1.
Fig. 18 is an explanatory diagram showing a principle of improving the quality of the reference image signal by the wiener filter.
Detailed Description
Hereinafter, specific embodiments will be described with reference to the drawings in order to explain the present invention in more detail.
Embodiment 1.
Fig. 1 is a configuration diagram showing an image coding apparatus according to embodiment 1 of the present invention.
In fig. 1, the block dividing unit 1 performs the following processes: an image signal to be encoded as an input image is divided into macroblock units, and the macroblock unit image signal is output to the prediction unit 2 as a divided image signal.
Upon receiving the divided image signal from the block dividing unit 1, the prediction unit 2 performs prediction in a frame or between frames with respect to the divided image signal to generate a prediction signal.
In particular, when motion-compensated prediction is performed between frames, a motion vector is detected, in units of the macroblock itself or of subblocks into which the macroblock is more finely divided, from the divided image signal and the reference image signal representing the reference image stored in the memory 7, and a prediction signal representing a predicted image is generated based on the motion vector and the reference image signal.
After the prediction signal is generated, a process of calculating a prediction error signal which is a difference between the divided image signal and the prediction signal is performed.
The prediction unit 2 determines a parameter for generating a prediction signal when generating a prediction signal, and outputs the parameter for generating a prediction signal to the variable length coding unit 8.
The parameters for generating the prediction signal include, for example, information such as an intra prediction mode indicating how to perform spatial prediction in a frame, and a motion vector indicating an amount of motion between frames.
The block dividing unit 1 and the prediction unit 2 constitute a prediction processing unit.
The compression unit 3 performs the following processes: DCT (discrete cosine transform) processing is performed on the prediction error signal calculated by the prediction unit 2 to calculate a DCT coefficient, the DCT coefficient is quantized, and compressed data that is the quantized DCT coefficient is output to the local decoding unit 4 and the variable length coding unit 8. The compression unit 3 constitutes differential image compression means.
The local decoding unit 4 performs the following processing: the compressed data output from the compression unit 3 is inversely quantized and subjected to inverse DCT processing, thereby calculating a prediction error signal corresponding to the prediction error signal output from the prediction unit 2.
The adder 5 performs the following processing: the prediction error signal calculated by the local decoding unit 4 and the prediction signal generated by the prediction unit 2 are added to generate a local decoded image signal representing a local decoded image.
The local decoding unit 4 and the adder 5 constitute local decoding means.
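The division of labour among the compression unit 3, the local decoding unit 4, and the adder 5 can be illustrated with a minimal quantize/dequantize sketch (the DCT/inverse-DCT stage is omitted, and all names below are invented for this example):

```python
def quantize(coeffs, step):
    # compression unit: lossy quantization of transform coefficients
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # local decoding unit: inverse quantization of the compressed data
    return [lv * step for lv in levels]

def reconstruct(prediction, decoded_error):
    # adder: local decoded image = prediction signal + decoded error
    return [p + e for p, e in zip(prediction, decoded_error)]
```

Because the decoder performs the same dequantize-and-add steps on the same compressed data, encoder and decoder derive identical reference images despite the lossy quantization.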
The loop filter 6 performs the following processing: the filter processing for compensating for the distortion superimposed on the local decoded image signal generated by the adder 5 is performed, the local decoded image signal after the filter processing is output to the memory 7 as a reference image signal, and information of the filter used when the filter processing is performed is output to the variable length coding unit 8. The loop filter 6 constitutes a filter operation means.
The memory 7 is a recording medium for storing the reference image signal output from the loop filter 6.
The variable length coding unit 8 performs the following processing: the compressed data output from the compression unit 3, the filter information output from the loop filter 6, and the parameter for prediction signal generation output from the prediction unit 2 are entropy-encoded, and a bit stream indicating the encoding results thereof is generated. The variable length coding unit 8 constitutes a variable length coding unit.
Fig. 2 is a block diagram showing the loop filter 6 of the image encoding device according to embodiment 1 of the present invention.
In fig. 2, a frame memory 11 is a recording medium that stores 1 frame of the local decoded image signal generated by the adder 5.
The region classification unit 12 performs the following processing: feature values of a plurality of regions constituting a local decoded image represented by a local decoded image signal of 1 frame stored in the frame memory 11 are extracted, and the class to which each region belongs is classified based on the feature values.
The filter design processing unit 13 performs the following processing: for each class to which 1 or more regions out of a plurality of regions constituting a local decoded image belong, a wiener filter is generated that minimizes an error between an image signal to be encoded in the 1 or more regions belonging to the class and the local decoded image signal, and distortion superimposed on the region is compensated for using the wiener filter.
The filter design processing unit 13 performs a process of outputting filter information on the wiener filter to the variable length encoding unit 8.
Next, the operation will be described.
When an image signal to be encoded is input, the block dividing unit 1 divides the image signal into macroblock units and outputs the macroblock unit image signal to the prediction unit 2 as a divided image signal.
Upon receiving the divided image signal from the block dividing unit 1, the prediction unit 2 detects prediction-signal-generation parameters for predicting the divided image signal either within the frame or between frames. Then, a prediction signal representing a predicted image is generated using the prediction-signal-generation parameters.
In particular, a motion vector, which is a parameter for generating a prediction signal for performing prediction between frames, is detected from the divided image signal and the reference image signal stored in the memory 7.
Then, if a motion vector is detected, the prediction unit 2 performs motion-compensated prediction on the reference image signal using the motion vector, thereby generating a prediction signal.
If a prediction signal representing a predicted image is generated, the prediction unit 2 calculates a prediction error signal which is the difference between the prediction signal and the divided image signal, and outputs the prediction error signal to the compression unit 3.
The prediction unit 2 determines a parameter for generating a prediction signal when generating the prediction signal, and outputs the parameter for generating the prediction signal to the variable length coding unit 8.
The parameters for generating the prediction signal include, for example, information such as an intra prediction mode indicating how to perform spatial prediction in a frame, and a motion vector indicating an amount of motion between frames.
Upon receiving the prediction error signal from the prediction unit 2, the compression unit 3 performs DCT (discrete cosine transform) processing on the prediction error signal to calculate a DCT coefficient, and quantizes the DCT coefficient.
Then, the compression unit 3 outputs the compressed data, which is the quantized DCT coefficient, to the local decoding unit 4 and the variable length coding unit 8.
Upon receiving the compressed data from the compression unit 3, the local decoding unit 4 performs inverse quantization on the compressed data and performs inverse DCT processing to calculate a prediction error signal corresponding to the prediction error signal output from the prediction unit 2.
If the local decoding unit 4 calculates the prediction error signal, the adder 5 adds the prediction error signal and the prediction signal generated by the prediction unit 2 to generate a local decoded image signal representing a local decoded image.
When the adder 5 generates the local decoded image signal, the loop filter 6 performs a filtering process for compensating for distortion superimposed on the local decoded image signal, and stores the local decoded image signal after the filtering process in the memory 7 as a reference image signal.
The loop filter 6 outputs information of the filter used when the filtering process is performed to the variable length coding unit 8.
The variable length coding unit 8 performs the following processing: the compressed data output from the compression unit 3, the filter information output from the loop filter 6, and the parameter for prediction signal generation output from the prediction unit 2 are entropy-encoded, and a bit stream indicating the encoding results thereof is generated.
Here, although the prediction signal generation parameter is also entropy-encoded, the prediction signal generation parameter may be multiplexed into the generated bit stream and output without entropy-encoding the prediction signal generation parameter.
The processing content of the loop filter 6 will be specifically described below.
Fig. 3 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 1 of the present invention.
First, the frame memory 11 of the loop filter 6 stores the local decoded image signal generated by the adder 5 for 1 frame.
The region classification unit 12 extracts feature values of a plurality of regions constituting the local decoded image represented by the 1-frame local decoded image signal stored in the frame memory 11, and classifies the category to which each region belongs based on the feature values (step ST 1).
For example, for each region (a block of an arbitrary size of M × M pixels, where M is an integer of 1 or more), features such as the variance of the local decoded image signal in the region, the DCT coefficients, the motion vectors, and the quantization parameters of the DCT coefficients are extracted, and classification is performed based on this information.
When assigning each of the plurality of regions to one of class 1 to class N (N is an integer of 1 or more), for example when the variance of the local decoded image signal in the region is used as the feature amount, N-1 thresholds th_1 < th_2 < … < th_{N-1} are prepared in advance, and the variance of the local decoded image signal is compared with these thresholds to determine the class to which the region belongs.
For example, if the variance of the local decoded image signal is at least th_{N-3} and less than th_{N-2}, the region is assigned to class N-2; if the variance is at least th_2 and less than th_3, the region is assigned to class 3.
Here, N-1 thresholds are prepared in advance, but these thresholds may be dynamically changed for each sequence and each frame.
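The threshold comparison described above can be sketched as follows (a hypothetical helper, not from the patent; class indices are 1-based):

```python
from bisect import bisect_right

def classify_region(variance, thresholds):
    """Assign a region to one of classes 1..N from its variance.

    thresholds holds the N-1 values th_1 < th_2 < ... < th_{N-1};
    a variance of at least th_k and below th_{k+1} maps to class k+1.
    """
    return bisect_right(thresholds, variance) + 1
```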
For example, when a motion vector in the region is used as the feature amount, an average vector or an intermediate vector of the motion vector is calculated, and the class to which the region belongs is classified according to the magnitude or direction of the vector.
Here, the average vector is obtained by averaging each component (x component, y component) of the motion vectors, taking the results as the vector components.
The intermediate vector is obtained by taking the median of each component (x component, y component) of the motion vectors, taking the results as the vector components.
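A minimal sketch of the two vector summaries (function names assumed; motion vectors given as (x, y) tuples):

```python
from statistics import median

def average_vector(mvs):
    # component-wise mean of the motion vectors
    n = len(mvs)
    return (sum(v[0] for v in mvs) / n,
            sum(v[1] for v in mvs) / n)

def intermediate_vector(mvs):
    # component-wise median ("intermediate value") of the motion vectors
    return (median(v[0] for v in mvs),
            median(v[1] for v in mvs))
```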
If the region classification unit 12 assigns the plurality of regions to any one of the classes 1 to N, the filter design processing unit 13 generates a wiener filter that minimizes an error between the local decoded image signal and the image signal to be encoded in 1 or more regions belonging to the class, for each of the classes to which 1 or more regions out of the plurality of regions constituting the local decoded image belong (steps ST2 to ST 8).
For example, as shown in fig. 4, when the local decoded image is composed of 4 regions (region a, region B, region C, and region D), if the region a and the region C are assigned to the category 3, the region B is assigned to the category 5, and the region D is assigned to the category 6, a wiener filter is generated that minimizes an error between the local decoded image signal and the image signal to be encoded in the region a and the region C belonging to the category 3.
Further, a wiener filter is generated which minimizes an error between the image signal to be encoded in the region B belonging to the category 5 and the local decoded image signal, and a wiener filter is generated which minimizes an error between the image signal to be encoded in the region D belonging to the category 6 and the local decoded image signal.
In addition, when generating a wiener filter that minimizes the error, for example, when filter design is performed with various numbers of taps, the filter design processing unit 13 calculates costs as described below, and determines the number of taps and coefficient values of the filter that minimizes the costs.
Cost = D + λ·R … (2)
where D is the sum of squared errors between the image signal to be encoded in the region to which the filter is applied and the local decoded image signal after the filtering process, λ is a constant, and R is the amount of code generated by the loop filter 6.
Here, the cost is expressed by the equation (2), but this is merely an example, and for example, only the sum of squared errors D may be used as the cost.
Further, instead of the sum of squared errors D, other evaluation values such as the sum of absolute values of errors may be used.
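A sketch of equation (2) and of choosing the candidate (e.g. a tap number) with the lowest cost; the candidate representation below is an assumption made for the example:

```python
def rd_cost(orig, filtered, bits, lam):
    # equation (2): Cost = D + lambda * R
    d = sum((a - b) ** 2 for a, b in zip(orig, filtered))
    return d + lam * bits

def best_candidate(orig, candidates, lam):
    # candidates: list of (filtered_signal, code_amount_in_bits)
    costs = [rd_cost(orig, f, r, lam) for f, r in candidates]
    return min(range(len(costs)), key=costs.__getitem__)
```

With lam = 0 the selection reduces to pure squared-error minimization, matching the remark that the sum of squared errors D alone may serve as the cost.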
If the filter design processing unit 13 generates a Wiener filter for each class to which 1 or more regions belong, it determines, for each of a plurality of blocks constituting the local decoded image (for example, local regions smaller than the regions A to D constituting the local decoded image), whether the block is one to which the filtering process should be applied (steps ST9 to ST16).
That is, the filter design processing unit 13 compares, for each of a plurality of blocks constituting the local decoded image, the error between the image signal to be encoded and the local decoded image signal in the block before and after the filtering process.
For example, as shown in fig. 5, when the local decoded image is composed of 16 blocks K (K = 1, 2, …, 16), the sum of squared errors between the image signal to be encoded and the local decoded image signal in block K before and after the filtering process is compared for each block.
In addition, block 1, block 2, block 5, and block 6 of fig. 5 correspond to region a of fig. 4, block 3, block 4, block 7, and block 8 correspond to region B, block 9, block 10, block 13, and block 14 correspond to region C, and block 11, block 12, block 15, and block 16 correspond to region D.
Here, the sum of squared errors before and after the filtering process is compared, but the cost (D + λ · R) shown by equation (2) before and after the filtering process may be compared, or the sum of absolute values of errors before and after the filtering process may be compared.
If the sum of squared errors after the filtering process is smaller than the sum of squared errors before the filtering process, the filter design processing unit 13 determines that the block (K) is a block to which the filtering process is applied.
On the other hand, if the sum of squared errors after the filtering process is larger than the sum of squared errors before the filtering process, it is determined that the block (K) is a block to which the filtering process is not applied.
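The per-block decision can be sketched as follows (blocks given as flat pixel lists; names are assumptions for this example):

```python
def sse(a, b):
    # sum of squared errors between two pixel lists
    return sum((x - y) ** 2 for x, y in zip(a, b))

def block_on_off_flags(orig_blocks, unfiltered_blocks, filtered_blocks):
    # True where filtering lowers the squared error against the original
    return [sse(o, f) < sse(o, u)
            for o, u, f in zip(orig_blocks, unfiltered_blocks,
                               filtered_blocks)]
```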
Then, the filter design processing unit 13 calculates the cost of performing the filtering process when the cost is the minimum in steps ST1 to ST16 and the cost of not performing the filtering process for the entire frame, and determines whether or not to perform the filtering process for the entire frame (steps ST17 to ST 18).
For a frame determined in step ST18 to be subjected to the filtering process, the flag frame_filter_on_off_flag is set to 1 (ON), the filtering process that yielded the minimum cost in steps ST1 to ST16 is performed, and the local decoded image signal after the filtering process is output to the memory 7 as the reference image signal (steps ST19 to ST20).
For example, if the region including the block (K) is the region B and the type to which the region B belongs is the type 5, the filter processing in the block (K) is performed using the wiener filter of the type 5, and the local decoded image signal after the filter processing is output to the memory 7 as the reference image signal.
In this case, if the minimum-cost configuration found in steps ST1 to ST16 selects whether to perform the filtering process on a per-block basis (the flag block_filter_on_off_flag is 1 (ON)), the filtering process is not performed on a block (K) determined not to be filtered, and the local decoded image signal before the filtering process is output as it is to the memory 7 as the reference image signal. On the other hand, if the minimum-cost configuration does not select filtering per block (block_filter_on_off_flag is 0 (OFF)), all local decoded image signals in the frame are filtered using the Wiener filter of the class to which the region containing each signal belongs, and the local decoded image signals after the filtering process are output to the memory 7 as the reference image signal.
For a frame determined in step ST18 not to be subjected to the filtering process, the flag frame_filter_on_off_flag is set to 0 (OFF), and the local decoded image signal before the filtering process is output as it is to the memory 7 as the reference image signal (steps ST21 to ST22).
In steps ST2 to ST22 of the flowchart, "min_cost" is a variable storing the minimum value of the cost, "i" is the index and loop counter of the filter tap number tap[i], and "j" is the index and loop counter of the block size bl_size[j].
Further, "min_tap_idx" is the index (i) of the filter tap number with the minimum cost, and "min_bl_size_idx" is the index (j) of the block size with the minimum cost.
Further, "MAX" is the initial value (a sufficiently large value) of the minimum cost.
· tap[i] (i = 0 to N1): an array storing N1 (N1 ≥ 1) predetermined selectable filter tap numbers.
· bl_size[j] (j = 0 to N2): an array storing N2 (N2 ≥ 1) predetermined selectable block sizes (bl_size[j] × bl_size[j] pixels).
· block_filter_on_off_flag: a flag indicating whether processing that selects, for each block in the frame, whether to perform the filtering process is carried out.
· frame_filter_on_off_flag: a flag indicating whether the filtering process is performed in the frame.
Step ST2 is a step of setting an initial value, and steps ST3 to ST8 are loops of a process of selecting the number of filter taps.
Further, step ST9 is a step of setting an initial value, and steps ST10 to ST16 are a loop of processing for selecting a block size and determining whether or not to perform a filtering process for each block of the selected block size.
Further, steps ST17 to ST18 are steps of determining whether or not to perform the filtering process on the entire frame, steps ST19 to ST20 are steps of setting frame_filter_on_off_flag to 1 (ON) and performing the optimal filtering process determined in steps ST1 to ST16, and steps ST21 to ST22 are steps of setting frame_filter_on_off_flag to 0 (OFF) and not performing the filtering process on the frame.
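The two selection loops described above (tap number, then block size for per-block ON/OFF) can be sketched as follows. This is a minimal sketch only: the cost functions are stand-ins for the device's rate-distortion cost, and the two-stage search shape mirrors the flowchart description, not the actual implementation.

```python
MAX = float("inf")  # initial value of the minimum cost (a sufficiently large value)

def select_loop_filter_params(tap, bl_size, cost_for_tap, cost_for_block_size):
    """Sketch of steps ST2-ST16: first select the filter tap number,
    then select the block size for per-block ON/OFF, each by minimum cost."""
    # Steps ST3-ST8: loop over the candidate tap numbers tap[i].
    min_cost, min_tap_idx = MAX, 0
    for i in range(len(tap)):
        cost = cost_for_tap(tap[i])
        if cost < min_cost:
            min_cost, min_tap_idx = cost, i
    # Steps ST10-ST16: loop over the candidate block sizes bl_size[j].
    min_cost, min_bl_size_idx = MAX, 0
    for j in range(len(bl_size)):
        cost = cost_for_block_size(tap[min_tap_idx], bl_size[j])
        if cost < min_cost:
            min_cost, min_bl_size_idx = cost, j
    return min_tap_idx, min_bl_size_idx

# Toy usage: the lambdas below are placeholder costs, not real RD costs.
idx = select_loop_filter_params(
    tap=[3, 5, 7], bl_size=[8, 16, 32],
    cost_for_tap=lambda t: abs(t - 5),
    cost_for_block_size=lambda t, b: abs(b - 16))
```

Only the indices min_tap_idx and min_bl_size_idx need to be signalled, since both devices share the tap[] and bl_size[] arrays.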
As described above, when the filter design processing unit 13 has generated the wiener filters and performed the filtering process, it outputs filter information relating to the wiener filters to the variable length encoding unit 8.
The filter information includes a flag (frame_filter_on_off_flag) indicating whether or not the filtering process is performed within the frame.
When this flag is ON (the filtering process is performed), the following information is further included in the filter information.
(1) The number of wiener filters (the number of classes to which one or more regions belong)
The number of wiener filters may differ for each frame.
(2) Information (index) on the number of taps of the wiener filters
If the tap number is common to all filters in the frame, a single common tap number is included.
If the tap number differs for each filter, the tap number of each filter is included.
(3) Information on the coefficients of the wiener filters actually used (the wiener filters of classes to which one or more regions belong)
Information on wiener filters that were generated but not actually used is not included.
(4) Per-block ON/OFF information of the filter and block size information
A flag (block_filter_on_off_flag) indicating whether the filtering process is switched ON/OFF for each block in the frame.
Only when block_filter_on_off_flag is ON, the block size information (index) and the per-block ON/OFF information of the filtering process are included.
Here, the information items (1) to (4) are included as the filter information, but the number of wiener filters, the number of taps of the wiener filters, and the block size used for ON/OFF switching may instead be held as information specified in common by the image encoding device and the image decoding device, without being encoded and transmitted.
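Items (1) to (4) above can be pictured as the following container. This is a hypothetical sketch only: the field names and the dictionary layout are illustrative assumptions, not the actual bitstream syntax produced by the variable length encoding unit 8.

```python
def build_filter_info(frame_filter_on, filters_used, tap_idx,
                      block_filter_on, bl_size_idx=None, block_on_off=None):
    """Illustrative container mirroring filter-information items (1)-(4)."""
    info = {"frame_filter_on_off_flag": 1 if frame_filter_on else 0}
    if not frame_filter_on:
        return info  # nothing else is signalled when the frame is not filtered
    info["num_filters"] = len(filters_used)        # (1) number of wiener filters
    info["tap_idx"] = tap_idx                      # (2) index of the tap number
    info["coefficients"] = filters_used            # (3) only filters actually used
    info["block_filter_on_off_flag"] = 1 if block_filter_on else 0  # (4)
    if block_filter_on:
        info["bl_size_idx"] = bl_size_idx          # block size index
        info["block_on_off"] = block_on_off        # per-block ON/OFF bits
    return info

info = build_filter_info(True, [[0.25] * 4], tap_idx=1,
                         block_filter_on=True, bl_size_idx=0,
                         block_on_off=[1, 0, 1])
```

When frame_filter_on_off_flag is OFF, the container collapses to the single flag, matching the text: no further items are transmitted for that frame.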
Although fig. 3 has been described above as the specific processing content of the loop filter 6, the loop filter 6 may instead omit steps ST9 to ST16, i.e., not perform the per-block ON/OFF switching of the filtering process (in which case item (4) is not included in the filter information).
The filter information output from the filter design processing unit 13 is entropy-encoded by the variable length coding unit 8 and transmitted to the image decoding apparatus as described above.
Fig. 6 is an explanatory diagram showing an example of the bit stream generated by the variable length coding unit 8.
Fig. 7 is a block diagram showing an image decoding device according to embodiment 1 of the present invention.
In fig. 7, upon receiving a bit stream from the image encoding apparatus, the variable length decoding unit 21 performs a process of variable length decoding compressed data, filter information, and a parameter for prediction signal generation from the bit stream. The variable length decoding unit 21 constitutes a variable length decoding unit.
The prediction unit 22 performs processing for generating a prediction signal representing a prediction image using the parameter for prediction signal generation variable-length decoded by the variable-length decoding unit 21. In particular, when a motion vector is used as a parameter for generating a prediction signal, a process of generating a prediction signal from the motion vector and a reference image signal stored in the memory 26 is performed. The prediction unit 22 constitutes prediction image generation means.
The prediction error decoding unit 23 performs the following processing: the compressed data variable-length decoded by the variable-length decoding unit 21 is inversely quantized and subjected to inverse DCT processing, thereby calculating a prediction error signal corresponding to the prediction error signal output from the prediction unit 2 in fig. 1.
The adder 24 performs a process of adding the prediction error signal calculated by the prediction error decoding unit 23 to the prediction signal generated by the prediction unit 22 to calculate a decoded image signal corresponding to the decoded image signal output from the adder 5 in fig. 1.
The prediction error decoding unit 23 and the adder 24 constitute decoding means.
The loop filter 25 performs filtering processing for compensating for distortion superimposed on the decoded image signal output from the adder 24, and performs processing for outputting the filtered decoded image signal to the outside and the memory 26 as a filtered decoded image signal. The loop filter 25 constitutes a filter operation means.
The memory 26 is a recording medium that stores the filtered decoded image signal output from the loop filter 25 as a reference image signal.
Fig. 8 is a block diagram showing the loop filter 25 of the image decoding apparatus according to embodiment 1 of the present invention.
In fig. 8, the frame memory 31 is a recording medium that stores the decoded image signal output from the adder 24 for 1 frame.
The region classification unit 32 performs processing of extracting feature quantities of a plurality of regions constituting a decoded image represented by a 1-frame decoded image signal stored in the frame memory 31, and classifying a category to which each region belongs based on the feature quantities, as in the region classification unit 12 of fig. 2.
The filter processing unit 33 performs processing of generating a wiener filter suitable for the type to which each region classified by the region classification unit 32 belongs by referring to the filter information variable-length decoded by the variable-length decoding unit 21, and compensating for distortion superimposed on the region using the wiener filter.
In the example of fig. 8, the loop filter 25 in which the frame memory 31 is mounted on the previous stage is shown, but when the filtering process is performed on a macroblock-by-macroblock basis, the frame memory 31 may not be mounted on the previous stage, and the region classification unit 32 may extract the feature values of a plurality of regions constituting the decoded image of the macroblock, as shown in fig. 9.
However, in this case, the filtering process in the image encoding device must also be performed independently for each macroblock.
Next, the operation will be described.
Upon receiving a bit stream from the image encoding apparatus, the variable length decoding unit 21 performs variable length decoding on the compressed data, the filter information, and the prediction signal generation parameter from the bit stream.
Upon receiving the parameter for generating the prediction signal from the variable length decoding unit 21, the prediction unit 22 generates the prediction signal based on the parameter for generating the prediction signal. In particular, when a motion vector is received as a parameter for generating a prediction signal, a prediction signal is generated from the motion vector and a reference image signal stored in the memory 26.
Upon receiving the compressed data from the variable length decoding unit 21, the prediction error decoding unit 23 performs inverse quantization on the compressed data and performs inverse DCT processing, thereby calculating a prediction error signal corresponding to the prediction error signal output from the prediction unit 2 of fig. 1.
If the prediction error decoding unit 23 calculates the prediction error signal, the adder 24 adds the prediction error signal to the prediction signal generated by the prediction unit 22, thereby calculating a decoded image signal corresponding to the local decoded image signal output from the adder 5 in fig. 1.
Upon receiving the decoded image signal from the adder 24, the loop filter 25 performs filtering processing for compensating for distortion superimposed on the decoded image signal, outputs the decoded image signal after the filtering processing to the outside as a filtered decoded image signal, and stores the decoded image signal in the memory 26 as a reference image signal.
The processing content of the loop filter 25 will be specifically described below.
Fig. 10 is a flowchart showing the processing contents of the loop filter 25 of the image decoding apparatus according to embodiment 1 of the present invention.
First, the frame memory 31 of the loop filter 25 stores the decoded image signal output from the adder 24 for 1 frame.
When the flag (frame_filter_on_off_flag) included in the filter information is ON (the filtering process is performed) (step ST31), the area classification unit 32 extracts the feature values of each of the plurality of areas constituting the decoded image represented by the 1-frame decoded image signal stored in the frame memory 31, and classifies the class to which each area belongs based on the feature values, in the same manner as the area classification unit 12 in fig. 2 (step ST32).
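The class assignment of step ST32 can be sketched as follows. The feature used here (coarsely quantized sample variance per region) and the square-block region shape are illustrative assumptions only; the text does not fix a particular feature quantity.

```python
def classify_regions(image, block, num_classes):
    """Sketch of region classification: split the decoded image into
    block x block regions and assign each a class index derived from a
    feature quantity (here: local sample variance, an assumed feature)."""
    h, w = len(image), len(image[0])
    classes = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            pix = [image[yy][xx]
                   for yy in range(y, min(y + block, h))
                   for xx in range(x, min(x + block, w))]
            mean = sum(pix) / len(pix)
            var = sum((p - mean) ** 2 for p in pix) / len(pix)
            # Coarse quantization of the feature into num_classes buckets
            # (the quantization step of 100 is arbitrary for illustration).
            classes[(y, x)] = min(int(var // 100), num_classes - 1)
    return classes

flat = [[10] * 8 for _ in range(8)]          # a featureless decoded image
cmap = classify_regions(flat, block=4, num_classes=4)
```

Because encoder and decoder run the same deterministic classification on the same decoded samples, the class map itself never needs to be transmitted.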
Upon receiving the filter information from the variable length decoding unit 21, the filter processing unit 33 generates the wiener filter applied to the class to which each of the regions classified by the region classification unit 32 belongs, with reference to the filter information (step ST33).
For example, when the number of wiener filters (the number of classes to which one or more regions belong) is N, the number of taps of each wiener filter is L × L, and the coefficient values of the i-th wiener filter are w_i11, w_i12, …, w_i1L, …, w_iL1, w_iL2, …, w_iLL, the N wiener filters W_i (i = 1, 2, …, N) are represented by the following formula:

    W_i = | w_i11  w_i12  …  w_i1L |
          | w_i21  w_i22  …  w_i2L |
          |   ⋮      ⋮          ⋮  |
          | w_iL1  w_iL2  …  w_iLL |
If the filter processing unit 33 generates the N wiener filters W_i, it compensates for the distortion superimposed on the 1-frame decoded image signal using these wiener filters, and outputs the compensated decoded image signal to the outside and to the memory 26 as the filtered decoded image signal (step ST34).
Here, the filtered decoded image signal ŝ is represented by the following formula (4):

    ŝ = Σ_{m=1}^{L} Σ_{n=1}^{L} w_{id(s)mn} · S_{mn}    (4)

The matrix S is the reference signal group consisting of the L × L pixels around the decoded image signal s to be filtered, and id(s) is the number (filter number) of the class, obtained by the region classification unit 32, to which the region containing the signal s belongs.
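Formula (4) can be sketched per pixel as follows. The edge handling (clamping coordinates at the image border) is an assumption not specified in the text; filter layout and class lookup follow the definitions above.

```python
def apply_classified_filter(dec, filters, class_of, L):
    """Sketch of formula (4): each output pixel is the L x L wiener filter
    W_id(s) of that pixel's class applied to the surrounding reference
    pixel group S (borders clamped, an illustrative assumption)."""
    h, w = len(dec), len(dec[0])
    half = L // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wmat = filters[class_of(y, x)]   # W_id(s): filter of this class
            acc = 0.0
            for m in range(L):
                for n in range(L):
                    yy = min(max(y + m - half, 0), h - 1)
                    xx = min(max(x + n - half, 0), w - 1)
                    acc += wmat[m][n] * dec[yy][xx]
            out[y][x] = acc
    return out

# Usage: a single 3x3 averaging filter leaves a flat image unchanged.
avg3 = [[1.0 / 9] * 3 for _ in range(3)]
flat = [[4.0] * 4 for _ in range(4)]
out = apply_classified_filter(flat, [avg3], lambda y, x: 0, L=3)
```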
During the above filtering process, the filter processing unit 33 refers to the flag (block_filter_on_off_flag) included in the filter information. When this flag is 1 (ON), it refers to the block size information included in the filter information to determine the plurality of blocks (K) constituting the decoded image, and then performs the filtering process while referring to the per-block information, also included in the filter information, indicating whether or not each block (K) is to be filtered.
That is, when the flag (block_filter_on_off_flag) is 1 (ON), the filter processing unit 33 performs the filtering process on the decoded image signal in each block (K), among the plurality of blocks constituting the decoded image, using the wiener filter of the class to which the region containing the block (K) belongs; for a block (K) not subjected to the filtering process, it outputs the decoded image signal before the filtering process as it is to the outside and to the memory 26 as the filtered decoded image signal.
On the other hand, when the flag (block_filter_on_off_flag) is 0 (OFF), the filtering process is performed on all decoded image signals in the frame using the filter corresponding to the class assigned to each region by the region classification unit 32.
When the flag (frame_filter_on_off_flag) included in the filter information is OFF (no filtering process is performed) (step ST31), the filter processing unit 33 outputs the decoded image signal output from the adder 24 to the outside and to the memory 26 as the filtered decoded image signal without performing the filtering process for the frame (step ST35).
As described above, according to this embodiment 1, the loop filter 6 includes: a region classification unit 12 that extracts feature values of a plurality of regions constituting the local decoded image represented by the local decoded image signal output from the adder 5, and classifies a category to which each region belongs based on the feature values; and a filter design processing unit 13 that generates a wiener filter that minimizes a sum of squared errors between an image signal to be encoded and the local decoded image signal in 1 or more regions belonging to the category, for each category to which 1 or more regions among the plurality of regions constituting the local decoded image belong, and compensates for distortion superimposed on the region using the wiener filter.
In addition, according to embodiment 1, the loop filter 25 includes: a region classification unit 32 that extracts feature values of a plurality of regions constituting the decoded image represented by the decoded image signal output from the adder 24, and classifies the class to which each region belongs based on the feature values; and a filter processing unit 33 for generating a wiener filter suitable for the type to which each of the regions classified by the region classification unit 32 belongs with reference to the filter information variable-length decoded by the variable-length decoding unit 21, and compensating for distortion superimposed on the region using the wiener filter.
Embodiment 2.
In embodiment 1, the filter design processing unit 13 generates a wiener filter for each class to which one or more regions belong and, for each of the plurality of blocks (K) constituting the local decoded image, performs the filtering process in the block (K) using the wiener filter of the class to which the region containing the block (K) belongs. Alternatively, for each block, the wiener filter that minimizes the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K) may be selected from among the wiener filters generated for the classes to which one or more regions belong, and the distortion superimposed on the block (K) may be compensated using that wiener filter.
Specifically, the following is described.
Fig. 11 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 2 of the present invention.
As in embodiment 1, the filter design processing unit 13 generates a wiener filter for each of the classes to which one or more regions belong (steps ST2 to ST8).
However, in embodiment 2, instead of the flag (block_filter_on_off_flag) indicating whether the process of selecting whether or not to perform the filtering process for each block in the frame is carried out, a flag (block_filter_selection_flag) indicating whether a filter to be used is selected for each block in the frame is used. This flag (block_filter_selection_flag) is initially set to OFF in step ST40, and is set to ON only when step ST46 is carried out.
As described later, the block size and the per-block filter selection information are included in the filter information only when the flag (block_filter_selection_flag) is ON.
When the wiener filters have been generated for the classes to which one or more regions belong, the filter design processing unit 13 selects, for each of the plurality of blocks (K) constituting the local decoded image, the optimal option (for example, the one that minimizes the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K)) from among performing the filtering process with any one of the generated wiener filters and not performing the filtering process (steps ST9, ST41 to ST47).
Specifically, when 4 wiener filters W_1, W_2, W_3, W_4 are generated and the filtering process is performed using each of them, if the magnitude relation of the square error sums E in the block (K) is as follows, the wiener filter W_3 giving the smallest square error sum is selected for the block (K).

    E_W3 < E_W2 < E_W4 < E_W0 < E_W1

Here, E_W0 denotes the square error sum E when no filtering process is performed.
Fig. 12 is an explanatory diagram showing an example of wiener-filter selection in the plurality of blocks (K) constituting the local decoded image; for example, the wiener filter W_2 is selected in block (1), and the wiener filter W_3 is selected in block (2).
When determining that the filtering process using the wiener filters is to be performed for the frame, the filter design processing unit 13 sets the flag (frame_filter_on_off_flag) to 1 (ON), performs the filtering process that gave the minimum cost in steps ST1 to ST9 and ST40 to ST47, and outputs the filtered local decoded image signal to the memory 7 as the reference image signal (steps ST17 to ST20).
On the other hand, when it is determined that the filtering process is not to be performed for the entire frame (steps ST17 to ST18), the flag (frame_filter_on_off_flag) is set to 0 (OFF), and the local decoded image signal before the filtering process is output to the memory 7 as it is as the reference image signal (steps ST21 to ST22).
As described above, when the filter design processing unit 13 has generated the wiener filters and performed the filtering process, it outputs filter information relating to the wiener filters to the variable length encoding unit 8.
The filter information includes a flag (frame_filter_on_off_flag) indicating whether or not the filtering process is performed within the frame.
When this flag is ON (the filtering process is performed), the following information is further included in the filter information.
(1) The number of wiener filters (the number of classes to which one or more regions belong)
The number of wiener filters may differ for each frame.
(2) Information (index) on the number of taps of the wiener filters
If the tap number is common to all filters in the frame, a single common tap number is included.
If the tap number differs for each filter, the tap number of each filter is included.
(3) Information on the coefficients of the wiener filters actually used (the wiener filters of classes to which one or more regions belong)
Information on wiener filters that were generated but not actually used is not included.
(4) Per-block filter selection information and block size information
A flag (block_filter_selection_flag) indicating whether a filter is selected for each block within the frame.
Only when block_filter_selection_flag is ON, the block size information (index) and the selection information for each block are included.
Here, the information items (1) to (4) are included as the filter information, but the number of wiener filters, the number of taps of the wiener filters, and the block size may instead be held as information specified in common by the image encoding device and the image decoding device, without being encoded and transmitted.
The loop filter 25 in the image decoding apparatus performs the following processing.
Fig. 13 is a flowchart showing the processing contents of the loop filter 25 of the image decoding apparatus according to embodiment 2 of the present invention.
First, the frame memory 31 of the loop filter 25 stores the decoded image signal output from the adder 24 for 1 frame.
When the flag (frame_filter_on_off_flag) included in the filter information is ON (the filtering process is performed) (step ST31) and the flag (block_filter_selection_flag) included in the filter information is OFF (step ST51), the region classification unit 32 extracts the feature values of each of the plurality of regions constituting the decoded image represented by the 1-frame decoded image signal stored in the frame memory 31, and classifies the class to which each region belongs based on the feature values (step ST32), as in embodiment 1 described above.
On the other hand, when the flag (frame_filter_on_off_flag) included in the filter information is ON (the filtering process is performed) (step ST31) and the flag (block_filter_selection_flag) included in the filter information is ON (step ST51), the region classification unit 32 refers to the block size information (the selection unit of the filter) and the per-block filter selection information included in the filter information, and classifies the class for each block (step ST52).
If the region classification unit 32 classifies the class to which each region (each block) belongs, the filter processing unit 33 refers to the filter information output from the variable length decoding unit 21 and, as in embodiment 1, generates the wiener filter applied to the class to which each region (each block) classified by the region classification unit 32 belongs (step ST33).
After generating the wiener filters applicable to the respective classes, when the flag (block_filter_selection_flag) is OFF, the filter processing unit 33 performs the filtering process on all decoded image signals in the frame using the generated wiener filters, as in the case where the flag (block_filter_on_off_flag) is OFF in embodiment 1, and outputs the filtered decoded image signals to the outside and to the memory 26 as the filtered decoded image signal (step ST53).
On the other hand, when the flag (block_filter_selection_flag) is ON, the filter processing unit 33 generates the wiener filters applicable to the respective classes, compensates for the distortion superimposed on the decoded image signal in each block using the wiener filter selected for that block, and outputs the filtered decoded image signal to the outside and to the memory 26 (step ST53).
At this time, the filtered decoded image signal ŝ is represented by the following formula (5):

    ŝ = Σ_{m=1}^{L} Σ_{n=1}^{L} w_{id_2(bl)mn} · S_{mn}    (5)

The matrix S is the reference signal group consisting of the L × L pixels around the decoded image signal s to be filtered.
id_2(bl) is the class number (filter number) given by the filter selection information for the block bl containing the decoded image signal s.
Note that if id_2(bl) = 0, the block is regarded as not filtered, and the filtering process for that block is not performed.
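The decoder-side use of id_2(bl) can be sketched compactly as follows; the block representation and the offset-style toy filters are illustrative assumptions, and index 0 skips filtering exactly as described above.

```python
def decode_loop_filter_blocks(blocks, selection, filters, filter_fn):
    """Sketch of formula (5) at the decoder: for each block bl, the filter
    number id_2(bl) from the filter selection information picks the wiener
    filter to apply; 0 means the block is output without filtering."""
    out = []
    for bl, idx in zip(blocks, selection):
        if idx == 0:
            out.append(bl)                        # id_2(bl) = 0: no filtering
        else:
            out.append(filter_fn(filters[idx - 1], bl))
    return out

# Toy usage: one candidate "filter" that adds a constant offset.
out = decode_loop_filter_blocks(
    blocks=[[1, 2], [3, 4]], selection=[0, 1],
    filters=[2], filter_fn=lambda off, bl: [p + off for p in bl])
```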
As described above, according to embodiment 2, for each of a plurality of blocks (K) constituting a decoded image, a wiener filter that minimizes the sum of squared errors between an image signal to be encoded in the block (K) and a decoded image signal is selected from wiener filters generated for each of the categories to which 1 or more regions belong, and distortion superimposed on the block (K) is compensated for using the wiener filter, so that the accuracy of improvement in image quality can be further improved as compared with embodiment 1.
Embodiment 3.
In embodiment 2, for each of the plurality of blocks (K) constituting the decoded image, the option that minimizes the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K) is selected from among using any one of the wiener filters generated for the classes to which one or more regions in the frame belong and not performing the filtering process. Alternatively, one or more wiener filters may be prepared in advance, and the option minimizing the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K) may be selected from among using any one of the previously prepared wiener filters, using any one of the wiener filters generated for the classes to which one or more regions in the frame belong, and not performing the filtering process.
Fig. 14 is a flowchart showing the processing contents of the loop filter 6 of the image encoding device according to embodiment 3 of the present invention.
In embodiment 3, since the number of options of the wiener filter increases, the probability of selecting the optimal wiener filter is improved as compared with embodiment 2.
The method of selecting the wiener filter is the same as that in embodiment 2, and therefore, the description thereof is omitted.
The processing contents of the image decoding apparatus are also the same as those of embodiment 2, and therefore, the description thereof is omitted.
Embodiment 4.
In embodiment 2, for each of the plurality of blocks (K) constituting the decoded image, the option that minimizes the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K) is selected from among using any one of the wiener filters generated for the classes to which one or more regions in the frame belong and not performing the filtering process. Alternatively, the option minimizing the sum of square errors between the image signal to be encoded and the local decoded image signal in the block (K) may be selected from among using any one of the wiener filters generated for the classes to which one or more regions in the frame belong, using any one of the wiener filters used in an already-encoded frame, and not performing the filtering process.
Here, fig. 15 is a flowchart showing the processing content of the loop filter 6 for the first frame, and is the same as the flowchart of fig. 11 in embodiment 2 described above.
Fig. 16 is a flowchart showing the processing content of the loop filter 6 for the second and subsequent frames.
As methods of referring to a wiener filter used in an already-encoded frame, the following are conceivable, for example.
(1) The wiener filter used in the block at the position indicated by a representative motion vector calculated from the block to be filtered
(2) The wiener filter used in the block at the same position as the block to be filtered in the temporally closest frame
(3) The wiener filter used in the block, among the blocks in the encoded frame, having the highest cross-correlation coefficient with the block to be filtered
In the case of (3), the same block search process is required in both the image encoding device and the image decoding device.
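Building the enlarged candidate set of embodiment 4 can be sketched as follows, using reference method (2) (the co-located block of the temporally closest encoded frame). The mapping from block position to previously used filter is an assumed bookkeeping structure, not something the text specifies.

```python
def build_candidates(in_frame_filters, prev_frame_filters, block_pos):
    """Sketch for embodiment 4: the candidate filters for a block are the
    wiener filters designed in the current frame plus, if available, the
    filter used at the co-located block (reference method (2)) of the
    temporally closest already-encoded frame. 'No filtering' is handled
    separately as selection index 0."""
    candidates = list(in_frame_filters)
    prev = prev_frame_filters.get(block_pos)  # co-located block's filter, if any
    if prev is not None:
        candidates.append(prev)
    return candidates

in_frame = ["F1", "F2"]                 # filters designed in the current frame
prev = {(0, 0): "Fprev"}                # filters recorded for the previous frame
c1 = build_candidates(in_frame, prev, (0, 0))
c2 = build_candidates(in_frame, prev, (1, 0))
```

Because both devices keep the same record of previously used filters, enlarging the candidate set this way adds no extra coefficient signalling.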
In embodiment 4, since the number of options of the wiener filter increases, the probability of selecting the optimal wiener filter is improved as compared with embodiment 2.
The method of selecting the wiener filter is the same as that in embodiment 2, and therefore, the description thereof is omitted.
The processing contents of the image decoding apparatus are also the same as those of embodiment 2, and therefore, the description thereof is omitted.
Industrial applicability
The image encoding device, the image decoding device, the image encoding method, and the image decoding method according to the present invention can improve the accuracy of improving the image quality, and are suitable for an image encoding device and an image encoding method for compression-encoding and transmitting an image, an image decoding device and an image decoding method for decoding an image from encoded data transmitted from an image encoding device, and the like.
Claims (4)
1. An image encoding device is characterized by comprising:
a filter operation unit configured to apply a filter process to a local decoded image obtained by adding a difference image generated by decoding a compressed difference image and a predicted image;
a variable length encoding unit for variable length encoding the parameters for prediction signal generation and the compressed difference image,
wherein the filtering operation means determines a type for each pixel constituting the local decoded image, applies a filter corresponding to the determined type to each pixel, and performs filtering processing,
the variable length coding unit performs variable length coding on information indicating whether or not filtering processing is performed for each of a plurality of blocks constituting the locally decoded image.
2. An image encoding method is characterized by comprising:
a filtering operation processing step of applying filtering processing to a local decoded image obtained by adding a difference image generated by decoding a compressed difference image and a predicted image;
a variable length coding step of performing variable length coding on the parameter for prediction signal generation and the compressed difference image,
determining a class for each pixel constituting the local decoded image, applying a filter corresponding to the determined class to each pixel, and performing the filtering process,
the variable length coding processing step includes a process of performing variable length coding on information indicating whether or not filtering processing is performed for each of a plurality of blocks constituting the local decoded image.
3. An image decoding device is characterized by comprising:
a variable length decoding unit that performs variable length decoding processing on the encoded bit stream to obtain a parameter for prediction signal generation and a compressed difference image; and
a filtering operation processing unit that applies filtering processing to a decoded image obtained by adding a predicted image generated using the parameter for generating a predicted signal and a difference image generated by decoding the compressed difference image,
the filtering operation processing unit determines a type for each pixel of the decoded image, applies a filter corresponding to the determined type to each pixel, and performs filtering processing,
the variable length decoding unit performs variable length decoding on information indicating whether or not to perform filtering processing for each of a plurality of blocks constituting the decoded image.
4. An image decoding method is characterized by comprising:
a variable length decoding processing step of obtaining a parameter for generating a prediction signal and a compressed difference image by performing variable length decoding processing on the coded bit stream; and
a filtering operation processing step of performing filtering processing on a decoded image obtained by adding a predicted image generated using the parameter for generating a predicted signal and a difference image generated by decoding the compressed difference image,
in the filtering operation processing step, a class is determined for each pixel of the decoded image, filtering processing is performed for each pixel by applying a filter corresponding to the determined class,
the variable length decoding processing step includes processing of performing variable length decoding on information indicating whether or not filtering processing is performed for each of a plurality of blocks constituting the decoded image.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2009146350 | 2009-06-19 | ||
| JP2009-146350 | 2009-06-19 |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK13105879.0A Addition HK1178355B (en) | 2009-06-19 | 2010-05-25 | Image encoding device, image decoding device, image encoding method, and image decoding method |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK13105879.0A Division HK1178355B (en) | 2009-06-19 | 2010-05-25 | Image encoding device, image decoding device, image encoding method, and image decoding method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1210556A1 (en) | 2016-04-22 |
| HK1210556B (en) | 2019-08-09 |