Method for post-processing decoded video image, using diagonal pixels
FIELD
[0001] The invention relates to a method, an apparatus, a computer program and computer memory means for post-processing decoded video image formed of consecutive still images.
BACKGROUND
[0002] Video image is encoded and decoded in order to reduce the amount of data so that the video image can be stored more efficiently in memory means or transferred using a telecommunication connection. An example of a video coding standard is MPEG-4 (Moving Pictures Expert Group), where the idea is to send video image in real time on a wireless channel. This is a very ambitious aim, as if the image to be sent is for example of cif size (288 x 352 pixels) and the transmission frequency is 15 images per second, then 36.5 million bits should be packed into 64 kilobits each second. The packing ratio would in such a case be extremely high, 570:1.
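The figures above can be checked with a short calculation. The sketch below assumes uncompressed 24-bit pixels, an assumption the text does not state explicitly, but one that reproduces the quoted numbers:

```python
# Raw bit rate of CIF video at 15 frames per second, assuming
# uncompressed 24-bit pixels (assumption; not stated in the text).
width, height = 352, 288          # CIF resolution
bits_per_pixel = 24
frames_per_second = 15

bits_per_frame = width * height * bits_per_pixel
raw_bitrate = bits_per_frame * frames_per_second   # bits per second
channel_bitrate = 64 * 1000                        # 64 kbit/s channel

print(round(raw_bitrate / 1e6, 1))           # ~36.5 million bits per second
print(round(raw_bitrate / channel_bitrate))  # packing ratio ~570
```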
[0003] In order to transfer an image, the image is typically divided into image blocks, the size of which is selected to be suitable with the system. The image block information generally comprises information about the brightness, colour and location of an image block in the image itself. The data in the image blocks is compressed block-by-block using a desired coding method. Compression is based on deleting the less significant data. The compression methods are mainly divided into three different categories: spectral redundancy reduction, spatial redundancy reduction and temporal redundancy reduction. Typically various combinations of these methods are employed for compression.
[0004] In order to reduce spectral redundancy, a YUV colour model is for instance applied. The YUV model takes advantage of the fact that the human eye is more sensitive to the variation in luminance, or brightness, than to the changes in chrominance, or colour. The YUV model comprises one luminance component (Y) and two chrominance components (U, V). The chrominance components can also be referred to as cb and cr components. For example, the size of a luminance block according to the H.263 video coding standard is 16 x 16 pixels, and the size of each chrominance block is 8 x 8 pixels, together covering the same area as the luminance block. In this standard the combination of one luminance block and two chrominance blocks is referred to as a macro block. The macro blocks are generally read from the image line-by-line. Each pixel in both the luminance and chrominance blocks may obtain a value ranging between 0 and 255, meaning that eight bits are required to present one pixel. For example, value 0 of the luminance pixel refers to black and value 255 refers to white.
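As a sketch of the sizes described above, the raw size of one macro block (one 16 x 16 luminance block and two 8 x 8 chrominance blocks at eight bits per pixel) works out as follows:

```python
# Raw size of one macro block: a 16 x 16 luminance block plus two
# 8 x 8 chrominance blocks, each pixel taking eight bits.
luminance_pixels = 16 * 16
chrominance_pixels = 2 * 8 * 8
bits_per_pixel = 8

macroblock_bits = (luminance_pixels + chrominance_pixels) * bits_per_pixel
print(macroblock_bits)  # 3072 bits before any compression
```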
[0005] What is used to reduce spatial redundancy is for example the discrete cosine transform DCT. In the discrete cosine transform, the pixel presentation in the image block is transformed to a spatial frequency presentation. Only the signal frequencies that are present in the image block have high-amplitude coefficients, and the coefficients of the signals that are not present in the image block are close to zero. The discrete cosine transform is basically a lossless transform, and interference is caused to the signal only in quantization.
[0006] Temporal redundancy tends to be reduced by taking advantage of the fact that consecutive images generally resemble one another, and therefore instead of compressing each individual image, the motion data in the image blocks is generated. The basic principle is the following: a previously encoded reference block that is as good as possible is searched for the image block to be encoded, the motion between the reference block and the image block to be encoded is modelled and the motion vector coefficients are sent to the receiver. The difference between the block to be encoded and the reference block is indicated as a prediction error component, or prediction error frame. A reference picture, or reference frame, previously stored in the memory can be used in motion vector prediction of the image block. Such a coding is referred to as intercoding, which means utilizing the similarities between the images in the same image string.
[0007] In order to reduce spatial redundancy, the discrete cosine transform can be performed for the macro block using the formula:
F(u,v) = \frac{2}{N} C(u)C(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\frac{(2x+1)u\pi}{2N} \cos\frac{(2y+1)v\pi}{2N} (1)
where x and y are the coordinates of the original block, u and v are the coordinates of the transform block, N = 8 and
C(u) = \frac{1}{\sqrt{2}} for u = 0, otherwise C(u) = 1, and correspondingly for C(v) (2)
[0008] Next, table 1 shows an example of how an 8 x 8 pixel block is transformed using the discrete cosine transform. The upper part of the table shows the non-transformed pixels, and the lower part of the table shows the result after the discrete cosine transform has been carried out, where the first element of value 1303, what is known as a dc coefficient, depicts the mean size of the pixels in the block, and the remaining 63 elements, what are known as ac coefficients, illustrate the spread of the pixels in the block.
[0009] As the values of the pixels in Table 1 show, they are widely spread. Consequently the result obtained after the discrete cosine transform also includes plenty of ac coefficients of different sizes.
Table 1
[0010] Table 2 illustrates a block in which there is no spread between the pixels. As Table 2 shows, all the ac coefficients become zero, meaning that the block is compressed very efficiently.
Non-transformed pixels:
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115
115 115 115 115 115 115 115 115

Result after the discrete cosine transform:
924 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
Table 2
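The behaviour shown in Table 2 can be reproduced with a direct implementation of formula (1). The following is only an illustrative sketch in Python; the exact coefficient values depend on the normalization and rounding convention used, so the dc value it produces need not match Table 2 digit for digit:

```python
import math

def dct2(block, N=8):
    """2-D discrete cosine transform of an N x N block per formula (1)."""
    def C(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = (2.0 / N) * C(u) * C(v) * s
    return out

# A block with no spread, as in Table 2: only the dc coefficient survives.
flat = [[115] * 8 for _ in range(8)]
coeffs = dct2(flat)
ac_max = max(abs(coeffs[u][v]) for u in range(8) for v in range(8)
             if (u, v) != (0, 0))
print(round(coeffs[0][0]), round(ac_max, 6))  # dc coefficient, ac coefficients ~ 0
```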
[0011] Next, the discrete cosine transformed block is "quantized", i.e. each element therein is basically divided by a constant. This constant may vary between different macro blocks. In addition, a higher divider is generally used for ac coefficients than for dc coefficients. The "quantization parameter", from which said dividers are calculated, ranges between 1 and 31. The more zeroes the block contains, the better the block is packed, since zeroes are not sent to the channel. Different coding methods can also be applied to the quantized blocks, and finally a bit stream can be formed thereof that is sent to a decoder. An inverse quantization and an inverse discrete cosine transform are performed on the quantized blocks within the encoder, thus forming a reference image, from which the blocks of the following images can be predicted.
Hereafter the encoder thus sends the difference data between the following block and the reference blocks. Consequently the packing efficiency improves.
[0012] Quantization is a problem in video coding, as the higher the quantization used, the more information disappears from the image, and the final result is unpleasant to watch.
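The quantization step and its irreversibility can be sketched as follows. The dividers below are illustrative only; as described above, a real encoder derives them from the quantization parameter, using a higher divider for the ac coefficients than for the dc coefficient:

```python
# Quantization: divide each coefficient by its divider and round.
# Inverse quantization multiplies back, but a coefficient that
# rounded to zero is lost for good.
dc_divider, ac_divider = 8, 17        # illustrative dividers

coefficients = [920, 35, -12, 7, 0, 3, -2, 1]   # dc followed by ac values
quantized = [round(c / (dc_divider if i == 0 else ac_divider))
             for i, c in enumerate(coefficients)]
restored = [q * (dc_divider if i == 0 else ac_divider)
            for i, q in enumerate(quantized)]

print(quantized)  # [115, 2, -1, 0, 0, 0, 0, 0] - small ac values vanish
print(restored)   # [920, 34, -17, 0, 0, 0, 0, 0] - the zeroed values stay lost
```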
[0013] After decoding the bit stream and performing the decoding methods, a decoder basically carries out the same measures as the encoder when generating a reference image, meaning that similar steps are performed on the blocks as in the encoder but inversely.
[0014] Finally the assembled video image is supplied onto a display, and the final result depends to a great extent on the quantization parameter used. If an element in the block descends to zero during quantization, it can no longer be restored in inverse quantization. The discrete cosine transform and quantization cause the quality of the image to deteriorate, which can be observed as noise and segmentation.
[0015] In accordance with the prior art, the post-processing that improves image quality is carried out by averaging the pixels of the image. However, this smoothens the boundaries between the objects in the image and the image becomes blurred.
BRIEF DESCRIPTION
[0016] It is an object of the invention to provide an improved method for post-processing decoded video image, an improved apparatus for post-processing decoded video image, an improved computer program for post-processing decoded video image and improved computer memory means for post-processing decoded video image.
[0017] As an aspect of the invention a method according to claim 1 is provided for post-processing the decoded video image. A computer program according to claim 15 is also provided as an aspect of the invention. As a further aspect of the invention computer memory means according to claim 16 are provided. As a still further aspect of the invention an apparatus according to claim 17 is provided for post-processing the decoded video image. Further preferred embodiments of the invention are disclosed in the dependent claims.
[0018] The invention is based on the idea that in post-processing all or nearly all pixels in a still image are processed, except for the pixels at the edge of the image. The pixels to be used for generating a processed pixel are selected from the diagonals, or diameters, of a square fictitiously formed to surround the pixel to be processed, utilizing in the selection the information about the quantization parameter of the macro block to which the pixel belongs. The reference pixels are selected only if they deviate from the pixel to be processed at most by the remainder lost in quantization. Thus, the boundaries between the objects in the image are not interfered with and the smoothened image remains sharp. The smoothing reduces the segmentation that can be seen in the image, but at the same time the boundaries between some objects are sharpened, meaning that the invention enables the image to be simultaneously smoothed and sharpened.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The preferred embodiments of the invention are described by way of example below with reference to the accompanying drawings, in which:
Figure 1 shows apparatuses for encoding and decoding video image,
Figures 2A, 2B and 2C show the choice of reference pixels,
Figures 3A and 3B show different embodiments for selecting reference pixels,
Figure 3C illustrates the advantage achieved with the choice of reference pixels,
Figure 4 illustrates the pixels at the interfaces between the apparatus parts shown in Figure 1, and
Figure 5 is a flow chart illustrating a method for post-processing decoded video image formed of consecutive still images.
DESCRIPTION OF EMBODIMENTS
[0020] With reference to Figure 1, apparatuses for encoding and decoding video image are described. The face of a person 100 is filmed using a video camera 102. The camera 102 produces video image of individual consecutive still images, whereof one still image 104 is shown in the Figure. The camera 102 forms a matrix describing the image 104 as pixels, for example as described above, where both luminance and chrominance are provided with specific matrices. A data flow 106 depicting the image 104 as pixels is next applied to an encoder 108. It is naturally also possible to provide such an apparatus, in which the data flow 106 is applied to the encoder 108, for instance along a data transmission connection or from computer memory means. In
such a case, the idea is to compress un-compressed video image 106 using the encoder 108 for instance in order to be forwarded or stored.
[0021] Since our interest in the apparatuses concerned lies in the compression to be carried out in order to reduce spatial redundancy, only the essential parts of the encoder 108 and a decoder 120 are described. The operation of other parts is apparent to those skilled in the art on the basis of standards and textbooks, for instance the works incorporated herein by reference:
[0022] - ISO/IEC JTC 1/SC 29/WG 11: "Generic coding of audiovisual objects - Part 2: Visual", pages 178, 179, 281.
[0023] - Vasudev Bhaskaran and Konstantinos Konstantinides: "Image and Video Compression Standards - Algorithms and Architectures, Second Edition", Kluwer Academic Publishers 1997, chapter 6: "The MPEG video standards".
[0024] The encoder 108 comprises discrete cosine transform means 110 for performing the discrete cosine transform as described above for the pixels in each still image 104. A data flow 112 formed using the discrete cosine transform is applied to quantization means 114 that carry out quantization using a selected quantization ratio. Other types of coding, which are not further described in this context, can also be performed on a quantized data flow 116. The compressed video image formed using the encoder 108 is transferred over a channel 118 to the decoder 120. How the channel 118 is implemented is not described herein, since the different implementation alternatives are apparent to those skilled in the art. The channel 118 may for instance be a fixed or wireless data transmission connection. The channel 118 can also be interpreted as a transmission path, by means of which the video image is stored in memory means, for example on a laser disc, and by means of which the video image is read from the memory means and processed using the decoder 120.
[0025] The decoder 120 comprises inverse quantization means 122, which are used to decode the quantization performed in the encoder 108. The inverse quantization is unfortunately unable to restore the element of the block, the value of which descends to zero in quantization.
[0026] An inverse quantized data flow 124 is next applied to inverse discrete cosine transform means 126, which carry out inverse discrete cosine transform to the pixels in each still image 104. A data flow 128 obtained is then
applied through other possible decoding processes onto a display 130, which shows the video image formed of still images 104.
[0027] The encoder 108 and decoder 120 can be placed into different apparatuses, such as computers, subscriber terminals of various radio systems like mobile stations, or into other apparatuses where video image is to be processed. The encoder 108 and the decoder 120 can also be connected to the same apparatus, which can in such a case be referred to as a video codec.
[0028] Figure 4 describes prior art pixels at the interfaces 106, 112, 116, 124 and 128 between the apparatus parts shown in Figure 1. The test image used is the first 8 x 8 luminance block in the first image of the test sequence "calendar_qcif.yuv" known to those skilled in the art. The interface 106 shows the contents of the data flow after the camera 102. The interface 112 depicts the contents of the data flow after the discrete cosine transform means 110. The interface 116 shows the contents of the data flow after the quantization means 114. The quantization ratio used is 17.
[0029] For the sake of simplicity other known coding methods are not used, meaning that the data flow of the interface 116 is transferred along the channel 118 to the decoder 120. The interface 124 describes the contents of the data flow after the inverse quantization means 122. As Figure 4 shows, when the original data flow 112 before quantization is compared with the reconstructed data flow 124 after the inverse quantization, the ac component values, which have descended to zero as a result of the quantization and which are represented at the interface 116, can no longer be restored. In practice this means that the original image 106 before encoding and the image reconstructed using the inverse discrete cosine transform means 126 described at the interface 128 no longer correspond with one another. Noise that degrades the quality of the image has appeared on the reconstructed image.
[0030] In Figure 1, an apparatus is attached to the decoder 120 for post-processing the decoded video image formed of consecutive still images. Said apparatus comprises processing means 140 for post-processing the still image. The post-processing apparatus can be implemented so that it is integrated into the decoder 120, in which case the processing means 140 may constitute a processor including software. The processing means 140 are arranged to repeat the post-processing for the pixels in each still image, one pixel at a time.
[0031] When a pixel is post-processed, at first the pixels on both diagonals in a square area formed to surround the pixel to be processed are selected as reference pixels. Such a selection phase is depicted in Figures 2A, 2B and 2C. Figure 2A shows a block 200 of the size of 8 x 8 pixels. The pixel to be processed is described using reference numeral 202 and the letter P. In Figure 2B, a square area 204 including two diagonals 206 and 208 is formed around the pixel P to be processed. The pixels on said diagonals 206, 208 are selected, as described in Figure 2C, as reference pixels 210, which are illustrated by the letter R. In this example, the number of reference pixels R surrounding the pixel P to be processed is eight.
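The selection of Figures 2A-2C can be sketched as follows; the coordinate convention below is an assumption made for illustration:

```python
def reference_positions(px, py, square=5):
    """Coordinates of the reference pixels R on the two diagonals of a
    square area surrounding the pixel P at (px, py), cf. Figures 2A-2C."""
    reach = square // 2       # a 5 x 5 square reaches 2 steps from P
    positions = []
    for step in range(1, reach + 1):
        # one pixel per diagonal branch at each distance from P
        positions += [(px - step, py - step), (px - step, py + step),
                      (px + step, py - step), (px + step, py + step)]
    return positions

print(len(reference_positions(4, 4, square=5)))  # 8, as in Figure 2C
print(len(reference_positions(4, 4, square=3)))  # 4, as in Figure 3A
```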
[0032] More than eight or less than eight reference pixels can also be used. Figure 3A describes an embodiment in which a square 300 formed to surround the pixel to be processed is smaller than the one in Figure 2B. In Figure 2B the size of the square 204 is 5 x 5 pixels, but in Figure 3A the size of the square 300 is 3 x 3 pixels, whereby the number of reference pixels R obtained is four.
[0033] Figure 3B describes an embodiment, in which the processing means 140 are arranged to select at least four reference pixels 302, 310, 312, 314, two from each diagonal, i.e. the first one on each diagonal part starting from the pixel to be processed. The diagonals are not shown in Figure 3B for clarity, but they are located as shown in Figure 2B. One of the four diagonal parts starting from the pixel to be processed is described, i.e. the diagonal part 208 sloping downwards to the right.
[0034] Figure 3B describes an embodiment, in which the processing means 140 are arranged to select four new reference pixels R in addition to the already selected four 302, 310, 312, 314 in such a manner that the new selected pixel is either the following pixel 304 in said diagonal part, or the pixel 306 or 308 located adjacent to the first pixel P and the second pixel 304 in the diagonal part.
[0035] Four reference pixels R are a sort of minimum, as it is preferable that the reference pixels are evenly located around the pixel P to be processed. Figure 3C illustrates the significance of how the reference pixels R are located. It is assumed that Figure 3C shows such a spot in a still image, where four blocks are placed adjacent to one another, the size of each block being for example 8 x 8 pixels. Thus, block 320 is found on top left, block 322 on top right, block 324 on bottom left and block 326 on bottom right. As the Figure shows, each one of the four reference pixels R closest to the pixel P to be processed is placed in a different block 320, 322, 324, 326. The reference
pixels R thus preferably form a pattern resembling the letter X. If the reference pixels were instead placed into an XY coordinate system, i.e. the X rotated by 45 degrees, then the example described would not include any reference pixels in block 326; instead the same block 320 where the pixel P to be processed is placed would include twice the number of reference pixels, which might weaken the improvement of the image provided in post-processing. The aim is therefore to select the reference pixels R in such a manner that as many as possible of the reference pixels R are located in different blocks than the pixel P to be processed. The block boundaries are maximally faded in this way.
[0036] The reference pixels are generally located in a pattern resembling the letter X, the length of the branches thereof being determined by the length of the diagonal in the square. It should be noted that the shape of the pattern formed as the letter X may also be distorted as shown in Figure 3B, meaning that the middle part of the letter X is even, but the tips of the branches in the letter X are twisted either to the left or to the right, when examining the situation in relation to the diagonal part 208.
[0037] The processing means 140 are arranged to form an absolute value of the difference between the pixel P to be processed at a time and each reference pixel R, and if the absolute value is lower than the quantization parameter of the macro block to which the pixel P to be processed belongs, then the reference pixel R is selected to form the reference mean. Thus, the effect of the pixels belonging to a different object of the image on the pixel P to be processed is minimized, meaning that the boundaries between the objects are not blurred.
[0038] In addition, the processing means 140 are arranged to perform the following test: if at least one reference pixel R was selected to form the reference mean, then the reference mean of the selected reference pixels R is formed, and the mean formed from the pixel P to be processed and the reference mean is set as the value of the pixel P to be processed.
[0039] Next, with reference to the flow chart in Figure 5, a method for post-processing decoded video image formed of consecutive still images is explained. In this method, the post-processing is repeated for the pixels of each still image one at a time. Figure 5 illustrates the particular post-processing process that is to be performed for a single still image. In practice, when video image is post-processed, the measures shown in Figure 5 are repeated for each individual still image of the video image. Post-processing can be performed for both the luminance data and the chrominance data of the still image.
[0040] The method starts from block 500, where a decoded still image is obtained into the memory. In block 502 the next un-post-processed pixel of the image is read, which becomes the pixel P to be processed. In block 504, the quantization parameter of the macro block to which the pixel P to be processed belongs is read.
[0041] Then in block 506, the pixels on both the diagonals in the square area formed to surround the pixel P to be processed are selected as reference pixels R as described above. The number of reference pixels is indicated in our example with the letter N.
[0042] Next in block 510 an absolute value A of the difference between the pixel P to be processed at a time and each reference pixel R is formed, which can be described using the formula
A=ABS(P-R) (3)
[0043] Then in block 512, a test is performed:
A&lt;Q (4)
[0044] If the absolute value A is lower than the quantization parameter Q of the macro block to which the pixel P to be processed belongs, then in accordance with arrow 514 the process proceeds to block 518, where a reference pixel R is selected for forming the reference mean, or is added to the sum S
S=S+R (5)
[0045] If, in turn, the condition in block 512 is not fulfilled, then the process proceeds in accordance with arrow 516 to block 520, where the number N of reference pixels is reduced by one, or
N=N-1 (6)
[0046] The process proceeds from both blocks 518 and 520 to block 522, where a test is performed in order to know whether all reference pixels R have been processed. If not all reference pixels R have been processed, then the process proceeds in accordance with arrow 526 to block 508, in which the following reference pixel R is read.
[0047] When all reference pixels R, for instance four or eight reference pixels, are processed in blocks 510, 512, 518 and 520, then the condition in block 522 is fulfilled, and the process proceeds from block 522 in accordance with arrow 524 to block 528. In block 528 the value of N is tested, which informs us about the number of the reference pixels R selected to form the reference mean. The test is thus represented as
N>0 (7)
[0048] If no reference pixels R were selected to form the reference mean, i.e. N=0, then the process proceeds from block 528 in accordance with arrow 532 to block 538, where tests are performed in order to know whether all the pixels P in the image have been processed.
[0049] If at least one reference pixel R was selected to form the reference mean, i.e. N obtained at least value 1 in our example, then the process proceeds to block 534, where the reference mean M of the selected reference pixels is formed using formula
M=S/N (8)
i.e. the reference mean is the sum S of the selected reference pixels divided by the number N of the selected reference pixels.
[0050] From block 534 the process proceeds to block 536, where the mean formed of the pixel P to be processed and the reference mean M is set as the value of the pixel P to be processed; in other words the assignment is carried out
P=(M+P)/2 (9)
[0051] The process then proceeds to block 538, where a test is performed to know whether all the pixels P in the image have been processed. If all pixels have been processed, then the process proceeds in accordance with arrow 540 to block 544, where the post-processing of a single still image is ended. If all the pixels have not yet been post-processed, then the process proceeds in accordance with arrow 542 to block 502, where the following un-post-processed pixel is read, and the processing described above is started for it.
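The pixel-level processing in blocks 510 to 536 can be sketched as follows (formulas 3, 4, 8 and 9). The function below is an illustrative rendering of the flow chart, not the claimed implementation itself:

```python
def postprocess_pixel(P, reference_pixels, Q):
    """Post-process one pixel per Figure 5: a reference pixel R is
    selected only if ABS(P - R) < Q (formulas 3 and 4); the selected
    pixels form the reference mean M = S / N (formula 8) and the new
    pixel value is (M + P) / 2 (formula 9)."""
    selected = [R for R in reference_pixels if abs(P - R) < Q]
    if not selected:          # N = 0: the pixel is left unchanged
        return P
    M = sum(selected) / len(selected)
    return (M + P) / 2

# The reference pixel of value 200 belongs to another object: it
# deviates from P by more than Q and is excluded, preserving the edge.
new_value = postprocess_pixel(115, [112, 118, 116, 200], Q=17)
print(round(new_value, 4))  # 115.1667
```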
[0052] The processing means 140 process the still image 104 line-by-line, column-by-column, macro block-by-macro block, block-by-block or in accordance with another predetermined non-random way.
[0053] The method can be modified according to the accompanying dependent claims. Since a part of the contents thereof is described above in connection with the processing means 140, the explanation is not repeated herein.
[0054] It should be noted that the implementation of the variables, such as A, S and N, used in the method does not necessarily have to be as described above. It is obvious for those skilled in the art that the detailed algorithm shown in Figure 5 is merely one embodiment among the numerous embodiments of the more generally described algorithm in claim 1.
[0055] In the following the embodiments common to both the method and the apparatus are explained.
[0056] In an embodiment the processing to be performed in blocks 508, 510, 512, 518 and 520, where the choice of a reference pixel for calculating the reference mean is tested, can be simplified. This occurs in such a manner that if the reference pixel closer to the pixel to be processed on the diagonal is not selected to form the reference mean, then the reference pixels further in the same direction from the pixel to be processed on the diagonal are not selected to form the reference mean either. For example in Figure 3B, if the reference pixel 302 is not selected to form the reference mean, then the other pixels in said branch, for instance the pixel 304, are not even worth testing, meaning that the remaining reference pixels in said branch can be bypassed in block 508. Obviously in this example, N must correspondingly be reduced by the number of bypassed reference pixels.
[0057] In an embodiment, the processing means 140 are arranged to weightedly calculate the mean of the pixel to be processed and the refer- ence mean, in other words formula 9 obtains the following form
P =(aM+bP)/(a+b), (10) where a and b are weighting coefficients. For instance, if a=3 and b=1 , then formula 10 obtains the following form
P=(3M+P)/4 (11)
[0058] Weighting can be used to adjust smoothing. For example, if the reference mean is weighted more than the pixel to be processed, then the image obtains more smoothing, and correspondingly weighting the pixel to be processed more than the reference mean reduces the smoothing. This kind of weighting affects the smoothing of evenly coloured areas in particular, in which case the colour is more evenly distributed in the coloured area when the reference mean is weighted more than the pixel to be processed.
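A minimal sketch of the weighted update in formulas 10 and 11:

```python
def weighted_update(P, M, a=3, b=1):
    """Formula 10: P = (a*M + b*P) / (a + b). With a=3 and b=1 this
    reduces to formula 11, P = (3M + P) / 4."""
    return (a * M + b * P) / (a + b)

print(weighted_update(P=100, M=120))            # 115.0: smoothing emphasized
print(weighted_update(P=100, M=120, a=1, b=3))  # 105.0: smoothing reduced
```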
[0059] In an embodiment the processing means 140 are arranged to employ the value of an already post-processed pixel when post-processing an unprocessed pixel, in which case, in one example, only one copy of the still image is stored into the memory of the processing means 140. This has the advantage that the apparatus requires less memory than if both the unprocessed and the post-processed image were separately stored into the memory.
[0060] In an embodiment the processing means 140 are arranged to weight the value of the quantization parameter using a weighting coefficient before the comparison with the absolute value of the difference is carried out, i.e. formula 4 obtains the following form
A<cQ, (12) where c is the weighting coefficient, for instance c=2. Using a weighting coefficient that exceeds one, the edges between the objects in the still images are smoothed even more, if this is desired. In some cases such a smoothing of object edges instead of sharpening may be sensible.
[0061] In an embodiment the processing means 140 are arranged, when calculating the reference mean, to weight each reference pixel inversely in relation to the reference pixel's distance from the pixel to be processed. This means that the further the reference pixel is from the pixel to be processed, the smaller its significance becomes. In the example shown in Figure 2B, each branch of the letter X comprises only two reference pixels, meaning that the weighting might for instance be such that the reference pixel that is placed closer is weighted by a factor of two, and the reference pixel further apart is weighted by a factor of one, whereby the weighting ratio is 2:1. If the processing means 140 comprise an adequate amount of calculation capacity, then a branch of the letter X may include even more reference pixels, i.e. if the size of the square is for example 7 x 7 pixels, then the number of reference pixels in a branch is three, whereby the weighting ratio thereof may be for example 3:2:1. Such a weighting provides the advantage that the accuracy of the smoothing can be improved, as the significance of the reference pixels is weighted.
[0062] Post-processing is not necessarily carried out for all the pixels in a still image. The border areas in an image are problematic. In an embodiment the processing means 140 are arranged so as not to perform post-processing on at least one, preferably two, leftmost columns, rightmost columns, topmost rows and bottommost rows of the still image. This does not substantially deteriorate the quality of the image, since the narrow margin that may include errors is not generally considered to be disturbing.
[0063] On the other hand, if the pixels in the border areas are to be post-processed, then the processing means 140 in an embodiment are arranged to perform post-processing on at least one, preferably two, leftmost columns, rightmost columns, topmost rows and bottommost rows of the still image so that the pixel in the image that is perpendicularly closest to the reference pixel is employed as the value of a reference pixel outside the image. This can be implemented for instance in such a manner that the operation logic of the method is able to retrieve the value of the pixel outside the border in block 508. Another implementation is such that the rows and columns closest to the edges are copied to surround the image to provide a margin of two pixels so as to form a frame. This provides simpler operation logic but requires somewhat more memory owing to the copied pixels.
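The second implementation, copying the edge rows and columns outward to form a frame, can be sketched as follows:

```python
def pad_replicate(image, margin=2):
    """Copy the rows and columns closest to the edges outward so that
    the image is surrounded by a frame of `margin` pixels; the
    reference pixels of border pixels then always fall inside the
    padded image."""
    # extend every row to the left and right with its edge values
    rows = [[row[0]] * margin + list(row) + [row[-1]] * margin
            for row in image]
    # extend with copies of the first and last rows on top and bottom
    return ([rows[0][:] for _ in range(margin)] + rows
            + [rows[-1][:] for _ in range(margin)])

padded = pad_replicate([[1, 2], [3, 4]], margin=1)
print(padded)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```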
[0064] Another embodiment that can be used for post-processing the pixels of the border areas is such that the processing means 140 are arranged not to select a reference pixel that is outside the image for postprocessing. When moving on the edge of the image, the X-shaped pattern of the reference pixels thus lacks two or even three branches.
[0065] The processing means 140 can be implemented as a computer program operating in the processor, whereby for instance each required operation is implemented as a specific program module. The computer program thus comprises the routines for implementing the steps of the method. In order to promote the sales of the computer program, it can be stored into computer memory means, such as a CD-ROM (Compact Disc Read Only Memory). The computer program can be designed so as to operate also in a standard general-purpose personal computer, in a portable computer, in a computer network server or in another prior art computer.
[0066] The processing means 140 can be implemented also as an equipment solution, for example as one or more application specific integrated circuits (ASIC) or as operation logic composed of discrete components. When selecting the way to implement the means, a person skilled in the art observes for example the required processing power and the manufacturing costs. Different hybrid implementations formed of software and equipment are also possible.
[0067] The Applicant has carried out tests, in which the effect of post-processing is compared with the processing described in chapter F.3.1 Deblocking filter in the ISO/IEC standard 14496-2:1999(E), which is incorporated herein by reference. In the tests, the subjects compared different video sequences. The results of the tests show that the improvement obtained in
image quality provided by the post-processing method described herein was at least as good as in the standard method. However, it should be noted that this was achieved using a calculation whose complexity was only about one fourth of that of the calculation in the standard method. The reduction in complexity translates into increased speed and/or lower required processing power.
[0068] Even though the invention has above been described with reference to the example in the accompanying drawings, it is obvious that the invention is not restricted thereto but can be modified in various ways within the scope of the inventive idea disclosed in the attached claims.