CN1455600A - Intra-frame prediction method based on adjacent pixel prediction - Google Patents
Intra-frame prediction method based on adjacent pixel prediction
- Publication number: CN1455600A (application CN03136605)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An intra-frame prediction method based on adjacent pixel prediction belongs to the technical field of image processing and compression coding, and mainly concerns intra-frame prediction in video coding. The present invention first reads one image of the received video sequence and divides its luma samples into 16×16 macroblocks, which are further divided into p×p blocks (p = 4, 8, 16). Each p×p block is then split, by the parity of the pixel indices, into an even-even, an odd-odd, an even-odd, and an odd-even sub-block, and the luma samples of each sub-block are predicted under a set of prediction modes. The prediction residuals are then computed, transformed, and quantized as in JVT, and the quantized transform coefficients are coded with variable-length or arithmetic coding until the whole image has been encoded; finally the coded bitstream of the whole image is output. The proposed prediction method has a fine prediction structure and small prediction residuals, which improves the coding quality of the image and saves bit overhead.
Description
Technical Field
The intra-frame prediction method based on adjacent pixel prediction belongs to the technical field of image processing and compression coding, and mainly concerns intra-frame prediction in video coding.
Background Art
To transmit and store images over today's limited transmission bandwidth and storage media, images must be compression-coded. In moving-picture compression coding, coding algorithms fall into two cases: intra-frame coding and inter-frame coding. The first image of a video sequence, or the first image after a scene change, is coded with intra-frame transform coding; the other images are inter-frame coded. In the prior art, intra-frame coding uses spatial prediction to exploit the spatial statistical correlation of the source signal, while inter-frame coding uses block-based inter-frame prediction to exploit temporal statistical correlation. In intra-frame coding proper, an intra prediction mode is specified for each basic processing block of the image; the prediction residual is then transformed to remove the spatial correlation within the transform block, and quantized (an irreversible process that typically discards less important information to obtain an approximation close to the source samples). Finally, the quantized transform coefficients are encoded with prior-art variable-length coding or arithmetic coding.
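The encode-side loop described above (predict, take the residual, quantize, then reconstruct by the inverse path) can be sketched as follows. The transform and entropy-coding stages are omitted for brevity, and the block contents and quantization step are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the intra coding loop: residual, quantization, and
# decoder-side reconstruction. JVT also applies a DCT before quantization,
# which is omitted here. The sample values and step q are illustrative only.

def encode_block(orig, pred, q=8):
    """Quantize the prediction residual and return (levels, reconstruction)."""
    levels = [[round((o - p) / q) for o, p in zip(ro, rp)]
              for ro, rp in zip(orig, pred)]
    # Decoder side: dequantize the levels and add back the prediction.
    recon = [[p + l * q for p, l in zip(rp, rl)]
             for rp, rl in zip(pred, levels)]
    return levels, recon

orig = [[120, 124], [118, 126]]
pred = [[128, 128], [128, 128]]   # e.g. the default of 128 at an image edge
levels, recon = encode_block(orig, pred)
```

The reconstruction differs from the original exactly by the quantization error, which is what the text means by an irreversible approximation close to the source samples.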
At present, most video-coding schemes at home and abroad are based on the MPEG-4 or H.26L standards. Among them, the video coding standard issued by JVT (the Joint Video Team, formed jointly by the two international standardization bodies ITU-T and ISO/IEC JTC1) is currently very popular both at home and abroad. In the JVT video coding standard, a macroblock comprises one 16×16 luma sample block and two corresponding chroma sample blocks, and serves as the basic processing unit of the video codec.
In JVT, intra prediction of luma samples uses a 4×4 block prediction structure and a 16×16 block prediction structure.
1. The 4×4 block prediction structure, shown in Figure 3:
This structure predicts the pixels a–p of the current 4×4 block from pixels of already-coded macroblocks: the 4 pixels I–L to its left, the 8 pixels A–H above it, and the 1 pixel M at its upper-left corner.
The 4×4 block prediction structure is fine-grained and its residual is small. However, the basic unit for processing an image is a 16×16 macroblock, which requires sixteen 4×4 block predictions, and each 4×4 block prediction has 9 modes; therefore, when an intra macroblock uses the 4×4 prediction structure, the bit overhead for coding the mode information is large.
2. The 16×16 block prediction structure, shown in Figure 4:
This structure predicts the current macroblock from pixels of already-coded macroblocks: the 16 pixels to the left of the 16×16 block, the 16 pixels above it, and the 1 pixel at its upper-left corner. It uses 4 prediction modes.
Although the 16×16 block prediction structure needs only one 16×16 prediction, so that the mode-information bit overhead is relatively small, the prediction is coarse and the residual is large.
Summary of the Invention
The purpose of the present invention is to provide a luma-sample intra prediction structure and a method for computing its prediction modes that overcome the shortcomings of the two prediction structures above, i.e., to replace the two existing prediction structures in JVT with a single intra prediction structure whose performance is superior to the prior art.
The intra-frame prediction method based on adjacent pixel prediction of the present invention first reads an intra image of the received video sequence and divides its luma samples, in order from left to right and top to bottom, into 16×16 macroblocks; it then predicts the luma samples in each 16×16 macroblock, computes the prediction residuals as in JVT, transforms and quantizes them, and applies variable-length or arithmetic coding to the quantized transform coefficients until the whole image has been encoded, finally outputting the coded bitstream of the whole image. The method is characterized in that the prediction of the luma samples in each 16×16 macroblock consists of the following steps in sequence:
1) take the first 16×16 macroblock as the current macroblock;
2) divide the macroblock, from left to right and top to bottom, into p×p blocks (p = 4, 8, 16), and predict each p×p block in turn;
3) take the first p×p block as the current block;
4) partition the luma samples of the current p×p block, by the parity of the pixel indices (i = 0, 1, …, p−1 denotes the pixel row index; j = 0, 1, …, p−1 denotes the pixel column index), into an even-even sub-block (i = 0, 2, 4, …, p−2; j = 0, 2, 4, …, p−2), an even-odd sub-block (i = 0, 2, 4, …, p−2; j = 1, 3, 5, …, p−1), an odd-even sub-block (i = 1, 3, 5, …, p−1; j = 0, 2, 4, …, p−2), and an odd-odd sub-block (i = 1, 3, 5, …, p−1; j = 1, 3, 5, …, p−1);
5) predict the luma samples of the even-even sub-block of the current block: using the already-reconstructed luma samples above, to the left of, and at the upper-left corner of the p×p block (for a p×p block at the image edge, the reconstructed neighboring luma samples are taken to be 128 by convention), predict the luma samples of the even-even sub-block according to the conventional prediction modes and reconstruct them, where i = 0, 2, 4, …, p−2 denotes the pixel row coordinate and j = 0, 2, 4, …, p−2 the pixel column coordinate;
6) predict the luma samples of the odd-odd sub-block of the current block: using the already-reconstructed luma samples above, to the left of, and at the upper-left corner of the p×p block together with the reconstructed samples of the even-even sub-block, predict the luma samples of the odd-odd sub-block according to the conventional prediction modes and reconstruct them, where i = 1, 3, 5, …, p−1 denotes the pixel row coordinate and j = 1, 3, 5, …, p−1 the pixel column coordinate;
7) predict the luma samples of the even-odd sub-block of the current block: using the already-reconstructed luma samples above, to the left of, and at the upper-left corner of the p×p block together with the reconstructed samples of the even-even and odd-odd sub-blocks, predict the luma samples of the even-odd sub-block according to the conventional prediction modes and reconstruct them, where i = 0, 2, 4, …, p−2 denotes the pixel row coordinate and j = 1, 3, 5, …, p−1 the pixel column coordinate;
8) predict the luma samples of the odd-even sub-block of the current block: using the already-reconstructed luma samples above, to the left of, and at the upper-left corner of the p×p block together with the reconstructed samples of the even-even, odd-odd, and even-odd sub-blocks, predict the luma samples of the odd-even sub-block according to the conventional prediction modes and reconstruct them, where i = 1, 3, 5, …, p−1 denotes the pixel row coordinate and j = 0, 2, 4, …, p−2 the pixel column coordinate;
9) take the next p×p block as the current block and repeat steps 3) to 8) until the whole macroblock has been predicted;
10) take the next macroblock as the current macroblock and repeat steps 2) to 9) until the prediction of the luma samples of the whole image is complete.
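The parity partition of step 4) can be sketched as follows; the function name and the example block are illustrative, not from the patent.

```python
# Sketch of step 4): splitting a p x p block into four parity sub-blocks
# by the parity of the row index i and column index j.

def split_parity(block):
    """Return the even-even, even-odd, odd-even, and odd-odd sub-blocks."""
    p = len(block)
    sub = {}
    for ri, rows in (("even", range(0, p, 2)), ("odd", range(1, p, 2))):
        for ci, cols in (("even", range(0, p, 2)), ("odd", range(1, p, 2))):
            sub[ri + "-" + ci] = [[block[i][j] for j in cols] for i in rows]
    return sub

# Example: a 4x4 block numbered 0..15 in raster order.
block4 = [[i * 4 + j for j in range(4)] for i in range(4)]
subs = split_parity(block4)
```

Each p×p block yields four (p/2)×(p/2) sub-blocks; for p = 8 each sub-block holds 16 of the 64 luma samples, matching the counts used later in the text.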
In an embodiment of the intra-frame prediction method based on adjacent pixel prediction of the present invention, the p×p block size is 8×8, and the prediction of the luma samples in each p×p block consists of the following steps in sequence: 1) divide each 16×16 macroblock, from left to right and top to bottom, into four 8×8 blocks, as shown in Figure 9, and divide the 64 luma samples of each 8×8 block (Figure 10) by index parity into an even-even, an even-odd, an odd-even, and an odd-odd sub-block, as shown in Figure 11; 2) select the first 8×8 block and denote its pixels a_ij, where i = 0, 1, …, 7 is the pixel row index and j = 0, 1, …, 7 the pixel column index; denote the 16 pixels of the adjacent row above it s_p, p = 0, 1, …, 15; denote the 8 pixels of the adjacent column to its left t_q, q = 0, 1, …, 7; and let f denote the pixel at the upper-left corner of the current 8×8 block; for an 8×8 edge block at the start of the image, the reconstructed neighboring luma samples are taken to be 128 by convention; 3) predict the luma samples of the even-even sub-block of the current 8×8 block:
I. Using the luma sample values of the pixels s_p (p = 0, 1, …, 15) and t_q (q = 0, 1, …, 7) of the already-coded macroblocks around the current 8×8 block, form the predicted value of each pixel of the current even-even sub-block under each of the following 9 prediction modes, as shown in Figure 12, where k = 0, 1, 2, …, 8 denotes the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, j = 0, 2, 4, 6 the pixel column coordinate, and the symbol ">>" a bitwise right shift:
a. Mode 0: Vertical prediction. This mode requires that s_p (p = 0, 2, 4, 6) be available. The predicted samples are generated as follows:
b. Mode 1: Horizontal prediction. This mode requires that t_q (q = 0, 2, 4, 6) be available. The predicted samples are generated as follows:
c. Mode 2: DC prediction. The predicted samples are generated as follows: if s_p and t_q (p, q = 0, 1, …, 7) are all available, all predicted samples take the common value:
d. Mode 3: Diagonal_Down_Left prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
e. Mode 4: Diagonal_Down_Right prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available; if f is unavailable, t_0 is used in its place. The predicted samples are generated as follows:
f. Mode 5: Vertical_Right prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
g. Mode 6: Horizontal_Down prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available; if f is unavailable, t_0 is used in its place. The predicted samples are generated as follows:
h. Mode 7: Vertical_Left prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available; if f is unavailable, t_0 is used in its place. The predicted samples are generated as follows:
i. Mode 8: Horizontal_Up prediction. This mode requires that t_q (q = 0, 1, …, 7) be available. The predicted samples are generated as follows:
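The patent's exact sample-generation formulas for these modes are given in its figures and are not reproduced above. As an illustration only, a minimal sketch of the three simplest modes for the even-even sub-block, assuming the plainest possible forms (copy the row above, copy the column on the left, and a single mean value), might look like this:

```python
# Illustrative sketch of the three simplest modes for the even-even sub-block
# of an 8x8 block. The forms below are assumptions for illustration only and
# do not reproduce the patent's actual formulas.

def predict_even_even(mode, s, t):
    """Predict the 4x4 even-even sub-block (i, j = 0, 2, 4, 6)."""
    rows, cols = (0, 2, 4, 6), (0, 2, 4, 6)
    if mode == 0:    # vertical: propagate the reconstructed row above (s_p)
        return [[s[j] for j in cols] for i in rows]
    if mode == 1:    # horizontal: propagate the left column (t_q)
        return [[t[i] for j in cols] for i in rows]
    if mode == 2:    # DC: one mean value for the whole sub-block, with rounding
        dc = (sum(s[j] for j in cols) + sum(t[i] for i in rows) + 4) >> 3
        return [[dc] * 4 for _ in rows]
    raise ValueError("only modes 0-2 are sketched here")

s = [100] * 16   # hypothetical row above the block (s_p)
t = [120] * 8    # hypothetical column to the left (t_q)
```

The `>> 3` in the DC branch is the bitwise right shift the text refers to, dividing the rounded sum of the eight neighbors by 8.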
II. Determine the optimal prediction mode of the even-even sub-block:
a. Obtain the prediction residual value Δk under prediction mode k from the following prediction residual formula, where a_ij denotes the original luma sample value and the corresponding value predicted under mode k is as above; k = 0, 1, 2, …, 8 denotes the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, and j = 0, 2, 4, 6 the pixel column coordinate;
b. Using the coding method in JVT, apply the DCT, quantization, and entropy coding to the prediction residual of each pixel and count the bits needed to code the current sub-block in the current mode; also, after the DCT and quantization, apply inverse quantization and the inverse DCT to the residuals and add back the predicted values to reconstruct the luma sample of each pixel of the sub-block, with k = 0, 1, 2, …, 8 the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, and j = 0, 2, 4, 6 the pixel column coordinate;
c. Using the method in JVT, compute the rate-distortion cost rdcost of the sub-block in the current prediction mode:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum over all pixels of the sub-block of the squared difference between the original luma sample and its predicted value;
d. Increase k by 1 and repeat steps a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of the modes and select the mode with the smallest rdcost as the current optimal prediction mode;
III. Take the predicted values under the optimal prediction mode as the final predicted values of the sub-block, with i = 0, 2, 4, 6 and j = 0, 2, 4, 6, and the reconstructed values under the optimal mode as its final reconstructed values, with i = 0, 2, 4, 6 and j = 0, 2, 4, 6; the reconstruction state of the current 8×8 block after this sub-block is reconstructed is shown in Figure 13; 4) predict the luma samples of the odd-odd sub-block of the current 8×8 block:
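Steps II.a to II.e above describe an exhaustive rate-distortion search over the candidate modes. A minimal sketch, with hypothetical bit counts standing in for the JVT entropy coder, might look like this:

```python
# Sketch of steps II.a-e: exhaustive rate-distortion mode selection.
# distortion is the sum of squared differences between original and predicted
# samples; the per-mode bit counts below are hypothetical stand-ins for a
# real JVT entropy coder.

LAMBDA = 68.539   # the constant given in the text

def best_mode(orig, predictions, rates):
    """Return (mode, rdcost) minimising distortion + LAMBDA * rate."""
    best = None
    for k, (pred, rate) in enumerate(zip(predictions, rates)):
        distortion = sum((o - p) ** 2
                         for ro, rp in zip(orig, pred)
                         for o, p in zip(ro, rp))
        rdcost = distortion + LAMBDA * rate
        if best is None or rdcost < best[1]:
            best = (k, rdcost)
    return best

orig = [[100, 102], [101, 103]]
preds = [[[100, 100], [100, 100]],   # hypothetical mode-0 prediction
         [[102, 102], [102, 102]]]   # hypothetical mode-1 prediction
rates = [10, 12]                     # hypothetical bit counts per mode
mode, cost = best_mode(orig, preds, rates)
```

Because lambda is large relative to typical per-sample distortions, the rate term dominates for sub-blocks this small; the same trade-off is what makes the 4×4 structure's 9-mode overhead expensive in the background discussion.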
I. First put the 16 final reconstructed values of the even-even sub-block back in their positions in the original 8×8 block, as shown in Figure 14; as Figure 14 shows, every pixel with odd-odd indices lies exactly midway between already-coded, reconstructed pixels with even-even indices. Then, using the 16 reconstructed luma samples of the even-even sub-block and the luma samples of the pixels s_p (p = 0, 1, …, 15) and t_q (q = 0, 1, …, 7) of the already-coded macroblocks around the current 8×8 block, predict the luma sample of each pixel of the odd-odd sub-block under each of the following 9 prediction modes, where k = 0, 1, 2, …, 8 denotes the prediction mode, i = 1, 3, 5, 7 the pixel row coordinate, j = 1, 3, 5, 7 the pixel column coordinate, the final reconstructed pixels of the even-even sub-block carry row coordinates m = 0, 2, 4, 6 and column coordinates n = 0, 2, 4, 6, and the symbol ">>" denotes a bitwise right shift:
a. Mode 0: Vertical prediction. This mode requires that s_p (p = 0, 1, …, 7) be available. The predicted samples are generated as follows:
b. Mode 1: Horizontal prediction. The predicted samples are generated as follows:
c. Mode 2: DC prediction. The predicted samples are generated as follows: if s_p and t_q (p, q = 0, 1, …, 7) are all available, all predicted samples take the common value:
d. Mode 3: Diagonal_Down_Left prediction. The predicted samples are generated as follows:
e. Mode 4: Diagonal_Down_Right prediction. The predicted samples are generated as follows:
f. Mode 5: Vertical_Right prediction. The predicted samples are generated as follows:
g. Mode 6: Horizontal_Down prediction. The predicted samples are generated as follows:
h. Mode 7: Vertical_Left prediction. The predicted samples are generated as follows:
i. Mode 8: Horizontal_Up prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
II. Determine the optimal prediction mode of the odd-odd sub-block:
a. Obtain the prediction residual value Δk under prediction mode k from the following prediction residual formula, where a_ij denotes the original luma sample value and the corresponding value predicted under mode k is as above; k = 0, 1, 2, …, 8 denotes the prediction mode, i = 1, 3, 5, 7 the pixel row coordinate, and j = 1, 3, 5, 7 the pixel column coordinate;
b. Using the coding method in JVT, apply the DCT, quantization, and entropy coding to the prediction residual of each pixel and count the bits needed to code the current sub-block in the current mode; also, after the DCT and quantization, apply inverse quantization and the inverse DCT to the residuals and add back the predicted values to reconstruct the luma sample of each pixel of the sub-block, with k = 0, 1, 2, …, 8 the prediction mode, i = 1, 3, 5, 7 the pixel row coordinate, and j = 1, 3, 5, 7 the pixel column coordinate;
c. Using the method in JVT, compute the rate-distortion cost rdcost of the sub-block in the current prediction mode:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum over all pixels of the sub-block of the squared difference between the original luma sample and its predicted value;
d. Increase k by 1 and repeat steps a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of the modes and select the mode with the smallest rdcost as the current optimal prediction mode;
III. Take the predicted values under the optimal prediction mode as the final predicted values of the sub-block, with i = 1, 3, 5, 7 and j = 1, 3, 5, 7, and the reconstructed values under the optimal mode as its final reconstructed values, with i = 1, 3, 5, 7 and j = 1, 3, 5, 7; 5) predict the luma samples of the even-odd sub-block of the current 8×8 block:
I. First put the 16 final reconstructed values of the even-even sub-block and the 16 final reconstructed values of the odd-odd sub-block back in their positions in the original 8×8 block, as shown in Figure 15. Then, using the luma samples of the pixels s_p (p = 0, 1, …, 15) and t_q (q = 0, 1, …, 7) of the already-coded macroblocks around the current 8×8 block, predict the luma sample of each pixel of the even-odd sub-block under each of the following 9 prediction modes, where k = 0, 1, 2, …, 8 denotes the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, j = 1, 3, 5, 7 the pixel column coordinate, the final reconstructed pixels of the even-even and odd-odd sub-blocks carry row coordinates m = 0, 1, …, 7 and column coordinates n = 0, 1, …, 7, and the symbol ">>" denotes a bitwise right shift:
a. Mode 0: Vertical prediction. This mode requires that s_p (p = 0, 1, …, 15) be available. The predicted samples are generated as follows:
b. Mode 1: Horizontal prediction. The predicted samples are generated as follows:
c. Mode 2: DC prediction. The predicted samples are generated as follows: if s_p and t_q (p, q = 0, 1, …, 7) are all available, all predicted samples take the common value:
d. Mode 3: Diagonal_Down_Left prediction. This mode requires that s_p (p = 0, 1, …, 15) be available. The predicted samples are generated as follows:
e. Mode 4: Diagonal_Down_Right prediction. This mode requires that s_p (p = 0, 1, …, 15) be available. The predicted samples are generated as follows:
f. Mode 5: Vertical_Right prediction. This mode requires that s_p (p = 0, 1, …, 15) be available. The predicted samples are generated as follows:
g. Mode 6: Horizontal_Down prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available; if f is unavailable, t_0 is used in its place. The predicted samples are generated as follows:
h. Mode 7: Vertical_Left prediction. This mode requires that s_p (p = 0, 1, …, 15) be available. The predicted samples are generated as follows:
i. Mode 8: Horizontal_Up prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
II. Determine the optimal prediction mode of the even-odd sub-block:
a. Obtain the prediction residual value Δk under prediction mode k from the following prediction residual formula, where a_ij denotes the original luma sample value and the corresponding value predicted under mode k is as above; k = 0, 1, 2, …, 8 denotes the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, and j = 1, 3, 5, 7 the pixel column coordinate;
b. Using the coding method in JVT, apply the DCT, quantization, and entropy coding to the prediction residual of each pixel and count the bits needed to code the current sub-block in the current mode; also, after the DCT and quantization, apply inverse quantization and the inverse DCT to the residuals and add back the predicted values to reconstruct the luma sample of each pixel of the sub-block, with k = 0, 1, 2, …, 8 the prediction mode, i = 0, 2, 4, 6 the pixel row coordinate, and j = 1, 3, 5, 7 the pixel column coordinate;
c. Using the method in JVT, compute the rate-distortion cost rdcost of the sub-block in the current prediction mode:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum over all pixels of the sub-block of the squared difference between the original luma sample and its predicted value;
d. Increase k by 1 and repeat steps a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of the modes and select the mode with the smallest rdcost as the current optimal prediction mode;
III. Take the predicted values under the optimal prediction mode as the final predicted values of the sub-block, with i = 0, 2, 4, 6 and j = 1, 3, 5, 7, and the reconstructed values under the optimal mode as its final reconstructed values, with i = 0, 2, 4, 6 and j = 1, 3, 5, 7; 6) predict the luma samples of the odd-even sub-block of the current 8×8 block:
I. First put the 16 final reconstructed values of the even-even sub-block, the 16 final reconstructed values of the odd-odd sub-block, and the 16 final reconstructed values of the even-odd sub-block back in their positions in the original 8×8 block, as shown in Figure 16. Then, using the luma samples of the pixels s_p (p = 0, 1, …, 15) and t_q (q = 0, 1, …, 7) of the already-coded macroblocks around the current 8×8 block, predict the luma sample of each pixel of the odd-even sub-block under each of the following 9 prediction modes, where k = 0, 1, 2, …, 8 denotes the prediction mode, i = 1, 3, 5, 7 the pixel row coordinate, j = 0, 2, 4, 6 the pixel column coordinate, the final reconstructed pixels of the previously reconstructed sub-blocks carry row coordinates m = 0, 1, …, 7 and column coordinates n = 0, 1, …, 7, and the symbol ">>" denotes a bitwise right shift:
a. Mode 0: Vertical prediction. The predicted samples are generated as follows:
b. Mode 1: Horizontal prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
c. Mode 2: DC prediction. The predicted samples are generated as follows: if s_p and t_q (p, q = 0, 1, …, 7) are all available, all predicted samples take the common value:
d. Mode 3: Diagonal_Down_Left prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
e. Mode 4: Diagonal_Down_Right prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
f. Mode 5: Vertical_Right prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available. The predicted samples are generated as follows:
g. Mode 6: Horizontal_Down prediction. The predicted samples are generated as follows:
h. Mode 7: Vertical_Left prediction. This mode requires that s_p and t_q (p = 0, 1, …, 15; q = 0, 1, …, 7) all be available; if f is unavailable, t_0 is used in its place. The predicted samples are generated as follows:
i. Mode 8: Horizontal_Up prediction. The predicted samples are generated as follows:
II.确定奇偶子块的最优预测模式:II. Determine the optimal prediction mode of the parity sub-block:
a.由下面的预测残差公式,得到预测模式k下的预测残差值Δk:
这里,aij表示原始像素亮度样值, 表示模式k下像素预测值,k=0,1,2,…,8,表示预测模式;i=1,3,5,7,表示像素行坐标;j=0,2,4,6,表示像素列坐标;Here, a ij represents the original pixel brightness sample value, Indicates the pixel prediction value in mode k, k=0, 1, 2, ..., 8, indicating the prediction mode; i = 1, 3, 5, 7, indicating the pixel row coordinates; j = 0, 2, 4, 6, indicating pixel column coordinates;
b.采用JVT中的编码方法,对各个像素的预测残差做DCT变换、量化和熵编码,计算当前子块在当前模式下的编码比特数;并对各个像素的预测残差做DCT变换和量化后,再进行反量化和反DCT变换,然后加上预测值 重构子块中的各个像素点的亮度样值,记为kij,k=0,1,2,…,8,表示预测模式;i=1,3,5,7,表示像素行坐标;j=0,2,4,6,表示像素列坐标;b. Using the encoding method in JVT, DCT transformation, quantization and entropy coding are performed on the prediction residual of each pixel, and the number of encoded bits of the current sub-block in the current mode is calculated; and DCT transformation and summing are performed on the prediction residual of each pixel After quantization, perform inverse quantization and inverse DCT transformation, and then add the predicted value The luminance samples of each pixel in the reconstructed sub-block are recorded as kij , k=0, 1, 2, ..., 8, indicating the prediction mode; i = 1, 3, 5, 7, indicating the pixel row coordinates; j=0, 2, 4, 6, indicating the pixel column coordinates;
c. Compute the rate-distortion cost rdcost of the sub-block in the current prediction mode by the JVT method:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum of the squared differences between the original luminance samples and the predicted values over all pixels of the current sub-block;
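The cost formula and the mode search in steps d and e below can be sketched as follows. `rd_cost` and `best_mode` are illustrative names; the per-mode predictions and bit counts are assumed to come from the JVT transform/quantization/entropy pipeline described in step b.

```python
LAMBDA = 68.539  # constant given in the text

def rd_cost(original, predicted, rate):
    """rdcost = distortion + lambda * rate, where distortion is the
    sum of squared differences between original and predicted samples."""
    distortion = sum((a - p) ** 2 for a, p in zip(original, predicted))
    return distortion + LAMBDA * rate

def best_mode(original, predictions, rates):
    """Try all 9 modes (k = 0..8) and keep the one with minimal rdcost."""
    costs = [rd_cost(original, predictions[k], rates[k]) for k in range(9)]
    return min(range(9), key=costs.__getitem__)
```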
d. Increase k by 1 and repeat steps a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of each mode and select the mode with the smallest rdcost as the current optimal prediction mode;
III. Take the predicted values under the optimal prediction mode as the final predicted values of the sub-block, denoted â_ij, i = 1, 3, 5, 7, j = 0, 2, 4, 6; take the reconstructed values under the optimal prediction mode as the final reconstructed values of the sub-block, denoted ā_ij, i = 1, 3, 5, 7, j = 0, 2, 4, 6.
At this point all pixels of the 8×8 block have been predicted; the predicted values, denoted â_ij, are shown in Figure 17, and the reconstructed values, denoted ā_ij, in Figure 18, with i = 0, 1, …, 7 the pixel row coordinate and j = 0, 1, …, 7 the pixel column coordinate.
7) Select another 8×8 block and repeat steps 3) through 6) until all four 8×8 blocks are completed.
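The parity decomposition that drives this whole procedure — splitting an 8×8 block into even-even, odd-odd, even-odd, and odd-even 4×4 sub-blocks by the parity of the pixel row and column indices — can be sketched as:

```python
import numpy as np

def parity_subblocks(block):
    """Split an 8x8 block into the four 4x4 parity sub-blocks used by
    the method, keyed by (row parity, column parity)."""
    b = np.asarray(block)
    return {
        "even_even": b[0::2, 0::2],  # i = 0,2,4,6; j = 0,2,4,6
        "odd_odd":   b[1::2, 1::2],  # i = 1,3,5,7; j = 1,3,5,7
        "even_odd":  b[0::2, 1::2],  # i = 0,2,4,6; j = 1,3,5,7
        "odd_even":  b[1::2, 0::2],  # i = 1,3,5,7; j = 0,2,4,6
    }
```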
The intra-frame prediction method based on adjacent pixel prediction of the present invention has a fine prediction structure and small prediction residuals; it improves the coding quality of the image and saves coding bit overhead.
The system block diagram of the present invention and the processing block diagram of a block to be compressed in an intra-frame image are shown in Figures 1 and 2. In this content-based video coding method, the original video sequence captured by a video camera is taken as input, converted into video sequence data by a video capture card, and fed into a computer, where it is processed and computed using the video coding techniques provided by JVT.
The process of the present invention is shown in Figures 5, 6, 7, and 8.
Description of the Drawings
Figure 1: System block diagram. 1. captured original video image sequence; 2. computer; 3. intra-frame image compression; 4. inter-frame image compression; 5. compressed coded sequence; 6. data transmission system.
Figure 2: Processing block diagram of a block to be compressed in an intra-frame image. 7. image block to be processed; 8. prediction module for the image block; 9. transform; 10. quantizer; 11. entropy coding; 12. compressed bitstream.
Figure 3: 4×4 block intra prediction structure in the existing JVT.
Figure 4: 16×16 block intra prediction structure in the existing JVT.
Figure 5: Main flowchart of the present invention.
Figure 6: Flowchart of the macroblock prediction module of the present invention.
Figure 7: Flowchart for computing the predicted values of any sub-block in Figure 6.
Figure 8: Flowchart for coding and reconstructing the current sub-block under the current prediction mode in Figure 7.
Figure 9: Order in which each 8×8 block of a 16×16 macroblock is predicted in the present invention.
Figure 10: The basic 8×8 block of the present invention with pixel subscripts labeled.
Figure 11: Parity sub-blocks into which the present invention divides an 8×8 block. A: even-even sub-block; B: odd-odd sub-block; C: even-odd sub-block; D: odd-even sub-block.
Figure 12: Positions of the pixels used to predict the current even-even sub-block and of the pixels of the even-even sub-block.
Figure 13: Positions of the reconstructed pixels of the even-even sub-block.
Figure 14: Positions of the reconstructed even-even sub-block pixels and of the odd-odd sub-block pixels to be predicted.
Figure 15: Positions of the reconstructed even-even and odd-odd sub-block pixels and of the even-odd sub-block pixels to be predicted.
Figure 16: Positions of the reconstructed even-even, odd-odd, and even-odd sub-block pixels and of the odd-even sub-block pixels to be predicted.
Figure 17: Predicted values of the 8×8 block.
Figure 18: Reconstructed values of the 8×8 block.
Figure 19: Pixel values, with positions, needed to reconstruct the even-even sub-block.
Figure 20: Pixel values, with positions, of the 8×8 block after the even-even sub-block is reconstructed.
Figure 21: Pixel values, with positions, after the even-even and odd-odd sub-blocks are reconstructed.
Figure 22: Pixel values, with positions, after the even-even, odd-odd, and even-odd sub-blocks are reconstructed.
Figure 23: All pixel values, with positions, after the entire 8×8 block is reconstructed.
Figure 24: All pixel values, with positions, after prediction of the entire 8×8 block is complete.
Figure 25: Luminance PSNR versus bit-rate curves for the present invention and the JVT prediction structure in the example.
Detailed Description of the Embodiments
Below we work through an example on an 8×8 block taken arbitrarily from an intra-frame image. The already-coded pixel luminance samples surrounding the current 8×8 block are shown in Figure 19; they are used to predict the luminance sample of each pixel in the current 8×8 block.
1. Predict the luminance sample of each pixel of the even-even sub-block from the already-reconstructed luminance samples surrounding the 8×8 block. For an 8×8 edge block at the start of the image, the surrounding pixel luminance samples are taken to be 128 by convention, and those values are used to predict the edge 8×8 block.
a. Using the method of the present invention, compute the pixel predicted values â_ij^(k) of the even-even sub-block under each of the 9 modes, where k = 0, 1, 2, …, 8 indexes the prediction mode, i = 0, 2, 4, 6 is the pixel row coordinate, and j = 0, 2, 4, 6 is the pixel column coordinate;
b. Compute the prediction residual Δ_k under prediction mode k from the prediction-residual formula, where a_ij denotes the original luminance sample and â_ij^(k) the value predicted under mode k; i = 0, 2, 4, 6; j = 0, 2, 4, 6; k = 0, 1, …, 8. Using the existing JVT coding techniques, apply the DCT, quantization, and entropy coding to each pixel's prediction residual to obtain the bits needed to code the current sub-block; after the DCT and quantization, also apply inverse quantization and the inverse DCT and add back the predicted values, reconstructing the luminance sample of each pixel of the sub-block under the current mode, denoted ā_ij^(k);
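The residual round trip of step b — transform, quantize, dequantize, inverse transform, add the prediction back — can be sketched as follows. This uses an orthonormal floating-point DCT-II and a uniform quantizer of step `qstep` as stand-ins; the actual JVT integer transform and quantization tables are not reproduced in the text, so both are assumptions.

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II basis matrix (stand-in for the JVT integer transform)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def encode_reconstruct(original, predicted, qstep=8):
    """Round-trip sketch of step b: residual -> DCT -> quantize ->
    dequantize -> inverse DCT -> add the prediction back."""
    C = dct_matrix(original.shape[0])
    residual = original.astype(float) - predicted
    coeffs = C @ residual @ C.T             # forward 2-D DCT
    levels = np.round(coeffs / qstep)       # quantization
    recon_res = C.T @ (levels * qstep) @ C  # dequantize + inverse DCT
    return np.clip(predicted + recon_res, 0, 255), levels
```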
c. Compute the rate-distortion cost rdcost of the sub-block in the current prediction mode using the existing JVT technique:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum of the squared differences between the original luminance samples and the predicted values over all pixels of the current sub-block;
d. Increase k by 1 and repeat a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of each mode and select the mode with the smallest rdcost as the current optimal prediction mode. In this example the computed optimal prediction mode of the sub-block is 5: the rdcost of mode 0 is 747.7, of mode 1 is 498.1, of mode 2 is 763.7, of mode 3 is 480.1, of mode 4 is 824.7, of mode 5 is 472.7, of mode 6 is 777.7, of mode 7 is 723.7, and of mode 8 is 685.7, so the optimal mode for this sub-block is 5. The values ā_ij^(5) reconstructed under optimal mode 5 become the final reconstructed values of the sub-block, denoted ā_ij, and the values â_ij^(5) predicted under optimal mode 5, denoted â_ij, become the final predicted values of the sub-block;
The reconstruction state of the current 8×8 block after the even-even sub-block is reconstructed is shown in Figure 20.
2. Predict the luminance sample of each pixel of the odd-odd sub-block from the already-reconstructed pixel luminance samples surrounding the 8×8 block and the reconstructed pixel luminance samples of the even-even sub-block;
a. Using the method of the present invention, compute the pixel predicted values â_ij^(k) of the odd-odd sub-block under each of the 9 modes, where k = 0, 1, …, 8 indexes the prediction mode, i = 1, 3, 5, 7 is the pixel row coordinate, and j = 1, 3, 5, 7 is the pixel column coordinate;
b. Compute the prediction residual Δ_k under prediction mode k from the prediction-residual formula, where a_ij denotes the original luminance sample and â_ij^(k) the value predicted under mode k; i = 1, 3, 5, 7; j = 1, 3, 5, 7; k = 0, 1, …, 8. Using the existing JVT coding techniques, apply the DCT, quantization, and entropy coding to each pixel's prediction residual to obtain the bits needed to code the current sub-block; after the DCT and quantization, also apply inverse quantization and the inverse DCT and add back the predicted values, reconstructing the luminance sample of each pixel of the sub-block under the current mode, denoted ā_ij^(k);
c. Compute the rate-distortion cost rdcost of the sub-block in the current prediction mode using the existing JVT technique:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum of the squared differences between the original luminance samples and the predicted values over all pixels of the current sub-block;
d. Increase k by 1 and repeat a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of each mode and select the mode with the smallest rdcost as the current optimal prediction mode. In this example the computed optimal prediction mode of the sub-block is 1: the rdcost of mode 0 is 891.3, of mode 1 is 216.1, of mode 2 is 703.3, of mode 3 is 430.7, of mode 4 is 410.7, of mode 5 is 428.7, of mode 6 is 576.7, of mode 7 is 596.7, and of mode 8 is 518.7, so the optimal mode for this sub-block is 1. The values ā_ij^(1) reconstructed under optimal mode 1 become the final reconstructed values of the sub-block, denoted ā_ij, and the values â_ij^(1) predicted under optimal mode 1, denoted â_ij, become the final predicted values of the sub-block;
The reconstruction state of the current 8×8 block after the odd-odd sub-block is reconstructed is shown in Figure 21.
3. Predict the luminance sample of each pixel of the even-odd sub-block from the already-coded pixel luminance samples surrounding the 8×8 block and the reconstructed pixel luminance samples of the even-even and odd-odd sub-blocks;
a. Compute the pixel predicted values â_ij^(k) of the even-odd sub-block under each of the 9 modes, where k = 0, 1, …, 8 indexes the prediction mode, i = 0, 2, 4, 6 is the pixel row coordinate, and j = 1, 3, 5, 7 is the pixel column coordinate;
b. Compute the prediction residual Δ_k under prediction mode k from the prediction-residual formula, where a_ij denotes the original luminance sample and â_ij^(k) the value predicted under mode k; i = 0, 2, 4, 6; j = 1, 3, 5, 7; k = 0, 1, …, 8. Using the existing JVT coding techniques, apply the DCT, quantization, and entropy coding to each pixel's prediction residual to obtain the bits needed to code the current sub-block; after the DCT and quantization, also apply inverse quantization and the inverse DCT and add back the predicted values, reconstructing the luminance sample of each pixel of the sub-block under the current mode, denoted ā_ij^(k);
c. Compute the rate-distortion cost rdcost of the sub-block in the current prediction mode using the existing JVT technique:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum of the squared differences between the original luminance samples and the predicted values over all pixels of the current sub-block;
d. Increase k by 1 and repeat a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of each mode and select the mode with the smallest rdcost as the current optimal prediction mode. In this example the computed optimal prediction mode of the sub-block is 1: the rdcost of mode 0 is 502.7, of mode 1 is 310.1, of mode 2 is 884.3, of mode 3 is 528.7, of mode 4 is 487.7, of mode 5 is 494.7, of mode 6 is 533.7, of mode 7 is 589.7, and of mode 8 is 440.7, so the optimal mode for this sub-block is 1. The values ā_ij^(1) reconstructed under optimal mode 1 become the final reconstructed values of the sub-block, denoted ā_ij, and the values â_ij^(1) predicted under optimal mode 1, denoted â_ij, become the final predicted values of the sub-block;
The reconstruction state of the current 8×8 block after the even-odd sub-block is reconstructed is shown in Figure 22.
4. Predict the luminance sample of each pixel of the odd-even sub-block from the already-coded pixel luminance samples surrounding the 8×8 block and the reconstructed pixel luminance samples of the even-even, odd-odd, and even-odd sub-blocks;
a. Using the technique of the present invention, compute the pixel predicted values â_ij^(k) of the odd-even sub-block under each of the 9 modes, where k = 0, 1, …, 8 indexes the prediction mode, i = 1, 3, 5, 7 is the pixel row coordinate, and j = 0, 2, 4, 6 is the pixel column coordinate;
b. Compute the prediction residual Δ_k under prediction mode k from the prediction-residual formula, where a_ij denotes the original luminance sample and â_ij^(k) the value predicted under mode k; i = 1, 3, 5, 7; j = 0, 2, 4, 6; k = 0, 1, …, 8. Using the existing JVT coding techniques, apply the DCT, quantization, and entropy coding to each pixel's prediction residual to obtain the bits needed to code the current sub-block; after the DCT and quantization, also apply inverse quantization and the inverse DCT and add back the predicted values, reconstructing the luminance sample of each pixel of the sub-block under the current mode, denoted ā_ij^(k);
c. Compute the rate-distortion cost rdcost of the sub-block in the current prediction mode using the existing JVT technique:
rdcost = distortion + lambda × rate;
where lambda is the constant 68.539, rate is the number of bits used to code the current sub-block in the current mode, and distortion is the sum of the squared differences between the original luminance samples and the predicted values over all pixels of the current sub-block;
d. Increase k by 1 and repeat a, b, and c until every prediction mode of this sub-block has been tried;
e. Compare the rdcost of each mode and select the mode with the smallest rdcost as the current optimal prediction mode. In this example the computed optimal prediction mode of the sub-block is 1: the rdcost of mode 0 is 527.7, of mode 1 is 305.1, of mode 2 is 922.3, of mode 3 is 569.7, of mode 4 is 504.7, of mode 5 is 485.7, of mode 6 is 715.7, of mode 7 is 543.7, and of mode 8 is 592.7, so the optimal mode for this sub-block is 1. The values ā_ij^(1) reconstructed under optimal mode 1 become the final reconstructed values of the sub-block, denoted ā_ij, and the values â_ij^(1) predicted under optimal mode 1, denoted â_ij, become the final predicted values of the sub-block. The reconstruction state of the current 8×8 block after the odd-even sub-block is reconstructed is shown in Figure 23;
At this point the prediction of the 8×8 block is complete; its final predicted values are shown in Figure 24.
For an image of size 176×144 at a frame rate of 30 Hz, the luminance PSNR and bit rate were computed at different quantization values with both the present invention and the JVT prediction structure, and the luminance PSNR versus bit-rate curves were plotted (Figure 25). As Figure 25 shows, the curve given by the prediction structure proposed by the present invention lies above the curve given by the JVT structure. This demonstrates the advantage of the present invention: at the same bit cost it provides higher image quality, and at the same image quality it reduces the bit overhead.
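The luminance signal-to-noise figure compared in Figure 25 is the standard PSNR for 8-bit samples; a minimal sketch follows (the 176×144 frame size comes from the text, everything else is the usual definition):

```python
import numpy as np

def luma_psnr(original, reconstructed):
    """Peak signal-to-noise ratio of the luminance plane, in dB,
    for 8-bit samples (peak value 255)."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```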
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 03136605 CN1204753C (en) | 2003-05-19 | 2003-05-19 | Interframe predicting method based on adjacent pixel prediction |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 03136605 CN1204753C (en) | 2003-05-19 | 2003-05-19 | Interframe predicting method based on adjacent pixel prediction |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN1455600A true CN1455600A (en) | 2003-11-12 |
| CN1204753C CN1204753C (en) | 2005-06-01 |
Family
ID=29260522
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 03136605 Expired - Fee Related CN1204753C (en) | 2003-05-19 | 2003-05-19 | Interframe predicting method based on adjacent pixel prediction |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN1204753C (en) |
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100382567C (en) * | 2006-06-06 | 2008-04-16 | 王程 | Method for rebuilding super resolution image from reduced quality image caused by interlaced sampling |
| CN100413342C (en) * | 2005-03-09 | 2008-08-20 | 浙江大学 | Method and device for encoding and decoding intra-frame prediction mode for video or image compression |
| CN100426868C (en) * | 2005-01-25 | 2008-10-15 | 中国科学院计算技术研究所 | Frame image brightness predictive coding method |
| CN100442857C (en) * | 2005-10-12 | 2008-12-10 | 华为技术有限公司 | An enhancement layer intra prediction method and codec device |
| WO2009015553A1 (en) * | 2007-07-31 | 2009-02-05 | Peking University Founder Group Co., Ltd. | A method and device selecting intra-frame predictive coding best mode for video coding |
| CN100466740C (en) * | 2005-11-11 | 2009-03-04 | 北京微视讯通数字技术有限公司 | Difference quantization method and device for video coding processing |
| CN100536573C (en) * | 2004-01-16 | 2009-09-02 | 北京工业大学 | Inframe prediction method used for video frequency coding |
| CN1589028B (en) * | 2004-07-29 | 2010-05-05 | 展讯通信(上海)有限公司 | Predicting device and method based on pixel flowing frame |
| CN101385348B (en) * | 2006-01-09 | 2011-01-12 | Lg电子株式会社 | Inter-Layer Prediction Method for Video Signal |
| CN101385356B (en) * | 2006-02-17 | 2011-01-19 | 汤姆森许可贸易公司 | Process for coding images using intra prediction mode |
| CN101039420B (en) * | 2007-03-30 | 2011-02-02 | 孟智平 | Streaming format-based image transmission method, prediction algorithm and display method |
| CN101502124B (en) * | 2006-07-28 | 2011-02-23 | 株式会社东芝 | Image coding and decoding method and device |
| CN102415098A (en) * | 2009-04-24 | 2012-04-11 | 索尼公司 | Image processing apparatus and method |
| CN101095360B (en) * | 2004-12-30 | 2012-04-25 | 英特尔公司 | Method and device for encoding digital video |
| US8264968B2 (en) | 2006-01-09 | 2012-09-11 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| CN102857752A (en) * | 2011-07-01 | 2013-01-02 | 华为技术有限公司 | Pixel predicting method and pixel predicting device |
| CN103609117B (en) * | 2011-06-16 | 2017-07-04 | 飞思卡尔半导体公司 | Method and device for encoding and decoding images |
| CN107404650A (en) * | 2017-07-25 | 2017-11-28 | 哈尔滨工业大学 | Pixel-level three-dimensional intra-frame prediction method based on adaptive model selection |
| CN109474825A (en) * | 2018-10-18 | 2019-03-15 | 北京大学 | A pulse sequence compression method and system |
| CN116366845A (en) * | 2018-03-09 | 2023-06-30 | 韩国电子通信研究院 | Image encoding/decoding method and device using sample filtering |
- 2003-05-19: CN 03136605 patent/CN1204753C/en, not active (Expired - Fee Related)
Cited By (36)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100536573C (en) * | 2004-01-16 | 2009-09-02 | 北京工业大学 | Inframe prediction method used for video frequency coding |
| CN1589028B (en) * | 2004-07-29 | 2010-05-05 | 展讯通信(上海)有限公司 | Predicting device and method based on pixel flowing frame |
| CN101095360B (en) * | 2004-12-30 | 2012-04-25 | 英特尔公司 | Method and device for encoding digital video |
| CN100426868C (en) * | 2005-01-25 | 2008-10-15 | 中国科学院计算技术研究所 | Frame image brightness predictive coding method |
| CN100413342C (en) * | 2005-03-09 | 2008-08-20 | 浙江大学 | Method and device for encoding and decoding intra-frame prediction mode for video or image compression |
| CN100442857C (en) * | 2005-10-12 | 2008-12-10 | 华为技术有限公司 | An enhancement layer intra prediction method and codec device |
| CN100466740C (en) * | 2005-11-11 | 2009-03-04 | 北京微视讯通数字技术有限公司 | Difference quantization method and device for video coding processing |
| CN101385348B (en) * | 2006-01-09 | 2011-01-12 | Lg电子株式会社 | Inter-Layer Prediction Method for Video Signal |
| US8451899B2 (en) | 2006-01-09 | 2013-05-28 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8792554B2 (en) | 2006-01-09 | 2014-07-29 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8687688B2 (en) | 2006-01-09 | 2014-04-01 | Lg Electronics, Inc. | Inter-layer prediction method for video signal |
| US8619872B2 (en) | 2006-01-09 | 2013-12-31 | Lg Electronics, Inc. | Inter-layer prediction method for video signal |
| US8494060B2 (en) | 2006-01-09 | 2013-07-23 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US9497453B2 (en) | 2006-01-09 | 2016-11-15 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8264968B2 (en) | 2006-01-09 | 2012-09-11 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8345755B2 (en) | 2006-01-09 | 2013-01-01 | Lg Electronics, Inc. | Inter-layer prediction method for video signal |
| US8494042B2 (en) | 2006-01-09 | 2013-07-23 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8457201B2 (en) | 2006-01-09 | 2013-06-04 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| US8401091B2 (en) | 2006-01-09 | 2013-03-19 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
| CN101385356B (en) * | 2006-02-17 | 2011-01-19 | 汤姆森许可贸易公司 | Process for coding images using intra prediction mode |
| CN100382567C (en) * | 2006-06-06 | 2008-04-16 | 王程 | Method for rebuilding super resolution image from reduced quality image caused by interlaced sampling |
| CN101502124B (en) * | 2006-07-28 | 2011-02-23 | 株式会社东芝 | Image coding and decoding method and device |
| CN101039420B (en) * | 2007-03-30 | 2011-02-02 | 孟智平 | Streaming format-based image transmission method, prediction algorithm and display method |
| US8406286B2 (en) | 2007-07-31 | 2013-03-26 | Peking University Founder Group Co., Ltd. | Method and device for selecting best mode of intra predictive coding for video coding |
| WO2009015553A1 (en) * | 2007-07-31 | 2009-02-05 | Peking University Founder Group Co., Ltd. | A method and device selecting intra-frame predictive coding best mode for video coding |
| CN102415098A (en) * | 2009-04-24 | 2012-04-11 | 索尼公司 | Image processing apparatus and method |
| CN102415098B (en) * | 2009-04-24 | 2014-11-26 | 索尼公司 | Image processing apparatus and method |
| CN103609117B (en) * | 2011-06-16 | 2017-07-04 | 飞思卡尔半导体公司 | Method and device for encoding and decoding images |
| CN102857752B (en) * | 2011-07-01 | 2016-03-30 | 华为技术有限公司 | A kind of pixel prediction method and apparatus |
| CN102857752A (en) * | 2011-07-01 | 2013-01-02 | 华为技术有限公司 | Pixel predicting method and pixel predicting device |
| WO2013004163A1 (en) * | 2011-07-01 | 2013-01-10 | 华为技术有限公司 | Pixel prediction method and device |
| CN107404650A (en) * | 2017-07-25 | 2017-11-28 | 哈尔滨工业大学 | Pixel-level three-dimensional intra-frame prediction method based on adaptive model selection |
| CN107404650B (en) * | 2017-07-25 | 2020-04-07 | 哈尔滨工业大学 | Pixel-level three-way intra-frame prediction method based on self-adaptive mode selection |
| CN116366845A (en) * | 2018-03-09 | 2023-06-30 | 韩国电子通信研究院 | Image encoding/decoding method and device using sample filtering |
| CN109474825A (en) * | 2018-10-18 | 2019-03-15 | 北京大学 | A pulse sequence compression method and system |
| CN109474825B (en) * | 2018-10-18 | 2020-07-10 | 北京大学 | A pulse sequence compression method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN1204753C (en) | 2005-06-01 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| C17 | Cessation of patent right | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20050601; Termination date: 20130519 | |