CN111107346A - Prediction method in bandwidth compression - Google Patents

Info

Publication number: CN111107346A
Application number: CN201811260526.8A
Authority: CN (China)
Prior art keywords: prediction, pixel, residual, current pixel, component
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventors: 田林海 (Tian Linhai), 李雯 (Li Wen), 岳庆东 (Yue Qingdong)
Current assignee: Xian Keruisheng Innovative Technology Co Ltd / Xian Cresun Innovation Technology Co Ltd
Original assignee: Xian Keruisheng Innovative Technology Co Ltd
Application filed by Xian Keruisheng Innovative Technology Co Ltd; priority to CN201811260526.8A; published as CN111107346A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182: the unit being a pixel
    • H04N19/50: using predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a prediction method in bandwidth compression, which comprises the following steps: selecting an MB to be predicted with the size of m × n; acquiring a first prediction residual of the current pixel component of the MB to be predicted according to a first prediction mode; obtaining a plurality of prediction residuals over a plurality of prediction search windows of the MB to be predicted according to a second prediction mode, and obtaining a second prediction residual from those prediction residuals; acquiring a first residual absolute value sum from the first prediction residual and a second residual absolute value sum from the second prediction residual; and comparing the first and second residual absolute value sums to select the final prediction mode of the MB to be predicted. Through this prediction selection algorithm, the optimal prediction method can be selected and a suitable reference pixel found for complex textures in the image, so that the minimum prediction residual is obtained, the theoretical limit entropy is further reduced, and the prediction effect is optimized for images with complex texture.

Description

Prediction method in bandwidth compression
Technical Field
The invention belongs to the technical field of multimedia, and particularly relates to a prediction method in bandwidth compression.
Background
With the popularization of network applications, multimedia applications have become widespread. Video, an important component of multimedia services, has become one of the main carriers of information dissemination, and demand for video quality keeps growing. Image resolution, one of the important characteristics of video quality, has moved from 720p and 1080p to the 4K resolution currently prevailing in the market, and the corresponding video compression standard has likewise moved from H.264 to H.265. For a video processing chip, this multiplied resolution not only greatly increases chip area cost but also puts great pressure on bus bandwidth and power consumption.
To overcome this problem, bandwidth compression techniques applied within a chip have been proposed. Unlike port-class compression (e.g., H.265), the goal of in-chip bandwidth compression is to increase the compression factor as much as possible and reduce DDR usage at a small logic area cost. In-chip compression divides into lossy compression and lossless compression: lossy compression is widely adopted by commercial-grade video processing chips, for example in surveillance and television; lossless compression is used more in military-grade and aerospace-grade video processing chips with strict image-quality requirements. Currently, bandwidth compression mainly consists of 4 parts: a prediction module, a quantization module, a rate control module, and an entropy coding module. The quantization and rate control modules are specific to lossy compression, while the prediction module is a key module: it reduces spatial redundancy by exploiting the correlation among image data, ultimately minimizing the theoretical entropy of the image data.
Existing prediction algorithms mainly divide into 2 types: texture-related prediction and pixel-value-related prediction. However, when facing artificial textures within complex image textures, the prior art often cannot guarantee finding the most suitable reference pixel, and thus cannot obtain the minimum prediction residual needed to reduce the theoretical limit entropy.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a prediction method in bandwidth compression. The technical problem to be solved by the invention is realized by the following technical scheme:
the embodiment of the invention provides a prediction method in bandwidth compression, which comprises the following steps:
s1, selecting an MB to be predicted with the size of m × n, wherein m and n are natural numbers larger than zero;
s2, acquiring a first prediction residual of the current pixel component of the MB to be predicted according to a first prediction mode;
s3, obtaining a plurality of prediction residuals of a plurality of prediction search windows of the MB to be predicted according to a second prediction mode, and obtaining a second prediction residual according to the plurality of prediction residuals of the plurality of prediction search windows;
s4, acquiring a first residual absolute value sum according to the first prediction residual, and acquiring a second residual absolute value sum according to the second prediction residual;
s5, comparing the first and second residual absolute value sums to select a final prediction mode of the MB to be predicted.
In one embodiment of the present invention, S2 includes:
s21, determining a plurality of pixel components of the current pixel of the MB to be predicted;
s22, obtaining the gradient value of the texture direction of the current pixel component;
s23, determining a reference value of the current pixel component according to the texture direction gradient value;
and S24, determining a first prediction residual of the current pixel component through the reference value.
In one embodiment of the present invention, S22 includes:
s221, determining N texture direction gradient values of the current pixel component through the surrounding components of the current pixel component.
In one embodiment of the present invention, S23 includes:
s231, obtaining a first weighted gradient value through the texture direction gradient value;
s232, acquiring a second weighted gradient value through the first weighted gradient value;
s233, obtaining the reference direction of the current pixel component through the second weighted gradient value;
and S234, acquiring a reference value of the current pixel component according to the reference direction of the current pixel component.
In one embodiment of the present invention, the reference value and the first prediction residual in S24 satisfy the following formula:
RES=Curcpt-Ref
wherein RES is the first prediction residual, Curcpt is the pixel value of the current pixel component, and Ref is the reference value.
In one embodiment of the present invention, S3 includes:
s31, determining the plurality of prediction search windows in the MB to be predicted; wherein the prediction search window comprises a current pixel and a plurality of encoded reconstructed pixels;
s32, calculating a plurality of prediction residuals of the current pixel component within a plurality of the prediction search windows;
and S33, determining the second prediction residual according to the plurality of prediction residuals.
In one embodiment of the present invention, the plurality of predicted search windows in S31 includes: a first predictive search window, a second predictive search window, and a third predictive search window; wherein the first predictive search window, the second predictive search window, and the third predictive search window are respectively any one of a horizontal bar-shaped predictive search window, a vertical bar-shaped predictive search window, or a rectangular predictive search window.
In one embodiment of the present invention, S32 includes:
s321, calculating a component difference degree weight of each pixel component of the current pixel relative to a pixel component of the reconstructed pixel in a current prediction search window;
s322, calculating the component position weight of each pixel component of the current pixel relative to the pixel component of the reconstructed pixel in the current prediction search window;
s323, calculating the sub-weights of the plurality of reconstructed pixels according to the component difference degree weight and the component position weight;
s324, determining a plurality of reference pixels of the current pixel according to the plurality of sub-weights;
s325, obtaining the current prediction search of the current pixel component according to the plurality of reference pixels
Prediction residuals within a window of search;
and S326, repeating the steps S321 to S325, and acquiring a plurality of prediction residuals of all the prediction search windows of the current pixel.
In one embodiment of the present invention, S33 includes:
and S331, comparing the plurality of prediction residuals, determining a minimum prediction residual according to a minimum value algorithm, taking the minimum prediction residual as a second prediction residual of the current pixel component, and taking a reference pixel corresponding to the minimum prediction residual as an optimal reference pixel of the current pixel.
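A small sketch of the minimum value algorithm of S331 (a hypothetical helper, assuming the per-window residuals have been collected in a list):

```python
def best_window(window_residuals):
    """Pick the prediction search window whose residual has the smallest
    magnitude; returns (window_index, residual). That residual becomes the
    second prediction residual of the current pixel component, and its
    reference pixel the optimal reference pixel."""
    idx = min(range(len(window_residuals)), key=lambda i: abs(window_residuals[i]))
    return idx, window_residuals[idx]
```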
In one embodiment of the present invention, S5 includes:
and S51, selecting the minimum value of the first residual absolute value sum and the second residual absolute value sum, and determining the final prediction method of the MB to be predicted according to the minimum value.
Compared with the prior art, the invention has the beneficial effects that:
an optimal prediction method can be selected through a prediction selection algorithm, a proper reference pixel can be found for a complex texture in an image, a minimum prediction residual error can be obtained, the theoretical limit entropy is further reduced, and the prediction effect is further optimized for the image with the complex texture.
Drawings
Fig. 1 is a schematic flowchart illustrating a prediction method in bandwidth compression according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating an algorithm principle of a prediction method in bandwidth compression according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a reference pixel position of a prediction method in bandwidth compression according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating gradient value calculation of a prediction method in bandwidth compression according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a reference value selection of a prediction method in bandwidth compression according to an embodiment of the present invention;
fig. 6(a) and fig. 6(b) are a schematic diagram of a pixel index and a schematic diagram of a reconstructed pixel search number of a horizontal stripe prediction search window according to an embodiment of the present invention;
fig. 7(a) and fig. 7(b) are a schematic diagram of a pixel index and a schematic diagram of a reconstructed pixel search number of a vertical stripe prediction search window according to an embodiment of the present invention;
fig. 8(a) and fig. 8(b) are a schematic diagram of pixel index and a schematic diagram of reconstructed pixel search number of a rectangular prediction search window according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
Referring to fig. 1 to fig. 8. Macroblock (MB) is a basic concept in video coding technology. Different compression strategies are implemented at different locations by dividing the picture into blocks of different sizes. A prediction method in bandwidth compression comprises the following steps:
step 1, selecting an MB to be predicted with the size of m × n, wherein m and n are natural numbers larger than zero.
Dividing the image into MBs with the same size, wherein the size of each MB is m × n, selecting one of the MBs as the MB to be predicted, and m and n are natural numbers larger than zero.
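The division into equal-size MBs can be sketched as follows (illustrative only; the clipping of edge blocks is an assumption, since the text does not specify edge handling):

```python
def split_into_mbs(width, height, m, n):
    """Yield (x, y, w, h) macroblock rectangles of nominal size m x n
    covering a width x height image; blocks at the right/bottom edge
    may be clipped."""
    for y in range(0, height, n):
        for x in range(0, width, m):
            yield (x, y, min(m, width - x), min(n, height - y))
```

A 32 × 16 image with 16 × 8 MBs yields four blocks, any one of which can then be selected as the MB to be predicted.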
And step 2, acquiring a first prediction residual of the MB to be predicted.
Preferably, the first prediction mode is an adaptive directional prediction mode of pixel-level multi-component reference.
Step 21, as shown in fig. 2, it is defined that each pixel of the MB to be predicted has K (K > 1) pixel components, namely pixel component 1, pixel component 2, …, pixel component K.
Step 22, for each pixel component of the current pixel, determining N texture direction gradient values G1-GN for each pixel component through the surrounding pixel components of the pixel component.
Preferably, the surrounding pixel components of the current pixel component may be adjacent to the current pixel component or not; as shown in fig. 3, CUR represents the current pixel component, and the surrounding pixel components may be G, H, I, K or A, B, C, D, E, F, J.
Preferably, if it is defined that the current pixel of MB to be predicted has three pixel components, i.e., K is 3, and the three pixel components are pixel component Y, pixel component U, and pixel component V, respectively, as shown in fig. 4, ABS (P-H) is 45-degree gradient value, ABS (P-G) is 90-degree gradient value, ABS (P-F) is 135-degree gradient value, and ABS (P-J) is 180-degree gradient value. Wherein ABS is an absolute value operation.
Step 23, weighting the N texture direction gradient values G1-GN of each pixel component to obtain a first weighted gradient value BG weighted by the N texture direction gradient values, wherein the weighting formula is as follows:
BGi = w1*G1 + w2*G2 + … + wN*GN (i = 1…K)
wherein w1 and w2 … wN are weighting coefficients, which may be the same or different; BG1 is the first weighted gradient value for pixel component 1, BG2 is the first weighted gradient value for pixel component 2, and so on, BGK is the first weighted gradient value for pixel component K.
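The weighting formula above can be sketched directly (the equal-weight default below is an assumption; the patent allows the coefficients to be the same or different):

```python
def first_weighted_gradient(gradients, weights=None):
    """BG = w1*G1 + w2*G2 + ... + wN*GN for one pixel component,
    given its N texture direction gradient values G1..GN."""
    if weights is None:
        weights = [1.0 / len(gradients)] * len(gradients)  # assumed equal weighting
    return sum(w * g for w, g in zip(weights, gradients))
```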
Preferably, the first weighted gradient value BG may be represented by an absolute value of a pixel value difference, but is not limited thereto.
Preferably, the minimum value is taken, and therefore, the optimum value BGbst of the first weighted gradient value of each pixel component can be obtained.
Preferably, the optimal value BGbst of the first weighted gradient values of the K pixel components is weighted, so as to obtain a second weighted gradient value BG ″ weighted by the optimal value of the first weighted gradient value, and the weighting formula is as follows:
BG"i = t1*BGbst1 + t2*BGbst2 + … + tK*BGbstK (i = 1…K)
wherein t1 and t2 … tK are weighting coefficients, which may be the same or different; BGbst1 is the optimal value of the first weighted gradient value of pixel component 1, BGbst2 is the optimal value of the first weighted gradient value of pixel component 2, and so on, BGbstK is the optimal value of the first weighted gradient value of pixel component K, BG "1 is the second weighted gradient value of pixel component 1, BG"2 is the second weighted gradient value of pixel component 2, and so on, BG "K is the second weighted gradient value of component K, and the optimal value BG" bst of the second weighted gradient value BG "is determined.
Preferably, taking the minimum value, the optimal value BG "bst of the second weighted gradient value of each pixel component can be obtained.
The direction of the optimal value BG "bst of the second weighted gradient value is the reference direction Dir of the current pixel component.
Preferably, all available pixel component pixel values in the reference direction of each pixel component are weighted to obtain a reference value Ref of each pixel component, and the weighting formula is as follows:
Refi = r1*cpt1 + r2*cpt2 + … + rN*cptN (i = 1…K)
wherein r1 and r2 … rN are weighting coefficients, which may be the same or different; cpt 1-cptN are the N available pixel component pixel values in the reference direction for each pixel component; ref1 is the reference value for pixel component 1, Ref2 is the reference value for pixel component 2, and so on, and RefK is the reference value for pixel component K.
Preferably, as shown in FIG. 5, if ABS (E-A) is the smallest, i.e., 135 degree texture, then the reference value is B; if ABS (E-B) is minimal, i.e., vertical texture, then the reference value is C; if ABS (E-C) is minimal, i.e., 45 degree texture, then the reference value is D; if ABS (C-B) is minimal, i.e., horizontal texture, then the reference value is E; and selecting the obtained reference value and the current pixel component, and performing difference calculation to obtain the prediction residual error of the mode. Wherein ABS is an absolute value operation.
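The Fig. 5 selection rule can be sketched as follows (a hypothetical helper; a to e denote the neighbouring component values A to E named in the text):

```python
def select_reference(a, b, c, d, e):
    """Return the reference value for the current component following the
    rule above: the texture direction with the smallest absolute gradient
    decides which neighbour serves as reference."""
    candidates = [
        (abs(e - a), b),  # 135-degree texture -> reference B
        (abs(e - b), c),  # vertical texture   -> reference C
        (abs(e - c), d),  # 45-degree texture  -> reference D
        (abs(c - b), e),  # horizontal texture -> reference E
    ]
    return min(candidates, key=lambda t: t[0])[1]
```

With a = 10, b = 12, c = 11, d = 13, e = 10, the gradient ABS(E-A) = 0 is smallest (135-degree texture), so B = 12 is the reference; a current component value of 14 then gives the residual RES = 14 - 12 = 2.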
Step 24, subtracting the reference value from the pixel value of the current pixel component to obtain a first prediction residual RES of the current pixel component; the formula is as follows:
RES=Curcpt-Ref
wherein RES is the first prediction residual, Curcpt is the pixel value of the current pixel component, and Ref is the reference value.
Preferably, the first prediction residual of the current pixel component 1 is:
RES1=Curcpt1-Ref1
wherein Curcpt1 is the pixel value of pixel component 1, and RES1 is the first prediction residual of pixel component 1.
Step 25, repeating steps 22 to 24 for the remaining pixel components of the current pixel to obtain the prediction residuals of all pixel components of the pixel; the formula is as follows:
RESi = Curcpti - Refi (i = 1…K)
wherein Curcpt1 is the pixel value of pixel component 1, Curcpt2 is the pixel value of pixel component 2, and so on, and CurcptK is the pixel value of pixel component K; RES1 is the first prediction residual of pixel component 1, RES2 is the first prediction residual of pixel component 2, and so on, and RESK is the first prediction residual of pixel component K.
Preferably, the multiple components can be processed in parallel or in series, as required by the specific application scenario.
And step 3, acquiring a second prediction residual of the MB to be predicted.
Preferably, the second prediction mode is a complex texture adaptive prediction method in bandwidth compression.
Step 31, determining a plurality of prediction search windows in the MB to be predicted; wherein the prediction search window includes a current pixel and a plurality of encoded reconstructed pixels.
Preferably, in the video image pixel area, Cij denotes the current pixel and Pij denotes an encoded reconstructed pixel, where ij is the position index of the current pixel or of the reconstructed pixel. A plurality of sliding windows are set as prediction search windows; the shape of a prediction search window can be a horizontal bar, a vertical bar, an L shape, a cross shape, a T shape, a rectangle, and so on. The size of the prediction search window is determined by the texture characteristics of the video image and the required prediction precision: a smaller prediction search window can be set for a video image with finer texture or a lower precision requirement, and a larger prediction search window for a video image with coarser texture or a higher precision requirement.
Preferably, as shown in fig. 6 to 8, fig. 6 to 8 are schematic diagrams of pixel indexes and numbers of reconstructed pixels of three prediction search windows provided by the embodiment of the present invention. In the embodiment of the present invention, a plurality of prediction search windows with the same size and different shapes are set, for example, a first prediction search window, a second prediction search window, and a third prediction search window, respectively. The first prediction search window is a horizontal bar prediction search window, the window is in the shape of a horizontal bar, the second prediction search window is a vertical bar prediction search window, the window is in the shape of a vertical bar, the third prediction search window is a rectangular prediction search window, and the window is in the shape of a rectangle. The three prediction search windows are the same in size and each contain K pixels.
Preferably, the plurality of prediction search windows each contain 8 pixels. For example, within the first prediction search window, i.e. the horizontal bar prediction search window, the current pixel Cij is at the rightmost position, and the other positions are the encoded K-1 reconstructed pixels P(i-1,j), P(i-2,j), P(i-3,j), P(i-4,j), P(i-5,j), P(i-6,j), P(i-7,j); within the second prediction search window, i.e. the vertical bar prediction search window, the current pixel Cij is at the lowest position, and the other positions are the encoded K-1 reconstructed pixels P(i,j-1), P(i,j-2), P(i,j-3), P(i,j-4), P(i,j-5), P(i,j-6), P(i,j-7); within the third, rectangular prediction search window, the current pixel Cij is at the lower right corner, and the other positions are the encoded K-1 reconstructed pixels P(i-1,j), P(i-2,j), P(i-3,j), P(i,j-1), P(i-1,j-1), P(i-2,j-1), P(i-3,j-1). When encoding the current pixel Cij, the reconstruction values (New Data (P)) of the K-1 reconstructed pixels in the first, second, and third prediction search windows are used together with the original value of the current pixel Cij to predict, respectively, the first window prediction residual, the second window prediction residual, and the third window prediction residual of Cij.
Preferably, within each prediction search window, when predicting the residual of the current pixel Cij from the reconstruction values of the K-1 reconstructed pixels, the K-1 reconstructed pixels in the prediction search window are sequentially numbered 0, 1, 2, …, K-2 as P0, P1, P2, …, PK-2, and a sequential search is performed. For example, the first prediction search window of the embodiment of the present invention contains 7 reconstructed pixels arranged along the horizontal direction; the 7 reconstructed pixels are numbered from left to right, from 0 to 6, as P0, P1, P2, P3, P4, P5, P6. The search starts from the reconstructed pixel P0 numbered 0 and proceeds until the reconstructed pixel P6 numbered 6 has been searched, finding the reference for the current pixel Cij and calculating the first window prediction residual. The second prediction search window contains 7 reconstructed pixels arranged along the vertical direction, numbered from top to bottom from 0 to 6 as P0, P1, P2, P3, P4, P5, P6; the search likewise runs from P0 to P6, and the second window prediction residual is calculated. The third prediction search window contains 7 reconstructed pixels arranged (together with the current pixel) in a 4 × 2 matrix, numbered from 0 to 6 as P0, P1, P2, P3, P4, P5, P6; the search runs from P0 to P6, and the third window prediction residual is calculated. The method of calculating the plurality of prediction residuals of the current pixel Cij in the plurality of prediction search windows is described as follows.
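The three 8-pixel windows can be described by (dx, dy) offsets from the current pixel Cij (a sketch; the coordinate convention, x growing rightwards and y downwards, is an assumption):

```python
# Offsets of the 7 encoded reconstructed pixels relative to C(i, j).
HORIZONTAL_BAR = [(-k, 0) for k in range(1, 8)]           # P(i-1,j) ... P(i-7,j)
VERTICAL_BAR   = [(0, -k) for k in range(1, 8)]           # P(i,j-1) ... P(i,j-7)
RECTANGULAR    = [(-1, 0), (-2, 0), (-3, 0),              # row of C(i, j)
                  (0, -1), (-1, -1), (-2, -1), (-3, -1)]  # row above

def window_pixels(i, j, offsets):
    """Absolute coordinates of the reconstructed pixels searched for C(i, j)."""
    return [(i + dx, j + dy) for dx, dy in offsets]
```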
Step 32, calculating a plurality of weights Wij of the current pixel Cij within the plurality of prediction search windows, determining reference pixels of the current pixel Cij according to the plurality of weights Wij, and calculating a plurality of prediction residuals.
Preferably, it is set that the current pixel Cij comprises N pixel components, i.e.
Cij = {cpt1, cpt2, …, cptN}
wherein N is a natural number greater than 1 and cpti represents the i-th pixel component of the current pixel Cij. For example, the pixel Cij may comprise 3 pixel components RGB, or 4 pixel components RGBW, or 3 pixel components Lab, or 3 pixel components YUV, or 4 pixel components CMYK.
Preferably, the plurality of weights includes a first weight, a second weight, and a third weight: the weight Wij of the current pixel Cij calculated within the first prediction search window (the horizontal bar prediction search window) is the first weight, the weight Wij calculated within the second prediction search window (the vertical bar prediction search window) is the second weight, and the weight Wij calculated within the third prediction search window (the rectangular prediction search window) is the third weight. In particular, the method of calculating the weight Wij of the current pixel Cij within each prediction window is as follows:
Within the prediction search window, the weight Wij corresponding to the K-1 encoded reconstructed pixels P0, P1, P2, …, PK-2 comprises K-1 sub-weights, i.e.
Wij = {Wij,0, Wij,1, Wij,2, …, Wij,K-2}
wherein Wij,k is the sub-weight of the current pixel Cij corresponding to the encoded reconstructed pixel Pk. The sub-weight Wij,k is the weighted sum of the N component weights wij,k,1, …, wij,k,N of the N pixel components of the current pixel Cij relative to the N pixel components of the reconstructed pixel Pk:
Wij,k = a1*wij,k,1 + a2*wij,k,2 + … + aN*wij,k,N
wherein wij,k,l is the component weight of the l-th pixel component of the current pixel Cij relative to the l-th pixel component of the reconstructed pixel Pk, and a1, a2, …, aN are component weighting values satisfying
a1 + a2 + … + aN = 1
In one embodiment of the present invention, each al is taken as 1/N; in another embodiment of the invention, al is determined according to the distance between the corresponding pixel components, the closer the distance, the larger the corresponding al; in yet another embodiment of the invention, the value of al is determined empirically.
Preferably, the weight Wij of the current pixel Cij is determined from the difference degree weight DIFij of the current pixel Cij. Corresponding to the K-1 encoded reconstructed pixels P0, P1, P2, …, PK-2, the difference degree weight DIFij has K-1 difference degree sub-weights DIFij,k, i.e.
DIFij = {DIFij,0, DIFij,1, DIFij,2, …, DIFij,K-2}
Preferably, the component difference-degree weight of each pixel component of the current pixel C_ij with respect to the corresponding pixel components of the reconstructed pixels is calculated. The component difference-degree weight DIF^l_ij of each pixel component cur^l_ij has K-1 component difference-degree sub-weights DIF^l_{ij,k}, i.e.

DIF^l_ij = {DIF^l_{ij,0}, DIF^l_{ij,1}, ..., DIF^l_{ij,k}, ..., DIF^l_{ij,K-2}}

where the component difference-degree sub-weight DIF^l_{ij,k} is determined from the pixel component cur^l_ij of the current pixel C_ij and the pixel component rec^l_k of the reconstructed pixel P_k.
Preferably, in the embodiment of the present invention, the component difference-degree sub-weight DIF^l_{ij,k} is taken as the absolute value of the difference between the original value cur^l_ij of the pixel component and the reconstructed value rec^l_k of the corresponding component of the reconstructed pixel, i.e.

DIF^l_{ij,k} = | cur^l_ij − rec^l_k |
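The absolute-difference rule above can be sketched in a line of Python (an illustrative sketch only; the function and argument names are mine, not from the patent):

```python
def component_diff_subweight(orig_component, rec_component):
    """Component difference-degree sub-weight DIF^l_{ij,k}: the absolute
    difference between the original value of the current pixel's l-th
    component and the reconstructed value of the same component of P_k."""
    return abs(orig_component - rec_component)
```

A small difference marks the reconstructed pixel's component as similar to the current one, which later makes that pixel a better reference candidate.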
Preferably, the sub-weight W_{ij,k} of the current pixel C_ij with respect to each reconstructed pixel P_k is calculated. The sub-weight W_{ij,k} of the current pixel C_ij relative to the reconstructed pixel P_k is the weighted sum of the N component difference-degree sub-weights DIF^l_{ij,k} of the N pixel components of the current pixel C_ij relative to the N pixel components of the reconstructed pixel P_k, i.e.

W_{ij,k} = ω_1·DIF^1_{ij,k} + ω_2·DIF^2_{ij,k} + ... + ω_N·DIF^N_{ij,k}

where DIF^l_{ij,k} is the component difference-degree sub-weight of the l-th pixel component of the current pixel C_ij relative to the l-th pixel component of the reconstructed pixel P_k, and ω_1, ω_2, ..., ω_N are component weighting values satisfying ω_1 + ω_2 + ... + ω_N = 1. In one embodiment of the present invention, each ω_l is taken as 1/N. In another embodiment of the invention, the value of ω_l is determined according to the distance between the pixel component and each of the N pixel components; the closer the distance, the larger the corresponding ω_l. In yet another embodiment of the invention, the values of ω_1, ω_2, ..., ω_N are determined empirically.
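The weighted summation of the N component difference-degree sub-weights described above can be sketched in Python. This is an illustrative sketch only: the function and argument names are mine, and the uniform default ω_l = 1/N is just one of the embodiments mentioned in the text.

```python
def subweight(orig_components, rec_components, component_weights=None):
    """Sub-weight W_{ij,k}: weighted sum of the N component
    difference-degree sub-weights |cur^l - rec^l|.

    component_weights must sum to 1; when omitted, the uniform
    choice 1/N from one embodiment of the text is used."""
    n = len(orig_components)
    if component_weights is None:
        component_weights = [1.0 / n] * n
    assert abs(sum(component_weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * abs(o - r)
               for w, o, r in zip(component_weights, orig_components, rec_components))
```

For example, a YUV pixel (100, 60, 40) compared against a reconstructed pixel (90, 60, 44) with uniform weights yields (10 + 0 + 4) / 3.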
Preferably, the weight W_ij of the current pixel C_ij is then obtained by collecting the K-1 sub-weights:

W_ij = {W_{ij,0}, W_{ij,1}, ..., W_{ij,k}, ..., W_{ij,K-2}}
Preferably, the plurality of reference pixels includes, for example, a first reference pixel, a second reference pixel, and a third reference pixel; the plurality of prediction residuals includes, for example, a first window prediction residual, a second window prediction residual, and a third window prediction residual. Specifically, the first reference pixel of the current pixel C_ij is determined according to the first weight, and the first window prediction residual is calculated; the second reference pixel of the current pixel C_ij is determined according to the second weight, and the second window prediction residual is calculated; the third reference pixel of the current pixel C_ij is determined according to the third weight, and the third window prediction residual is calculated. Specifically, each prediction residual is calculated as follows:
Preferably, the reference pixel P_s of the current pixel C_ij is determined according to the weight W_ij. Specifically, an optimal value is selected from the K-1 sub-weights W_{ij,k} of the weight W_ij according to an optimal-value decision algorithm, and the reconstructed pixel P_s corresponding to the optimal value is taken as the reference pixel of the current pixel C_ij. The optimal-value decision algorithm is, for example, a minimum-weight decision algorithm: the minimum sub-weight, say W_{ij,s}, is selected from the K-1 sub-weights of W_ij = {W_{ij,0}, W_{ij,1}, ..., W_{ij,k}, ..., W_{ij,K-2}}, and the reconstructed pixel P_s corresponding to it is taken as the reference pixel of the current pixel C_ij.
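The minimum-weight decision algorithm described above amounts to an argmin over the K-1 sub-weights. A minimal sketch (the function name is mine; ties resolve to the earliest pixel, which the text does not specify):

```python
def select_reference(subweights):
    """Return the index s of the reconstructed pixel P_s whose
    sub-weight W_{ij,s} is minimal among the K-1 sub-weights."""
    return min(range(len(subweights)), key=lambda k: subweights[k])
```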
Preferably, the prediction residual RES_ij of the current pixel C_ij is calculated. Specifically, the prediction residual RES_ij of the current pixel C_ij relative to the reference pixel P_s is calculated from the reconstructed value of the reference pixel P_s and the original value OldData_ij of the current pixel C_ij:

RES^l_ij = cur^l_ij − rec^l_s, l = 1, 2, ..., N

where RES^l_ij is the prediction residual of the l-th pixel component cur^l_ij of the current pixel C_ij relative to the l-th pixel component rec^l_s of the reference pixel P_s.
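The componentwise residual, and the decoder-side reconstruction that adds the residual back to the reference value (as noted later in the text), can be sketched as follows. Function and argument names are illustrative, not from the patent:

```python
def prediction_residual(orig_components, ref_rec_components):
    """Componentwise residual: original value of the current pixel
    minus the reconstructed value of the reference pixel."""
    return [o - r for o, r in zip(orig_components, ref_rec_components)]

def reconstruct(ref_rec_components, residual):
    """Decoder side: adding the residual back to the reference
    value recovers the reconstructed pixel components."""
    return [r + e for r, e in zip(ref_rec_components, residual)]
```

Reconstruction is the exact inverse of residual formation, so a round trip returns the original components.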
Through the above steps, the reference pixel of the current pixel C_ij is found in each of the plurality of prediction search windows, and the plurality of prediction residuals are calculated. For example, the first reference pixel P_s1 of the current pixel C_ij is found within the first prediction search window, and the first window prediction residual RES_ij1 is calculated; the second reference pixel P_s2 is found within the second prediction search window, and the second window prediction residual RES_ij2 is calculated; the third reference pixel P_s3 is found within the third prediction search window, and the third window prediction residual RES_ij3 is calculated.
Step 33: comparing the plurality of prediction residuals, and determining the second prediction residual RES_2 and the corresponding optimal reference pixel P_s_Perf.

Preferably, the minimum prediction residual is determined from the plurality of prediction residuals, such as the first window prediction residual RES_ij1, the second window prediction residual RES_ij2, and the third window prediction residual RES_ij3, according to a minimum-value algorithm. The minimum prediction residual is taken as the second prediction residual of the current pixel C_ij, and the reference pixel corresponding to the minimum prediction residual is taken as the optimal reference pixel P_s_Perf of the current pixel C_ij.
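The selection of the minimum prediction residual among the windows can be illustrated as follows. The comparison measure used here, the sum of absolute component values of each window's residual, is an assumption on my part; the text only specifies that a minimum-value algorithm is used.

```python
def best_window(window_residuals):
    """Given one componentwise residual per prediction search window,
    return the index of the window whose residual is smallest under
    the sum-of-absolute-values measure (an assumed comparison rule)."""
    cost = [sum(abs(c) for c in res) for res in window_residuals]
    return min(range(len(window_residuals)), key=lambda w: cost[w])
```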
The reconstructed pixel component refers to a pixel component obtained by decompressing and reconstructing a compressed image, and a pixel value of the reconstructed pixel component is generally referred to as a reconstruction value. Further, the reconstructed pixel component may be obtained according to the prediction residual, that is, the reference value may be added to the prediction residual to obtain the reconstructed pixel component.
Step 4: acquiring a first residual absolute value sum according to the first prediction residual, and acquiring a second residual absolute value sum according to the second prediction residual.
Preferably, the residual absolute value sum of the first prediction mode is calculated from the first prediction residual as follows:

SAD_1 = Σ ABS(RES_1)

where RES_1 is the first prediction residual, ABS denotes the absolute value, SAD_1 is the first residual absolute value sum, and the sum is taken over the MB to be predicted.
Preferably, the residual absolute value sum of the second prediction mode is calculated from the second prediction residual as follows:

SAD_2 = Σ ABS(RES_2)

where RES_2 is the second prediction residual, ABS denotes the absolute value, SAD_2 is the second residual absolute value sum, and the sum is taken over the MB to be predicted.
Step 5: comparing the first residual absolute value sum with the second residual absolute value sum to select the final prediction mode of the MB to be predicted.
Preferably, the first residual absolute value sum SAD_1 is compared with the second residual absolute value sum SAD_2:

if SAD_1 is smaller, the first prediction mode is selected as the final prediction mode of the MB to be predicted;

if SAD_2 is smaller, the second prediction mode is selected as the final prediction mode of the MB to be predicted;

if SAD_1 and SAD_2 are equal, the first prediction mode is selected as the final prediction mode of the MB to be predicted.
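Steps 4 and 5 together can be sketched as follows. The names are illustrative; the tie-breaking toward the first prediction mode follows the text.

```python
def sad(residuals):
    """Sum of absolute residual values over the MB to be predicted."""
    return sum(abs(r) for r in residuals)

def select_mode(res1, res2):
    """Return 1 or 2: the prediction mode with the smaller residual
    absolute value sum. On a tie the first mode is chosen."""
    return 1 if sad(res1) <= sad(res2) else 2
```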
Through this prediction selection algorithm, the better of the first prediction mode and the second prediction mode can be selected, a suitable reference pixel can be found for complex textures in the image, and the minimum prediction residual can be obtained, which further reduces the theoretical limit entropy and optimizes the prediction effect for images with complex textures.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A prediction method in bandwidth compression, comprising:
s1, selecting an MB to be predicted with the size of m × n, wherein m and n are natural numbers larger than zero;
s2, acquiring a first prediction residual of the current pixel component of the MB to be predicted according to a first prediction mode;
s3, obtaining a plurality of prediction residuals of a plurality of prediction search windows of the MB to be predicted according to a second prediction mode, and obtaining a second prediction residual according to the plurality of prediction residuals of the plurality of prediction search windows;
s4, acquiring a first residual absolute value sum according to the first prediction residual, and acquiring a second residual absolute value sum according to the second prediction residual;
s5, comparing the first and second residual absolute value sums to select a final prediction mode of the MB to be predicted.
2. The prediction method according to claim 1, wherein S2 includes:
s21, determining a plurality of pixel components of the current pixel of the MB to be predicted;
s22, obtaining the gradient value of the texture direction of the current pixel component;
s23, determining a reference value of the current pixel component according to the texture direction gradient value;
and S24, determining a first prediction residual of the current pixel component through the reference value.
3. The prediction method according to claim 2, wherein S22 includes:
s221, determining N texture direction gradient values of the current pixel component through the surrounding components of the current pixel component.
4. The prediction method according to claim 2, wherein S23 includes:
s231, obtaining a first weighted gradient value through the texture direction gradient value;
s232, acquiring a second weighted gradient value through the first weighted gradient value;
s233, obtaining the reference direction of the current pixel component through the second weighted gradient value;
and S234, acquiring a reference value of the current pixel component according to the reference direction of the current pixel component.
5. The prediction method according to claim 2, wherein the reference value and the first prediction residual in S24 satisfy the following formula:

RES = Curcpt - Ref

wherein RES is the first prediction residual, Curcpt is the pixel value of the current pixel component, and Ref is the reference value.
6. The prediction method according to claim 1, wherein S3 includes:
s31, determining the plurality of prediction search windows in the MB to be predicted; wherein the prediction search window comprises a current pixel and a plurality of encoded reconstructed pixels;
s32, calculating a plurality of prediction residuals of the current pixel component within a plurality of the prediction search windows;
and S33, determining the second prediction residual according to the plurality of prediction residuals.
7. The prediction method of claim 6, wherein the plurality of prediction search windows in S31 comprises: a first predictive search window, a second predictive search window, and a third predictive search window; wherein the first predictive search window, the second predictive search window, and the third predictive search window are respectively any one of a horizontal bar-shaped predictive search window, a vertical bar-shaped predictive search window, or a rectangular predictive search window.
8. The prediction method according to claim 6, wherein S32 includes:
s321, calculating a component difference degree weight of each pixel component of the current pixel relative to a pixel component of the reconstructed pixel in a current prediction search window;
s322, calculating the component position weight of each pixel component of the current pixel relative to the pixel component of the reconstructed pixel in the current prediction search window;
s323, calculating the sub-weights of the plurality of reconstructed pixels according to the component difference degree weight and the component position weight;
s324, determining a plurality of reference pixels of the current pixel according to the plurality of sub-weights;
s325, obtaining the prediction residual error of the current pixel component in the current prediction search window according to the plurality of reference pixels;
and S326, repeating the steps S321 to S325, and acquiring a plurality of prediction residuals of all the prediction search windows of the current pixel.
9. The prediction method according to claim 6, wherein S33 includes:
and S331, comparing the plurality of prediction residuals, determining a minimum prediction residual according to a minimum value algorithm, taking the minimum prediction residual as a second prediction residual of the current pixel component, and taking a reference pixel corresponding to the minimum prediction residual as an optimal reference pixel of the current pixel.
10. The method of claim 1, wherein S5 includes:
and S51, selecting the minimum value of the first residual absolute value sum and the second residual absolute value sum, and determining the final prediction method of the MB to be predicted according to the minimum value.
CN201811260526.8A 2018-10-26 2018-10-26 Prediction method in bandwidth compression Withdrawn CN111107346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811260526.8A CN111107346A (en) 2018-10-26 2018-10-26 Prediction method in bandwidth compression


Publications (1)

Publication Number Publication Date
CN111107346A true CN111107346A (en) 2020-05-05



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200505