CN1067832C - Method for improving the realization of video-frequency coding device - Google Patents


Info

Publication number
CN1067832C
CN1067832C (application CN97104376A)
Authority
CN
China
Prior art keywords
block
search
current
macroblock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN97104376A
Other languages
Chinese (zh)
Other versions
CN1200629A (en)
Inventor
朱雪龙
谢波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN97104376A priority Critical patent/CN1067832C/en
Publication of CN1200629A publication Critical patent/CN1200629A/en
Application granted granted Critical
Publication of CN1067832C publication Critical patent/CN1067832C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention belongs to the technical field of moving-image coding. The invention comprises four parts: motion estimation; frame prediction and prediction-error generation; transform and quantization; and image reconstruction and entropy coding. It is characterized in that, in the motion estimation part, the search result of each level of the hierarchical search enters a decision device G, and in the coding part the prediction error PE enters a decision device L before the DCT is performed, which saves a large number of transform and search operations and thus greatly increases the encoding speed.

Description

Improved method for video encoder implementation
The present invention belongs to the field of motion image coding technology.
In the current information age, the storage and transmission of images are becoming more and more important. Since the amount of information in raw image data is enormous, compressing the image data, that is, encoding the moving images, is essential in order to store images on a storage medium of limited capacity and to transmit them over a channel of limited capacity. Moving-image coding is achieved by comprehensively exploiting the redundancy of the image signal in three respects (time, space and statistics), together with knowledge of the scene and the characteristics of human vision. The mature coding method at present is a hybrid method that integrates predictive coding, transform coding and entropy coding with motion-compensation technology.
One of the encoding implementation methods is shown in fig. 1, and includes the following steps:
(1) performing motion estimation ME on the input current image and the previous reconstructed image, and performing motion estimation to obtain a motion vector MV;
(2) predicting P for the last reconstructed frame based on the motion vector to obtain the predicted image of the current image;
(3) subtracting the predicted image of the current image from the current image to obtain a prediction error PE;
(4) performing Discrete Cosine Transform (DCT) and quantization Q on the prediction error;
(5) carrying out Variable Length Coding (VLC) on the result of the step (4) to obtain a current coded image; and
(6) performing inverse quantization IQ and the inverse discrete cosine transform IDCT on the result of step (4) to obtain a reconstructed prediction error, adding the reconstructed prediction error to the current predicted image to obtain the current reconstructed image, and converting the current reconstructed image into the previous reconstructed image through a frame memory FM.
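As a rough illustration of steps (3), (4) and (6), the DCT, quantization and reconstruction round trip for a single 8 × 8 block can be sketched as follows. This is a minimal sketch only: the orthonormal DCT matrix is standard, but the uniform quantizer with step 2·QP and the helper names (`dct2`, `idct2`, `encode_block`) are simplifying assumptions, not the exact quantizer of any particular standard.

```python
import math

# Orthonormal 8x8 DCT-II basis: C[u][x] = c(u) * cos((2x+1)*u*pi/16)
C = [[math.sqrt((1 if u == 0 else 2) / 8.0) *
      math.cos((2 * x + 1) * u * math.pi / 16)
      for x in range(8)] for u in range(8)]

def _mat(a, b):  # 8x8 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(8)) for j in range(8)]
            for i in range(8)]

def _t(a):       # transpose
    return [list(r) for r in zip(*a)]

def dct2(block):   # 2-D DCT of an 8x8 block: C * B * C^T
    return _mat(_mat(C, block), _t(C))

def idct2(coef):   # inverse 2-D DCT: C^T * F * C
    return _mat(_mat(_t(C), coef), C)

def encode_block(cur, pred, qp):
    """Steps (3), (4) and (6) for one 8x8 block: prediction error -> DCT -> Q,
    then IQ -> IDCT -> reconstruction (simplified uniform quantizer 2*QP)."""
    pe = [[cur[i][j] - pred[i][j] for j in range(8)] for i in range(8)]   # (3)
    q = [[round(c / (2 * qp)) for c in row] for row in dct2(pe)]          # (4)
    rec_pe = idct2([[c * 2 * qp for c in row] for row in q])              # IQ, IDCT
    recon = [[pred[i][j] + rec_pe[i][j] for j in range(8)] for i in range(8)]
    return q, recon
```

When the prediction is perfect, the quantized block is the all-zero 0 block discussed below, and the reconstruction equals the prediction.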
The functions of the steps in fig. 1 are as follows:
Motion estimation, prediction and the calculation of the prediction error (the subtracter) constitute predictive coding, whose purpose is to eliminate the temporal correlation of the image signal. Temporal correlation means that part of the current frame image can be produced from part of the previous frame image undergoing motion. The motion is described by a motion vector; motion estimation obtains this motion vector, and prediction uses it to compensate for and cancel the signal change between the current frame and the previous frame caused by the motion.
The discrete cosine transform DCT constitutes transform coding, whose purpose is to remove the spatial correlation of the image signal. Quantization Q is both a prerequisite for the subsequent entropy coding and a way to exploit human visual characteristics to improve coding quality.
Variable length coding VLC constitutes entropy coding, further eliminating statistical correlation of the image signal.
Inverse quantization IQ, the inverse discrete cosine transform IDCT and the adder realize image reconstruction, providing the reference for prediction.
An encoder implementing the hybrid encoding method described above is shown in fig. 2. Block DCT, block Q, block IQ, block IDCT and block VLC in the figure mean that the discrete cosine transform DCT, quantization Q, inverse quantization IQ, inverse discrete cosine transform IDCT and variable length coding VLC are performed on a block (8 × 8 pixels according to the international standard) within one frame (image). Macroblock P means that prediction P is performed in units of one macroblock (6 blocks according to the international standard) within one frame. Macroblock MV is the motion vector of the macroblock. A 0 block is a block whose 8 × 8 elements are all 0.
The working process of the encoder is as follows. First, a frame (image) is encoded in units of one block (8 × 8 pixels) or one macroblock (6 blocks). Second, the whole encoding process is divided into two parts, motion estimation and the coding core, shown by the dashed box in fig. 2. The current frame macroblock first enters the motion estimation part; the motion vector MV obtained by motion estimation is then input to the coding core for coding the current frame macroblock. Specifically, the method comprises the following steps:
First, motion estimation is performed on the current frame macroblock and the previous reconstructed frame macroblock. Motion estimation is divided into two steps, integer-pixel search and half-pixel search. The integer-pixel search adopts a hierarchical motion search method, i.e., the search domain is graded: a stationary point, a small search domain and a large search domain. Three levels are typical, but the implementation may vary, e.g., 2 levels (a stationary point and a single search domain, merging the small and large search domains) or 4 levels (a stationary point, a small search domain, a large search domain and a larger search domain, i.e., the large search domain subdivided into two levels); at least two levels are required. After the search in each of the first two levels of search domains finishes, a decision device (A or B) determines whether its criterion is satisfied, so that the integer-pixel search can stop and the half-pixel search and the subsequent coding core can be entered; the specific decision criteria may vary.
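The hierarchical integer-pixel search just described might be sketched as follows. The SAD (sum of absolute differences) matching cost, the search-domain radii and the single early-exit threshold standing in for deciders A and B are all assumptions of this sketch, since the text deliberately leaves the concrete criteria open.

```python
def sad(cur, ref, x0, y0, dx, dy, n=16):
    """Sum of absolute differences between the current n x n macroblock
    (top-left corner at (x0, y0) in the reference frame) and the
    reference frame displaced by (dx, dy)."""
    return sum(abs(cur[i][j] - ref[y0 + i + dy][x0 + j + dx])
               for i in range(n) for j in range(n))

def hierarchical_search(cur, ref, x0, y0, small_r=2, large_r=7, threshold=512):
    """Three-level integer-pixel search: stationary point, small search
    domain, large search domain, with an early exit before each further
    level (standing in for deciders A and B)."""
    best_mv, best_cost = (0, 0), sad(cur, ref, x0, y0, 0, 0)   # level 1: (0, 0)
    for radius in (small_r, large_r):                          # levels 2 and 3
        if best_cost < threshold:       # decider A / B: good enough, stop early
            return best_mv
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cost = sad(cur, ref, x0, y0, dx, dy)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv
```

With `threshold=0` the early exit never fires and all levels are searched; a higher threshold trades accuracy for speed, which is the point of the grading.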
Secondly, after the coding core is entered, the current predicted frame macroblock is obtained by predicting from the previous reconstructed frame macroblock according to the macroblock motion vector obtained by motion estimation; the subtracter then subtracts it from the current frame macroblock to calculate the prediction error, on which DCT and Q are performed.
Finally, after DCT and Q, a decision device checks whether the current data block is a 0 block. Since a 0 block contributes nothing to the subsequent entropy coding or image reconstruction, if it is, the coding of the current block ends and control returns to the predictor and subtracter to process the next block; if the block is not 0, entropy coding and image reconstruction are performed.
The disadvantage of this encoder implementation is that the encoding speed is not high enough. Even for a simple QCIF-format motion image sequence (e.g., the Claire sequence), real-time software encoding (25 frames/second) is still not possible on a Pentium-133 PC.
The invention aims to overcome the defects of the prior art by adding, on top of the original coding method, decisions about the large amount of zero data in an image, so as to increase the speed of the encoder without reducing, or only slightly reducing, other aspects of performance.
The invention provides an improved method for implementing a video encoder, characterized by comprising the following steps:
(1) performing motion estimation on the current frame macroblock and the previous reconstructed frame macroblock, wherein motion estimation comprises integer-pixel search and half-pixel search; the integer-pixel search adopts a hierarchical motion search method, grading the search domain into a stationary point, a small search domain and a large search domain (3 levels is typical; as before, the implementation may vary); after the search at each level finishes, control enters the decision device G, which computes the prediction error for the motion vector obtained at the current level and judges whether the current prediction-error macroblock would become a 0 macroblock (all 6 blocks in the macroblock are 0 blocks) after DCT and Q; if so, the coding of the current macroblock ends and the next macroblock is processed; otherwise the next level of search continues; after the integer-pixel search finishes, the half-pixel search is performed, yielding the motion vector, and the coding core is entered;
(2) after the current frame macroblock enters the coding core, predicting from the previous reconstructed frame macroblock according to the macroblock motion vector MV obtained by motion estimation to obtain the current predicted frame macroblock, and then subtracting it from the current frame macroblock to calculate the prediction error;
(3) before DCT and Q are performed, passing the prediction error to the decision device L, which judges whether the current error block would become a 0 block after DCT and Q; if so, the coding of the current block ends and the next block of the current macroblock is processed; otherwise DCT and Q are performed;
(4) since the decision device L cannot guarantee that every prediction-error block that DCT and Q would turn into a 0 block is detected in advance, a decision device is still retained after DCT and Q to check whether the data block is 0; if so, subsequent processing is skipped and the next block is processed; otherwise the non-0 block undergoes entropy coding and image reconstruction.
Compared with the prior art, the invention has the following characteristics:
First, in the coding core, a pre-transform decision device L is added before DCT and Q. The decision device L can identify in advance, without performing DCT and Q, most of the prediction-error blocks that DCT and Q would turn into 0 blocks, saving a large number of DCT and Q operations.
Second, a global decision device G is placed after each level of the search in motion estimation. Once the criterion in G is satisfied, the coding of the whole macroblock ends: not only does the integer-pixel search stop, but the half-pixel search and the entire coding core are skipped as well, which greatly increases the coding speed.
Brief description of the accompanying drawings:
fig. 1 is a block diagram of a hybrid encoding method.
Fig. 2 is a block diagram of a conventional video encoder.
Fig. 3 is a block diagram of a video encoder according to the present invention.
Fig. 4 is a flowchart of an implementation of the determiner L in this embodiment.
Fig. 5 is a flowchart of an implementation of the decision device G in this embodiment.
An embodiment of a video encoder implementing the encoding method of the present invention is shown in figs. 3-5 and is described in detail below in connection with the figures.
The block diagram of the implementation of the new video encoder of the present invention is shown in fig. 3; its operation comprises the following steps:
1. Perform motion estimation, i.e., a motion-vector search, on the current frame macroblock and the previous reconstructed frame macroblock. The search comprises an integer-pixel search followed by a half-pixel search. The integer-pixel search adopts the hierarchical motion search method, as follows:
a. search the stationary point: judge whether the current prediction-error macroblock satisfies the criterion in decision device G, i.e., whether it would become a 0 macroblock (all 6 blocks in the macroblock are 0 blocks) after the DCT transform and quantization Q; if so, go to step 2, otherwise continue;
b. search the small search domain and judge whether the current prediction-error macroblock satisfies the criterion in decision device G; if so, go to step 2, otherwise continue;
c. search the large search domain and judge whether the current prediction-error macroblock satisfies the criterion in decision device G; if so, go to step 2, otherwise go to step 3;
2. Obtain the current predicted macroblock from the currently found motion vector and the previous reconstructed frame; the current predicted macroblock is then the current reconstructed macroblock; go to step 6;
3. Perform the half-pixel search to obtain the motion vector, obtain the current predicted macroblock from the motion vector and the previous reconstructed frame, and subtract it from the current frame macroblock to calculate the prediction error; judge whether the prediction error satisfies the criterion of decision device L; if so, the current predicted macroblock is the current reconstructed macroblock, go to step 6, otherwise continue;
4. Perform the DCT transform and quantization Q, then judge whether the current data block is a 0 block after DCT and Q; if so, the current predicted macroblock is the current reconstructed macroblock, go to step 6, otherwise continue;
5. Perform inverse quantization IQ and the inverse DCT transform, and add the result to the current predicted macroblock to obtain the current reconstructed macroblock;
6. Perform entropy coding; the coding of the current macroblock is finished, and coding of the next macroblock begins.
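The steps above can be summarized as a control-flow sketch. All the stage functions here are hypothetical callbacks standing in for the boxes of fig. 3; the sketch only shows where the early exits of deciders G and L cut the flow short (entropy coding of macroblock headers and the exact reconstruction path are simplified away).

```python
def encode_macroblock(levels, decider_g, half_pel, predict, blocks_of,
                      decider_l, dct_q, is_zero, entropy_code):
    """Control flow of steps 1-6 for one macroblock. Every argument is a
    hypothetical callback; returns a log of the stages actually executed,
    to make the early exits visible."""
    log = []
    # Step 1: hierarchical integer-pixel search, decider G after each level
    for level in levels:
        log.append(("search", level))
        if decider_g(level):           # PE macroblock would quantize to 0
            log.append(("skip", level))
            return log                 # step 2: predicted MB is the recon MB
    mv = half_pel()                    # step 3: half-pixel refinement
    log.append(("half_pel", mv))
    pe_blocks = blocks_of(predict(mv))
    for b in pe_blocks:
        if decider_l(b):               # step 3: pre-transform decision L
            log.append(("L_skip", b))
            continue
        coeffs = dct_q(b)              # step 4: DCT and quantization
        if is_zero(coeffs):            # step 4: post-quantization zero test
            log.append(("zero", b))
            continue
        entropy_code(coeffs)           # steps 5-6: reconstruct and entropy-code
        log.append(("coded", b))
    return log
```

For example, if decider G fires after the second search level, the half-pixel search and the whole coding core never run, which is exactly the saving the invention claims.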
The implementation block diagrams of the decision devices L and G are shown in fig. 4 and fig. 5 and are described as follows:
The decision device L relies on a block decision criterion, which may be called the order criterion. Represent the data of an 8 × 8 block, specifically a prediction-error block in fig. 3, by the set {f(x, y) | x, y = 0, 1, …, 7}. When

$$\left(x_{i_0}+y_{i_0}\right)\cos^2\frac{\pi}{16}+\left(x_{i_1}+y_{i_1}\right)\cos^2\frac{3\pi}{16}+\left(x_{i_2}+y_{i_2}\right)\cos^2\frac{5\pi}{16}+\left(x_{i_3}+y_{i_3}\right)\cos^2\frac{7\pi}{16} < 20\,QP,$$

the current block becomes a 0 block via DCT and Q. Here $x_{i_0} \ge x_{i_1} \ge x_{i_2} \ge x_{i_3}$, where $i_0, i_1, i_2, i_3 \in \{0,1,2,3\}$ are mutually distinct and

$$x_j = \sum_{y=0}^{7} |f(j,y)| + \sum_{y=0}^{7} |f(7-j,y)|, \qquad j = 0,1,2,3,$$

and likewise $y_{i'_0} \ge y_{i'_1} \ge y_{i'_2} \ge y_{i'_3}$, where $i'_0, i'_1, i'_2, i'_3 \in \{0,1,2,3\}$ are mutually distinct and

$$y_j = \sum_{x=0}^{7} |f(x,j)| + \sum_{x=0}^{7} |f(x,7-j)|, \qquad j = 0,1,2,3.$$

QP is the quantization parameter of the macroblock containing the block, equal to half the quantization step (QP is fixed when the macroblock is coded). The flow chart of the implementation of the decision device L is accordingly shown in fig. 4.
In fig. 4, the prediction-error block {f(x, y) | x, y = 0, 1, …, 7} is processed by rows and by columns. For the rows, first compute the sum of absolute values of each row,

$$u_x = \sum_{y=0}^{7} |f(x,y)|, \qquad x = 0,1,\ldots,7,$$

then fold, $u_x \leftarrow u_x + u_{7-x}$ for $x = 0,1,2,3$, and arrange $u_0, u_1, u_2, u_3$ in descending order to obtain $x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3}$. For the columns, compute the sum of absolute values of each column,

$$v_y = \sum_{x=0}^{7} |f(x,y)|, \qquad y = 0,1,\ldots,7,$$

then fold, $v_y \leftarrow v_y + v_{7-y}$ for $y = 0,1,2,3$, and arrange the results in descending order to obtain $y_{i'_0}, y_{i'_1}, y_{i'_2}, y_{i'_3}$. Then compute

$$\mathrm{sum} = \left(x_{i_0}+y_{i_0}\right)\cos^2\frac{\pi}{16}+\left(x_{i_1}+y_{i_1}\right)\cos^2\frac{3\pi}{16}+\left(x_{i_2}+y_{i_2}\right)\cos^2\frac{5\pi}{16}+\left(x_{i_3}+y_{i_3}\right)\cos^2\frac{7\pi}{16}.$$

Finally, the comparison is carried out: is sum < 20 QP? If sum < 20 QP, the criterion is satisfied and the current prediction-error block becomes a 0 block via DCT and Q; if sum ≥ 20 QP, the criterion is not satisfied.
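A direct implementation of the decision device L as described by fig. 4 might look like this. This is a sketch: the function name `order_criterion` is ours, and the test is a sufficient condition only, which is why step (4) of the method still re-checks for 0 blocks after DCT and Q.

```python
import math

# cos^2((2k+1)*pi/16) weights for k = 0..3
WEIGHTS = [math.cos((2 * k + 1) * math.pi / 16) ** 2 for k in range(4)]

def order_criterion(f, qp):
    """Decider L (order criterion): sufficient test that the 8x8
    prediction-error block f quantizes to an all-zero block after DCT
    and Q. Returns True when the block is guaranteed to become 0."""
    # Row and column sums of absolute values
    u = [sum(abs(f[x][y]) for y in range(8)) for x in range(8)]
    v = [sum(abs(f[x][y]) for x in range(8)) for y in range(8)]
    # Fold (u_x + u_{7-x}, v_y + v_{7-y}) and sort in descending order
    x = sorted((u[j] + u[7 - j] for j in range(4)), reverse=True)
    y = sorted((v[j] + v[7 - j] for j in range(4)), reverse=True)
    s = sum((xi + yi) * w for xi, yi, w in zip(x, y, WEIGHTS))
    return s < 20 * qp
```

The cost is a handful of additions and four multiplications per block, far cheaper than an 8 × 8 DCT plus quantization, which is where the claimed speed-up comes from.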
The decision device G is implemented as follows: G makes the decision for a macroblock, and one macroblock is 6 blocks, so the macroblock decision can be decomposed into 6 block decisions, each made by the method of the decision device L above. Fig. 5 is a flow chart of an implementation of the decision device G.
In fig. 5, the current prediction-error (PE) macroblock is first calculated:
PE macroblock = current frame macroblock − current predicted frame macroblock.
Then the order criterion is applied to each of the 6 blocks in the PE macroblock to judge whether that block becomes a 0 block via DCT and Q; when all 6 blocks satisfy the order criterion, the whole macroblock satisfies the criterion.
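A sketch of the decision device G built on the order criterion: the macroblock satisfies the criterion exactly when all of its constituent blocks do. Passing the macroblock as parallel lists of 8 × 8 blocks (6 per macroblock in the standard) is a convention of this sketch, not of the patent.

```python
import math

WEIGHTS = [math.cos((2 * k + 1) * math.pi / 16) ** 2 for k in range(4)]

def block_is_zero(f, qp):
    """Order criterion for one 8x8 prediction-error block (as in decider L)."""
    u = [sum(abs(f[x][y]) for y in range(8)) for x in range(8)]
    v = [sum(abs(f[x][y]) for x in range(8)) for y in range(8)]
    x = sorted((u[j] + u[7 - j] for j in range(4)), reverse=True)
    y = sorted((v[j] + v[7 - j] for j in range(4)), reverse=True)
    return sum((xi + yi) * w for xi, yi, w in zip(x, y, WEIGHTS)) < 20 * qp

def decider_g(cur_blocks, pred_blocks, qp):
    """Decider G: the macroblock satisfies the criterion iff every one of
    its blocks does. cur_blocks and pred_blocks are parallel lists of 8x8
    blocks (6 per macroblock in the standard)."""
    for cur, pred in zip(cur_blocks, pred_blocks):
        # PE block = current block - predicted block
        pe = [[cur[i][j] - pred[i][j] for j in range(8)] for i in range(8)]
        if not block_is_zero(pe, qp):
            return False   # one non-0 block: keep searching / keep coding
    return True            # whole macroblock quantizes to 0: stop early
```

In practice one would return False as soon as the first block fails, as the loop above does, so the average cost of G is even lower than 6 full block tests.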
The coding parameters and coding speed for the simple Claire sequence and the complex Foreman sequence, both in QCIF format, on a Pentium-133 PC are given below.

Claire sequence: the quantization parameter is 5 for I frames and 7 for P frames (i.e., QP = 7 in the formula above). The integer-pixel search in motion estimation uses a two-level search: a stationary point and a small search domain with region length 5. Each level uses a sampling search with sampling interval 3, and the block-matching operation uses sub-sampling. Result: the encoding frame rate reaches an average of 25 frames/second.

Foreman sequence: the quantization parameter is 15 for both I frames and P frames (i.e., QP = 15 in the formula above). The integer-pixel search in motion estimation uses a three-level search: a stationary point, a small search domain (region length 4) and a large search domain (region length 10). Each level uses a sampling search with sampling interval 3, and the block-matching operation uses sub-sampling. Result: the encoding frame rate reaches an average of 10 frames/second.

Claims (1)

1. An improved method for implementing a video encoder, characterized by comprising the following steps:
(1) performing motion estimation on the current frame macroblock and the previous reconstructed frame macroblock, the motion estimation comprising integer-pixel search and half-pixel search; the integer-pixel search adopts a hierarchical motion search method, grading the search domain into a stationary point, a small search domain and a large search domain (3 levels is typical; as before, the implementation may vary); after the search at each level finishes, control enters the decision device G, which computes the prediction error for the motion vector obtained at the current level and judges whether the current prediction-error macroblock would become a 0 macroblock (all 6 blocks in the macroblock are 0 blocks) after DCT and Q; if so, the coding of the current macroblock ends and the next macroblock is processed; otherwise the next level of search continues; after the integer-pixel search finishes, the half-pixel search is performed, yielding the motion vector, and the coding core is entered;
(2) after entering the coding core, predicting from the previous reconstructed frame macroblock according to the macroblock motion vector MV obtained by motion estimation to obtain the current predicted frame macroblock, and then subtracting the current predicted frame macroblock from the current frame macroblock to calculate the prediction error;
(3) before DCT and Q are performed, passing the prediction error to the decision device L, which judges whether the current error block would become a 0 block after DCT and Q; if so, ending the coding of the current block and moving to the next block of the current macroblock; otherwise performing DCT and Q;
(4) since the decision device L cannot guarantee that every prediction error that DCT and Q would turn to 0 is detected in advance, retaining a decision device after DCT and Q to judge whether the data block is 0; if so, skipping subsequent processing and moving to the next block; otherwise performing entropy coding and image reconstruction on the non-0 blocks.
CN97104376A 1997-05-23 1997-05-23 Method for improving the realization of video-frequency coding device Expired - Fee Related CN1067832C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN97104376A CN1067832C (en) 1997-05-23 1997-05-23 Method for improving the realization of video-frequency coding device


Publications (2)

Publication Number Publication Date
CN1200629A CN1200629A (en) 1998-12-02
CN1067832C true CN1067832C (en) 2001-06-27

Family

ID=5167313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN97104376A Expired - Fee Related CN1067832C (en) 1997-05-23 1997-05-23 Method for improving the realization of video-frequency coding device

Country Status (1)

Country Link
CN (1) CN1067832C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4151374B2 (en) 2002-03-29 2008-09-17 セイコーエプソン株式会社 Moving picture coding apparatus and moving picture coding method
CN100366091C (en) * 2004-06-24 2008-01-30 华为技术有限公司 A video compression method
KR100723507B1 (en) * 2005-10-12 2007-05-30 삼성전자주식회사 Adaptive Quantization Controller and Adaptive Quantization Control Method for Video Compression Using I-frame Motion Prediction
JP2007166039A (en) * 2005-12-09 2007-06-28 Matsushita Electric Ind Co Ltd Image encoding device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1131881A (en) * 1994-12-13 1996-09-25 汤姆森多媒体公司 Method for selecting motion vectors and image processing device implementing the said method


Also Published As

Publication number Publication date
CN1200629A (en) 1998-12-02

Similar Documents

Publication Publication Date Title
CN1201590C (en) Video coding method using block matching process
CN100348051C (en) An enhanced in-frame predictive mode coding method
CN102137263A (en) Distributed video coding and decoding methods based on classification of key frames of correlation noise model (CNM)
US7764740B2 (en) Fast block mode determining method for motion estimation, and apparatus thereof
CN101064849A (en) Dynamic image coding method, apparatus and computer readable record medium
CN1209928C (en) Inframe coding frame coding method using inframe prediction based on prediction blockgroup
CN1613259A (en) Method and system for detecting intra-coded pictures and for extracting intra DCT precision and macroblock-level coding parameters from uncompressed digital video
CN101841713A (en) Video coding method for reducing coding code rate and system
CN1194544C (en) Video Coding Method Based on Time-Space Domain Correlation Motion Vector Prediction
CN1212014C (en) Video coding method based on time-space domain correlation quick movement estimate
CN1708134A (en) Method and apparatus for estimating motion
CN100338957C (en) Complexity hierarchical mode selection method
CN1067204C (en) Global decision method for video frequency coding
CN1067832C (en) Method for improving the realization of video-frequency coding device
CN1225915C (en) Method for detecting noise in coded video data stream
CN107343199B (en) Rapid adaptive compensation method for sampling points in HEVC (high efficiency video coding)
CN1665299A (en) Method for designing architecture of scalable video coder decoder
CN1263309C (en) Motion vector prediction method used for video coding
CN1625266A (en) Apparatus for calculating absolute difference value, and motion estimation apparatus and motion picture encoding apparatus
WO2002032143A2 (en) Compression of motion vectors
AU2001293994A1 (en) Compression of motion vectors
CN1941914A (en) Method and apparatus for predicting DC coefficient in transform domain
CN113163199B (en) H265-based video rapid prediction method, rapid coding method and system
CN1574964A (en) Method and device for compressing image data
CN1809167A (en) Quick inter-frame forecast mode selection method

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee