HK1132117B - A method and system for processing video data - Google Patents
Description
Technical Field
The present invention relates to digital video processing and, more particularly, to a method and system for motion compensated picture rate up-conversion (PRUC) using information extracted from a compressed video stream.
Background
Video display technology is undergoing a revolution: flat panel displays based on Liquid Crystal Displays (LCDs) or Plasma Display Panels (PDPs) are replacing the Cathode Ray Tube (CRT) technology that dominated the display field for more than half a century. A notable characteristic of these displays is that images are shown on a flat panel screen by progressive scanning at a high picture rate. This new display technology may also accelerate the transition from Standard Definition Television (SDTV) to High Definition Television (HDTV). However, conventional legacy video compression systems still use low picture rate formats and cannot optimally present legacy video on these modern displays.
Limited channel capacity can also force images to be displayed at low picture rates. For example, consider a 30 Hz video sequence broadcast over a mobile network to a mobile terminal, such as a mobile phone, which receives the encoded sequence from a server. Because of bandwidth limitations, only a low bit-rate video sequence can be transmitted, so the encoder drops two of every three pictures before transmission and the picture rate of the delivered sequence is only 10 Hz. Although the terminal can display 30 Hz video, it must perform picture rate conversion during display because the received video is only 10 Hz.
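As a back-of-envelope illustration of the 10 Hz to 30 Hz example above, the temporal positions of the pictures a rate up-converter must synthesize between each pair of received pictures can be computed as follows. This is a sketch assuming the display rate is an integer multiple of the received rate; the function name is ours, not the patent's.

```python
def interpolation_positions(source_hz, display_hz):
    """Return the fractional temporal positions (0 < k < 1) of the
    pictures that must be interpolated between each pair of received
    pictures to convert source_hz up to display_hz.

    Assumes display_hz is an integer multiple of source_hz."""
    ratio = display_hz // source_hz  # output pictures per source interval
    return [i / ratio for i in range(1, ratio)]

# 10 Hz received video shown on a 30 Hz display: two interpolated
# pictures, at 1/3 and 2/3 of each source picture interval.
positions = interpolation_positions(10, 30)
```

At these positions k the up-converter synthesizes new pictures rather than simply repeating the previous one.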
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings.
Disclosure of Invention
A system and/or method for motion compensated picture rate up-conversion (PRUC) using information extracted from a compressed video stream, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
According to an aspect of the present invention, there is provided a video data processing method, the method comprising:
extracting picture rate up-conversion (PRUC) data from the received compressed video data;
generating a plurality of interpolated images based on the extracted PRUC data.
Preferably, the extracted PRUC data includes one or more of a block motion vector, a block coding mode, a quantization level, quantized residual data, and/or a decoded image.
Preferably, the method comprises: generating the decoded image based on decompression of the received compressed data.
Preferably, the method comprises: filtering the decoded image to reduce noise.
Preferably, the method comprises: generating one or more motion vectors based on the block motion vectors.
Preferably, the generated one or more motion vectors comprise at least one of: one or more local motion vectors and/or a global motion vector.
Preferably, the method comprises: accumulating a plurality of the block motion vectors to generate the global motion vector.
Preferably, the method comprises: scaling the generated one or more motion vectors.
Preferably, the method comprises generating, based on the quantized residual data, at least one of: a confidence value and/or a consistency value of the generated one or more motion vectors.
Preferably, the method comprises: performing motion compensation on the generated plurality of interpolated images.
Preferably, the method comprises: performing non-linear filtering on the plurality of motion compensated interpolated images.
Preferably, said extracting picture rate up-conversion (PRUC) data from received compressed video data occurs while the received compressed video data is being decompressed.
According to one aspect of the present invention, there is provided a video data processing system, the system comprising:
one or more circuits for extracting picture rate up-conversion (PRUC) data from received compressed video data;
the one or more circuits generate a plurality of interpolated images based on the extracted PRUC data.
Preferably, the extracted PRUC data includes one or more of a block motion vector, a block coding mode, a quantization level, quantized residual data, and/or a decoded image.
Preferably, the one or more circuits generate the decoded image based on decompression of the received compressed data.
Preferably, the one or more circuits filter the decoded image to reduce noise.
Preferably, the one or more circuits generate one or more motion vectors based on the block motion vector.
Preferably, the generated one or more motion vectors comprise at least one of: one or more local motion vectors and/or a global motion vector.
Preferably, the one or more circuits accumulate a plurality of the block motion vectors to generate the global motion vector.
Preferably, the one or more circuits generate one or more pixel motion vectors based on scaling the generated one or more motion vectors.
Preferably, the one or more circuits generate, based on the quantized residual data, at least one of: a confidence value and/or a consistency value of the generated one or more motion vectors.
Preferably, the one or more circuits motion compensate the generated plurality of interpolated images.
Preferably, the one or more circuits perform non-linear filtering on the motion compensated plurality of interpolated images.
Preferably, said extracting picture rate up-conversion (PRUC) data from received compressed video data occurs while the received compressed video data is being decompressed.
Various advantages, aspects and novel features of the invention, as well as details of an illustrated embodiment thereof, will be more fully described with reference to the following description and drawings.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a block diagram of a video processing system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a picture rate up-conversion system according to an embodiment of the present invention;
FIG. 3a is a schematic diagram illustrating image interpolation between two images according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a motion vector of an interpolated image according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating the steps of motion compensated picture rate up-conversion using information extracted from a compressed video stream according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention relate to systems and/or methods for motion compensated picture rate up-conversion (PRUC) using information extracted from a compressed video stream. The method includes extracting PRUC data from a compressed video data stream while the compressed video data stream is being decompressed by a video decompression engine. The PRUC data includes, for example, a block motion vector, a block coding mode, a quantization level, quantized residual data, and/or a decoded picture. However, the extracted PRUC data is not limited to the above. In addition, the method further includes generating a plurality of interpolated images based on the extracted PRUC data.
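Purely as an illustration, the PRUC side information enumerated above might be gathered in a structure like the following sketch. The field names and types are our assumptions for exposition; they are not taken from the patent or from any real decoder API.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class PRUCData:
    """Hypothetical bundle of the side information a PRUC engine could
    tap from a video decompression engine while the stream is decoded."""
    block_motion_vectors: List[Tuple[int, int]]  # one (dx, dy) per coded block
    block_coding_modes: List[str]                # e.g. "inter" or "intra"
    quantization_level: int                      # quantizer step for the picture
    quantized_residual: List[int]                # quantized residual coefficients
    decoded_picture: Any                         # the reconstructed picture itself

# Example: side information captured for one decoded picture.
info = PRUCData(
    block_motion_vectors=[(2, 0), (2, 1)],
    block_coding_modes=["inter", "inter"],
    quantization_level=28,
    quantized_residual=[0, 1, -1],
    decoded_picture=None,
)
```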
Fig. 1 is a block diagram of a video processing system according to an embodiment of the present invention. As shown in fig. 1, the system includes a video processing unit 102, a processor 104, a memory 106, an encoder 118, and a data/control bus 108. The video processing unit 102 includes registers 110 and a filter 116. In some cases, the video processing unit 102 may also include an input buffer 112 and/or an output buffer 114. The video processing unit 102 may comprise suitable logic, circuitry, and/or code that may enable filtering of pixels of a video image, or of a video image from a video input stream, to reduce noise. For example, video frame pictures may be used in a video system with a progressive video signal, while video field pictures are used in a video system with an interlaced video signal. Video fields alternate in parity between top and bottom fields, and the top and bottom fields in an interlaced system may be deinterlaced and combined to generate a video frame.
The video processing unit 102 may be configured to receive a video input stream and, in some cases, may buffer at least a portion of the received video input in the input buffer 112. Accordingly, the input buffer 112 may comprise suitable logic, circuitry, and/or code that may enable storing of at least a portion of a received video input stream. Similarly, the video processing unit 102 may generate a filtered video output stream to the video decoder, and in some cases, the video processing unit 102 buffers at least a portion of the filtered video output stream in the output buffer 114. Accordingly, the output buffer 114 may comprise suitable logic, circuitry, and/or code that may enable storage of at least a portion of the filtered video output stream.
The filter 116 in the video processing unit 102 may comprise suitable logic, circuitry, and/or code that may enable a filtering operation to be performed on the current pixel to reduce noise. Thus, the filter 116 may have multiple filtering modes, each corresponding to one supported filtering operation. The filter 116 may utilize the video content, filter coefficients, threshold levels, and/or constants to generate respective filtered video output streams according to the selected filtering mode. Accordingly, the video processing unit 102 may generate a corresponding blending factor according to the selected appropriate filtering mode. The registers 110 in the video processing unit 102 may comprise suitable logic, circuitry, and/or code that may enable storage of information corresponding to filter coefficients, threshold levels, and/or constants. Additionally, register 110 may also store information related to the selected filtering mode.
The processor 104 may comprise suitable logic, circuitry, and/or code that may enable processing data and/or performing system control operations. The processor 104 may be used to control at least a portion of the operations in the video processing unit 102. For example, the processor 104 may generate at least one signal to control the selection of the filtering mode in the video processing unit 102. Further, processor 104 may program, update, and/or modify filter coefficients, threshold levels, and/or constants for at least a portion of registers 110. For example, the processor 104 may generate at least one signal to retrieve filter coefficients, threshold levels, and/or constants stored in the memory 106 and transfer the retrieved information into the register 110 via the data/control bus 108.
The memory 106 may comprise suitable logic, circuitry, and/or code that may enable storage of information utilized by the video processing unit 102 in noise filtering a video input stream. The memory 106 may be used to store filter coefficients, threshold levels, and/or constants utilized by the video processing unit 102.
Encoder 118 may be used to receive and process a plurality of statistical inputs from processor 104 and video processing unit 102. The encoder 118 may also generate an encoded compressed video stream by encoding the filtered video output stream.
In operation, the processor 104 may select a filtering mode and may program the selected filtering mode into the registers 110 in the video processing unit 102. Further, the processor 104 may also write appropriate filter coefficients, threshold levels, and/or constants into the registers 110 according to the selected filtering mode. The video processing unit 102 receives an input video stream and filters pixels in the video images according to the selected filtering mode. In some cases, the video input stream is stored in the input buffer 112 before being processed. The video processing unit 102 generates appropriate averaging coefficients for the noise reduction filtering operation selected by the processor 104 and, after performing that operation, generates a filtered video output stream. In some cases, the filtered video output stream is stored in the output buffer 114 before being transmitted from the video processing unit 102.
The processor 104 determines the operation mode of various portions of the video processing unit 102. For example, the processor 104 may configure data registers in the video processing unit 102 to DMA-transfer video data to the memory 106. The processor 104 also transmits instructions to the image sensor to initiate image capture. The memory 106 may be used to store image data processed and transferred by the processor 104, to store code and/or data used by the processor 104, and to store other functional data of the video processing unit 102, for example, data related to voice communications. The processor 104 may include a state machine for determining whether the type of video data is interlaced or progressive.
Fig. 2 is a block diagram of a picture rate up-conversion system according to an embodiment of the invention. The video decoding system 200 shown in fig. 2 includes a decompression engine 202 and a picture rate up-conversion (PRUC) engine 204. The decompression engine 202 includes an entropy decoder 206, an inverse quantization module 208, an inverse transform module 210, an accumulator 212, and a motion compensated prediction module 214.
The PRUC engine 204 further includes: a pixel motion vector generation module 216, a Motion Vector Confidence and Consistency Metric (MVCCM) module 222, a motion compensated interpolation module 224, a noise reduction filter 226, and a non-linear filtering module 228. Wherein the pixel motion vector generation module 216 includes a block motion vector refinement module 218 and a scaling module 220.
The decompression engine 202 may be a video decoder associated with a particular video standard, such as MPEG-2, H.264/MPEG-4 AVC, VC-1, or VP6. The entropy decoder 206 may comprise suitable logic, circuitry, and/or code that may enable receiving a compressed video stream from a video encoder, such as the encoder 118. The entropy decoder 206 may perform decoding operations on the received compressed video stream according to a particular video standard, such as MPEG-2, H.264/MPEG-4 AVC, VC-1, or VP6. The entropy decoder 206 may also generate block motion vectors based on decoding the received compressed video stream.
The inverse quantization module 208 may comprise suitable logic, circuitry, and/or code that may enable generation of quantized residual data. The inverse transform module 210 may comprise suitable logic, circuitry, and/or code that may enable generation of a recomposed residual pixel and may enable communication of the generated recomposed residual pixel to the accumulator 212.
The motion compensated prediction module 214 may comprise suitable logic, circuitry, and/or code that may be enabled to receive one or more motion vectors from the entropy decoder 206 to generate a block of motion compensated pixels. Accumulator 212 may be used to superimpose the motion compensated pixel block onto the recomposed residual pixels to generate one or more decoded images. The one or more decoded pictures are fed back to the motion compensated prediction module 214. The motion compensated prediction module 214 may be configured to generate a block of motion compensated pixels from a reference image or a previous output image based on one or more received motion vectors from the entropy decoder 206.
The PRUC engine 204 may be used to extract information such as motion vectors, pictures, macroblock coding types, and quantized residual data from the video decompression engine 202.
The noise reduction filter 226 may comprise suitable logic, circuitry, and/or code that may enable receiving a plurality of decoded images from the decompression engine 202. The noise reduction filter 226 may perform de-blocking, de-ringing, or other noise reduction filtering operations on the received decoded images. The noise reduction filter 226 may generate filtered outputs to the pixel motion vector generation module 216, the motion compensated interpolation module 224, and the non-linear filtering module 228.
Distortion of the block transform domain distribution or spectrum by the quantizer can produce blocking artifacts (blockiness). Blocking artifacts are typically related to quantization distortion of the low spectral coefficients, i.e. low frequencies. They are visible at block boundaries, e.g. 8x8 pixels for MPEG-1, MPEG-2 and MPEG-4, and either 4x4 or 8x8 pixels for MPEG-4 Part 10 (AVC), and may be observed within flat regions of a given image or video.
Similarly, distortion of the block transform domain distribution or spectrum by the quantizer can produce ringing artifacts, also known as mosquito artifacts. Ringing artifacts are typically related to quantization distortion of the high spectral coefficients, i.e. high frequencies, and may be observed at the edges of flat regions or at text boundaries.
The pixel motion vector generation module 216 may comprise suitable logic, circuitry, and/or code that may be enabled to receive the block motion vectors and coding modes extracted by the entropy decoder 206. The pixel motion vector generation module 216 is configured to determine local block motion vectors and global block motion vectors and to select interpolation and filtering modes. The pixel motion vector generation module 216 may also estimate a Global Motion Vector (GMV) by accumulating a plurality of block motion vectors, for example by classifying the motion vectors with a histogram.
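One simple reading of the histogram-based global motion estimate described above is to take the mode of the block motion vector histogram. The sketch below is illustrative only; a real implementation might use weighted accumulation or robust averaging instead of a plain mode.

```python
from collections import Counter

def estimate_global_motion_vector(block_mvs):
    """Estimate a global motion vector (GMV) as the most frequent
    entry in the block motion vector histogram."""
    if not block_mvs:
        return (0, 0)  # no motion information available
    histogram = Counter(block_mvs)          # (dx, dy) -> occurrence count
    gmv, _count = histogram.most_common(1)[0]
    return gmv

# A panning scene: most blocks share the camera motion (4, 0), with a
# few stationary blocks and a small moving object.
mvs = [(4, 0)] * 20 + [(0, 0)] * 3 + [(7, -2)] * 2
gmv = estimate_global_motion_vector(mvs)
```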
The block motion vector refinement module 218 may comprise suitable logic, circuitry and/or code that may enable refinement of motion vectors extracted from a compressed video stream and decomposition of block motion vectors into pixel motion vectors. The block motion vector refinement module 218 may perform a local refinement search and the motion vectors can be refined to sub-pixel precision.
The scaling module 220 may comprise suitable logic, circuitry, and/or code that may enable scaling of the generated motion vectors for the interpolated images. The pixel motion vector generation module 216 may generate the pixel motion vectors using locally adaptive non-linear filtering. In addition, the pixel motion vector generation module 216 may also measure the consistency of the local motion vectors.
The Motion Vector Confidence and Consistency Metric (MVCCM) module 222 may comprise suitable logic, circuitry, and/or code and may be adapted to measure the extracted quantized residual data and the quantization level. The MVCCM module 222 may also generate a motion vector consistency value by comparing neighboring block motion vectors and motion compensated block boundary pixel difference values. For example, a lower quantization level and less residual data may result in a higher motion vector confidence, while a higher quantization level and more residual data may result in a lower motion vector confidence. The MVCCM module 222 passes the generated motion vector confidence value and the motion vector consistency value to the non-linear filtering module 228.
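The stated relationship, that a lower quantization level and less residual data yield higher motion vector confidence, can be illustrated with a toy metric. The linear form and the normalizing constants below are our assumptions for exposition, not taken from the patent.

```python
def motion_vector_confidence(quantization_level, residual_energy,
                             max_q=51.0, max_residual=1.0):
    """Toy confidence in [0, 1]: high when both the quantizer step and
    the residual energy are small, low when either is large.
    max_q=51 mirrors the H.264 QP range; max_residual is an assumed
    normalization constant."""
    q_term = 1.0 - min(quantization_level / max_q, 1.0)
    r_term = 1.0 - min(residual_energy / max_residual, 1.0)
    return 0.5 * (q_term + r_term)

# Finely quantized block with a small residual: trustworthy vector.
good = motion_vector_confidence(10, 0.1)
# Coarsely quantized block with a large residual: dubious vector.
bad = motion_vector_confidence(40, 0.8)
```

A downstream non-linear filter could use such a value to decide how much weight to give motion compensated interpolation versus simple picture repetition.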
The motion compensated interpolation module 224 may comprise suitable logic, circuitry, and/or code that may enable generation of an interpolated image by combining the scaled local and global motion vectors with the denoised decoded images. The motion compensated interpolation module 224 passes the generated interpolated image to the non-linear filtering module 228.
The non-linear filtering module 228 may comprise suitable logic, circuitry, and/or code that may enable filtering of the received interpolated image to reduce artifacts in the final output interpolated image. The non-linear filtering module 228 can determine whether the motion compensated interpolation will fail by using the motion vector confidence and the consistency measure. If the non-linear filtering module 228 determines that the motion compensated interpolation fails, the PRUC engine 204 turns off the image interpolation operation during the scene change and repeats the previous image.
In operation, the decompression engine 202 may receive a compressed video stream at a low picture rate and perform a decompression operation on it. The PRUC engine 204 performs a picture rate up-conversion (PRUC) operation using motion vectors and other coding information extracted from the compressed video stream. The PRUC engine 204 is capable of generating interpolated images at a high picture rate for progressive scan presentation on a modern video display, such as an LCD or PDP screen.
Digital video compression algorithms, such as MPEG-2, MPEG-4, VC-1, and VP6, can perform forward, backward, and bi-directional prediction coding to generate P and B pictures. Motion compensated predictive coding exploits the temporal correlation between successive pictures. The video compression encoder 118 may generate motion vectors (MVs) between images within the allowed temporal window, and these motion vectors are used for motion compensation in video compression encoding and decoding operations. In the compressed video stream, motion compensation information, such as macroblocks, may include encoded motion vector data as well as transformed residual data.
An artifact known as motion judder occurs when the picture rate of the video stream is low. Judder arises because the temporal sampling rate is lower than the actual speed of motion in the scene. The motion compensated interpolation module 224 may be used to reduce this judder by adapting the processing of the picture rate converter so that moving objects are rendered where the eye expects to track them. An image in which judder is eliminated therefore appears sharp and well defined. The PRUC engine 204 may analyze the input image stream to determine how each object in the scene is moving, and may interpolate the positions of multiple objects at different points in time to generate an output image.
The PRUC engine 204 may also interpolate additional intermediate pictures between the coded pictures, replacing the picture repetition employed previously. Motion compensated interpolation is similar to the generation of predicted pictures, such as P and B pictures, during video compression. According to an embodiment of the present invention, the PRUC engine 204 does not need any transmitted motion vectors or residual data to generate the one or more interpolated images: a display device may perform its own PRUC from the compressed video stream it receives, without any additional side information.
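The analogy with bi-directional prediction can be made concrete for a single pixel: the values fetched (via the scaled motion vectors) from the previous and next decoded pictures are blended in proportion to the interpolated picture's temporal position. This is a minimal sketch of two-sided motion compensated interpolation, not the patent's exact method.

```python
def motion_compensated_blend(prev_pixel, next_pixel, k):
    """Blend co-located motion compensated pixel values from the
    previous and next pictures. k (0 < k < 1) is the interpolated
    picture's fractional temporal distance from the previous picture,
    so the previous picture contributes weight (1 - k) and the next
    picture contributes weight k."""
    return (1.0 - k) * prev_pixel + k * next_pixel

# A picture one third of the way from P1 to P2 takes two thirds of its
# value from P1 and one third from P2.
val = motion_compensated_blend(90.0, 120.0, 1.0 / 3.0)
```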
For independent macroblocks without motion vectors, e.g., intra macroblocks, interpolated motion vectors may be employed. According to an embodiment of the present invention, the PRUC engine 204 may turn off frame interpolation during a scene change and repeat the previous frame instead. The non-linear filtering module 228 may employ motion adaptive weighted median filtering to generate interpolated images between an I picture and the previous P picture.
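A weighted median of the kind used in motion adaptive weighted median filtering can be sketched as follows. The integer-weight formulation (repeat each sample by its weight, then take the ordinary median) is a simplification for illustration; how the weights are derived from motion adaptivity is not shown.

```python
def weighted_median(values, weights):
    """Weighted median with non-negative integer weights: each sample
    is repeated `weight` times and the median of the expanded,
    sorted list is returned."""
    expanded = []
    for v, w in zip(values, weights):
        expanded.extend([v] * w)
    expanded.sort()
    return expanded[len(expanded) // 2]

# Three candidate pixel values; the middle candidate is weighted most
# heavily, so it dominates the median.
result = weighted_median([10, 20, 30], [1, 3, 1])
```

The median's rank-order nature is what lets such a filter reject outlier candidates (e.g. a badly motion compensated sample) instead of averaging them in.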
FIG. 3a is a schematic diagram illustrating image interpolation between two images according to an embodiment of the present invention. As shown in fig. 3a, the positions of a plurality of encoded images, for example, P1 302 and P2 304, and of the interpolated image 306 are shown. For example, the interpolated image 306 is inserted at a position k time units from the encoded image P1 302.
FIG. 3b is a schematic diagram of the motion vectors of an interpolated image according to an embodiment of the present invention. As shown in fig. 3b, a plurality of encoded images, for example, P1 352 and P2 354, and the interpolated image 356 are shown. For example, the interpolated image 356 is inserted at a position k time units from the encoded image P1 352.
The motion vector 358 points from a region in the previous image P1 352 to a region in the next image P2 354, so that the motion vector 358 captures the motion that occurs between the two original images P1 352 and P2 354. Motion vector 360 is a shifted version of motion vector 358, shifted to align with the interpolated image 356.
Motion vector 360 may be divided into two motion vectors, e.g., MV1 362 and MV2 364. Each estimated motion vector, e.g., motion vector 360, may be split and scaled for use in motion compensated interpolation. The two scaled motion vectors, e.g., MV1 362 and MV2 364, point in opposite directions. The length of MV1 362 is proportional to the time difference between the interpolated image 356 and the original image P1 352, and the length of MV2 364 is proportional to the time difference between the interpolated image 356 and the original image P2 354.
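The splitting and scaling described above can be sketched as follows: for an interpolated picture at fractional position k between P1 and P2, one scaled vector points back toward P1 with length proportional to k, and the other points forward toward P2 with length proportional to (1 - k). The function below is an illustrative sketch under those stated proportions.

```python
def split_motion_vector(mv, k):
    """Split an estimated motion vector between two coded pictures into
    the two opposed, scaled vectors used for motion compensated
    interpolation. k (0 < k < 1) is the interpolated picture's
    fractional temporal position measured from the earlier picture P1."""
    dx, dy = mv
    mv1 = (-k * dx, -k * dy)                # toward P1, length proportional to k
    mv2 = ((1.0 - k) * dx, (1.0 - k) * dy)  # toward P2, length proportional to 1 - k
    return mv1, mv2

# A picture interpolated midway (k = 0.5) splits a (6, -2) motion
# vector into two equal and opposite halves.
mv1, mv2 = split_motion_vector((6, -2), 0.5)
```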
Fig. 4 is a flowchart illustrating the steps of performing motion compensated picture rate up-conversion (PRUC) using information extracted from a compressed video stream according to an embodiment of the present invention. As shown in fig. 4, the flow begins at step 402. In step 404, the decompression engine 202 receives compressed video data from the encoder 118. In step 406, the PRUC engine 204 extracts PRUC data from the compressed video data while the compressed video data is being decompressed by the decompression engine 202. The PRUC data includes a local block motion vector, a block coding mode, a quantization level, quantized residual data, and a decoded image, but the extracted PRUC data is not limited thereto. In step 408, the noise reduction filter 226 performs a digital noise reduction filtering operation on the extracted decoded image.
In step 410, the pixel motion vector generation module 216 receives the plurality of block motion vectors from the video decompression engine 202 and generates a pixel motion vector based on the refinement and scaling of the received plurality of block motion vectors. In step 412, the MVCCM module 222 generates a motion vector confidence and a consistency measure. In step 414, the motion compensated interpolation module 224 generates an interpolated image by performing a motion compensated interpolation operation. In step 416, the non-linear filtering module 228 detects the scene change and performs a filtering operation on the interpolated image to reduce artifacts in the final output interpolated image. The flow ends in step 418.
Embodiments of the present invention include a method and system for motion compensated picture rate up-conversion (PRUC) using information extracted from a compressed video stream, including a PRUC engine 204 for extracting PRUC data from a compressed video data stream while the compressed video stream is being decompressed by a video decompression engine 202. The PRUC data includes, for example, a local block motion vector, a block coding mode, a quantization level, quantized residual data, and a decoded image. The extracted PRUC data is not limited thereto. The PRUC engine may generate a plurality of interpolated images based on the extracted PRUC data.
The decompression engine 202 may generate a decoded image based on a decompression operation performed on the compressed video data stream. The PRUC engine 204 may include a pixel motion vector generation module 216, a Motion Vector Confidence and Consistency Measure (MVCCM) module 222, a motion compensation interpolation module 224, a noise reduction filter 226, and a non-linear filtering module 228. Wherein the pixel motion vector generation module 216 includes a block motion vector refinement module 218 and a scaling module 220.
The noise reduction filter 226 may perform a digital noise reduction filtering operation on the extracted decoded image to reduce noise. Pixel motion vector generation module 216 may generate one or more motion vectors based on the received block motion vectors. The generated motion vector may include one or more local motion vectors as well as a global motion vector. Pixel motion vector generation module 216 may generate a global motion vector by accumulating a plurality of block motion vectors.
The scaling module 220 is configured to scale the generated motion vectors. The pixel motion vector generation module 216 can generate pixel motion vectors based on this scaling. The MVCCM module 222 generates at least one of a confidence value and a consistency value for the motion vectors based on the quantized residual data extracted from the video decompression engine 202.
The motion compensated interpolation module 224 is configured to generate a motion compensated interpolated image. The nonlinear filtering module 228 performs a filtering operation on the generated interpolated image to reduce artifacts in the output interpolated image.
Another embodiment of the invention includes a machine-readable storage having stored thereon a computer program. The program comprises at least one code segment for motion compensated picture rate up-conversion using information extracted from a compressed video stream, the at least one code segment being executable by a machine to enable the machine to perform the method steps described herein.
Accordingly, the present invention may be implemented in hardware, software, firmware, or various combinations thereof. The present invention can be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware, software and firmware may be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. The computer program in this document refers to: any expression, in any programming language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to other languages, codes or symbols; b) reproduced in a different format. However, other meanings of computer program that can be understood by those skilled in the art are also encompassed by the present invention.
While the invention has been described with reference to several particular embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed, but that the invention include all embodiments falling within the scope of the appended claims.
Claims (10)
1. A method of video data processing, the method comprising:
extracting image rate up-conversion data from received compressed video data;
generating a plurality of interpolation images based on the extracted image rate up-conversion data, wherein the extracted image rate up-conversion data comprises a block motion vector and quantized residual data;
generating one or more motion vectors based on the block motion vector;
generating, based on the quantized residual data, a confidence value and a consistency value for the generated one or more motion vectors;
performing motion compensation on the generated plurality of interpolation images;
and performing nonlinear filtering on the plurality of motion-compensated interpolation images.
2. The method of claim 1, wherein the extracting of image rate up-conversion data from the received compressed video data occurs while the received compressed video data is decompressed.
3. The method of claim 2, wherein the method comprises: generating a decoded image based on decompression of the received compressed data.
4. The method of claim 2, wherein the method comprises: filtering the decoded image to reduce noise.
5. The method of claim 2, wherein the method comprises: scaling the generated one or more motion vectors.
6. The method of claim 5, wherein the generated one or more motion vectors comprise at least one of: one or more local motion vectors and a global motion vector.
7. A video data processing system, the system comprising:
first means for extracting image rate up-conversion data from received compressed video data;
second means for generating a plurality of interpolation images based on the extracted image rate up-conversion data, wherein the extracted image rate up-conversion data comprises a block motion vector and quantized residual data;
third means for generating one or more motion vectors based on the block motion vector;
fourth means for generating, based on the quantized residual data, a confidence value and a consistency value for the generated one or more motion vectors;
and a fifth means for performing motion compensation on the generated plurality of interpolation images and performing nonlinear filtering on the motion-compensated plurality of interpolation images.
8. The system of claim 7, wherein the extracting of image rate up-conversion data from the received compressed video data occurs while the received compressed video data is decompressed.
9. The system of claim 8, further comprising:
sixth means for generating a decoded image based on decompression of the received compressed data.
10. The system of claim 8, further comprising:
seventh means for filtering the decoded image to reduce noise.
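To make the claimed motion-vector post-processing concrete, the following sketch scales a decoded block motion vector to the interpolation instant and derives confidence and consistency values. The specific heuristics used here (residual-energy-based confidence, neighbour-distance-based consistency) are illustrative assumptions, not the measures defined by the claims, and all names are hypothetical:

```python
import numpy as np

def scale_motion_vector(mv, t):
    """Scale a decoded block motion vector to the temporal position
    t (0 < t < 1) of the interpolated picture between its anchors."""
    return (mv[0] * t, mv[1] * t)

def mv_confidence(residual_block):
    """Heuristic confidence value: low quantized-residual energy
    suggests the coded motion vector tracked the block well."""
    energy = float(np.sum(np.asarray(residual_block, dtype=np.float64) ** 2))
    return 1.0 / (1.0 + energy)

def mv_consistency(mv, neighbour_mvs):
    """Heuristic consistency value: agreement of a motion vector with
    its spatial neighbours (inverse of the mean Euclidean distance)."""
    if not neighbour_mvs:
        return 1.0
    d = np.mean([np.hypot(mv[0] - n[0], mv[1] - n[1]) for n in neighbour_mvs])
    return 1.0 / (1.0 + float(d))
```

For example, halving a motion vector (4, -2) for a midpoint interpolation gives (2.0, -1.0), and a block with an all-zero residual receives the maximum confidence of 1.0.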
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/931,942 | 2007-10-31 | ||
US11/931,942 US8767831B2 (en) | 2007-10-31 | 2007-10-31 | Method and system for motion compensated picture rate up-conversion using information extracted from a compressed video stream |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1132117A1 HK1132117A1 (en) | 2010-02-12 |
HK1132117B true HK1132117B (en) | 2011-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9247250B2 (en) | Method and system for motion compensated picture rate up-conversion of digital video using picture boundary processing | |
US8218638B2 (en) | Method and system for optical flow based motion vector estimation for picture rate up-conversion | |
KR101056096B1 (en) | Method and system for motion compensated frame rate up-conversion for both compression and decompression video bitstreams | |
US9462296B2 (en) | Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams | |
CN102017615B (en) | Boundary artifact correction within video units | |
US6931062B2 (en) | Decoding system and method for proper interpolation for motion compensation | |
JP2006041943A (en) | Motion vector detection / compensation device | |
JP5529161B2 (en) | Method and apparatus for browsing video streams | |
US20130201405A1 (en) | Method and System for Adaptive Temporal Interpolation Filtering for Motion Compensation | |
TWI486061B (en) | Method and system for motion compensated picture rate up-conversion using information extracted from a compressed video stream | |
US6804303B2 (en) | Apparatus and method for increasing definition of digital television | |
JP2005278168A (en) | Method and system for processing compressed input video | |
US8848793B2 (en) | Method and system for video compression with integrated picture rate up-conversion | |
HK1132117B (en) | A method and system for processing video data | |
US8270773B2 (en) | Image processing apparatus and image processing method | |
JP4779207B2 (en) | Motion vector conversion apparatus and motion vector conversion method | |
JP4556286B2 (en) | Motion vector conversion apparatus and method | |
JP2001309389A (en) | Device and method for motion vector conversion | |
HK1141377B (en) | A method and system for processing signal |