TWI428024B - Frame rate up conversion system and method - Google Patents
- Publication number
- TWI428024B (application TW099114570A)
- Authority
- TW (Taiwan)
- Prior art keywords
- block, line, frame, buffer, current
Links
- 238000000034 method Methods 0.000 title claims description 19
- 238000006243 chemical reaction Methods 0.000 title description 4
- 239000000872 buffer Substances 0.000 claims description 45
- 238000009499 grossing Methods 0.000 claims description 15
- 230000009471 action Effects 0.000 claims description 10
- 238000010586 diagram Methods 0.000 description 22
- 230000000694 effects Effects 0.000 description 6
- 230000008439 repair process Effects 0.000 description 5
- 238000001914 filtration Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000000116 mitigating effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Landscapes
- Television Systems (AREA)
Description
The present invention relates to frame rate up conversion, and more particularly to spatial interpolation and smoothing of interpolated frames.
Frame rate up conversion (FRUC) is commonly applied in digital video display (for example, digital television). It generates one or more interpolated frames between two adjacent original frames to raise the display frame rate, for example from 60 Hz to 120 Hz or even 240 Hz. Interpolated frames are usually produced by motion-compensated (MC) interpolation. FIG. 1 shows a block-based motion estimation/compensation technique that generates an interpolated frame from the previous frame A and the current frame B. First, the motion of each macroblock (MB) of the current frame B relative to the corresponding macroblock of the previous frame A is estimated. The interpolated frame is then obtained from the estimated motion.
Interpolated frames produced by block-based motion compensation often contain broken (or cracked) regions, in which no motion vector is available. Moreover, block-based motion compensation typically exhibits side effects along the boundaries between adjacent blocks. To overcome the broken-region problem, conventional systems and methods use line buffers to store the pixels of the current block together with certain pixels of the previous and next blocks. For an 8x8 block-based system or method, for example, ten line buffers are required: eight for the lines of the current block, one for the last line of the previous block, and one for the first line of the next block. Because accessing the pixels of ten line buffers is time-consuming, such conventional systems or methods are unsuitable for real-time video display. In addition, ten line buffers increase circuit area and cost.
Because conventional systems and methods cannot effectively resolve the broken-region problem and the boundary effect, there is a need for a novel system and method that produces interpolated frames free of broken regions and boundary effects in an economical and efficient manner.
In view of the above, one object of the embodiments of the present invention is to provide a frame rate up conversion (FRUC) system and method that uses fewer buffer resources than conventional approaches to repair and smooth the generated interpolated frames.
According to an embodiment of the invention, the frame rate up conversion system comprises a motion estimation (ME) unit and a triple-line-buffer-based motion compensation (MC) unit. The motion estimation unit generates at least one motion vector (MV) from consecutive input frames. The motion compensation unit generates an interpolated frame from the motion vector, a reference frame, and the current frame, thereby producing a frame output whose frame rate is higher than that of the frame input.
21‧‧‧motion estimation unit
22‧‧‧motion compensation unit
221‧‧‧temporal interpolation unit
222‧‧‧spatial interpolation unit
223‧‧‧smoothing unit
2221‧‧‧memory
2222‧‧‧triple-line buffer
2223‧‧‧spatial interpolation processor
31-32‧‧‧steps
321-323‧‧‧steps
3222-3223‧‧‧steps
p1, p2, p3, p4, pc, b1, bc‧‧‧pixels
FIG. 1 shows an example of generating an interpolated frame from the previous frame and the current frame.
FIG. 2A shows a block diagram of a frame rate up conversion system according to an embodiment of the present invention.
FIG. 2B shows a flow chart of a frame rate up conversion method according to an embodiment of the present invention.
FIG. 3A shows a detailed block diagram of the motion compensation (MC) unit of FIG. 2A.
FIG. 3B shows a detailed flow chart of the interpolated-frame generation step of FIG. 2B.
FIG. 4A shows a detailed block diagram of the spatial interpolation unit of FIG. 3A.
FIG. 4B shows a detailed flow chart of the spatial interpolation (broken-region repair) step of FIG. 3B.
FIGS. 5A and 5B illustrate storing the last line of the previous block, the current line, and the first line of the next block in the triple-line buffer.
FIG. 6 shows an example of spatial interpolation performed by the spatial interpolation processor.
FIG. 7 shows an example of smoothing.
FIG. 2A shows a block diagram of a frame rate up conversion (FRUC) system according to an embodiment of the present invention, and FIG. 2B shows a flow chart of the corresponding method. The FRUC system mainly comprises a motion estimation (ME) unit 21 and a motion compensation (MC) unit 22. In step 31, the motion estimation unit 21 receives consecutive input frames at an original frame rate (for example, 60 Hz) and generates motion vectors (MVs) or a motion vector map (MV map). In step 32, the motion compensation unit 22 (specifically, a triple-line-buffer-based motion compensation unit) generates interpolated frames from the motion vector/MV map, a reference frame (for example, the previous or the next frame), and the current frame, thereby producing consecutive output frames at an increased frame rate (for example, 120 Hz). This embodiment employs block-based motion compensation.
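The two-step flow of steps 31 and 32 can be sketched as follows. This is a minimal runnable illustration, not the patent's implementation: the ME and MC stages are toy stand-ins (zero motion vectors and simple frame averaging), and frames are modeled as flat lists of pixel values.

```python
def motion_estimate(prev, curr):
    # Toy ME stand-in: pretend every block has zero motion.
    return [0] * len(curr)

def motion_compensate(mv_map, prev, curr):
    # Toy MC stand-in: with zero motion, the temporal midpoint of two
    # frames is simply their pixel-wise average.
    return [(a + b) / 2 for a, b in zip(prev, curr)]

def frame_rate_up_convert(frames):
    """Interleave original frames with interpolated ones (about 2x rate)."""
    output = []
    for prev, curr in zip(frames, frames[1:]):
        output.append(prev)                                   # original frame
        mv_map = motion_estimate(prev, curr)                  # step 31 (ME unit 21)
        output.append(motion_compensate(mv_map, prev, curr))  # step 32 (MC unit 22)
    output.append(frames[-1])                                 # trailing original frame
    return output

frames_60hz = [[10, 10], [20, 20], [30, 30]]
frames_120hz = frame_rate_up_convert(frames_60hz)
# 3 input frames become 5 output frames: original, interpolated, original, ...
```

In a real system the interpolated frame would be displaced by the estimated motion rather than averaged in place; the interleaving structure is the point of the sketch.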
FIG. 3A shows a detailed block diagram of the motion compensation unit 22 of FIG. 2A, and FIG. 3B shows a detailed flow chart of the interpolated-frame generation step (step 32) of FIG. 2B. In this embodiment, the motion compensation unit 22 comprises a temporal interpolation unit 221, a spatial interpolation unit 222, and a smoothing unit 223. In step 321, the temporal interpolation unit 221 generates a temporally interpolated frame from the motion vector/MV map, the reference frame, and the current frame (the reference frame and the current frame may be obtained from the motion estimation unit 21 or from a frame-store memory). Because temporally interpolated frames produced by block-based motion compensation often contain broken (or cracked) regions, in step 322 the spatial interpolation unit 222 performs spatial interpolation on the temporally interpolated frame to repair the broken regions; details of the broken-region repair are given later in this specification. Further, because block-based motion compensation introduces boundary effects between blocks, in step 323 the smoothing unit 223 performs smoothing along the inter-block boundaries of the spatially interpolated frame to reduce these effects; details of the block-boundary smoothing are likewise given later.
FIG. 4A shows a detailed block diagram of the spatial interpolation unit 222 of FIG. 3A, and FIG. 4B shows a detailed flow chart of the spatial interpolation (broken-region repair) step (step 322) of FIG. 3B. In this embodiment, the spatial interpolation unit 222 comprises a memory 2221, a triple-line buffer 2222, and a spatial interpolation processor 2223. The memory 2221 supplies lines of pixel blocks. The triple-line buffer 2222 consists of three line buffers that respectively store the current line to be processed, the last line of the previous block, and the first line of the next block (step 3222). The spatial interpolation processor 2223 then performs spatial interpolation on the current line using the stored last line of the previous (upper adjacent) block and the stored first line of the next (lower adjacent) block (step 3223). Because this embodiment uses only three line buffers for spatial interpolation (and smoothing), it greatly reduces hardware resources and speeds up interpolation (and smoothing) compared with conventional systems and methods.
FIG. 5A shows an example in which the last line of the previous block N-1 is stored in buffer 1, the current line of the current block N is stored in buffer 2, and the first line of the next block N+1 is stored in buffer 3, where blocks N-1, N, and N+1 are vertically consecutive blocks of the image. While block N is being processed, each new line overwrites the contents of buffer 2. As shown in the further example of FIG. 5B, after the last line of block N has been processed, the first line of block N+1 becomes the current line. Since this line is already stored in buffer 3, it need not be fetched from the memory 2221 again. The last processed line of block N, retained in buffer 2, becomes the last line of the (now) previous block N. Meanwhile, the first line of block N+2 is fetched from the memory 2221 and stored in buffer 1. For block N+1, each new line now overwrites buffer 3 (rather than buffer 2 as in FIG. 5A). The scheme of FIGS. 5A and 5B thus repeats for all blocks.
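The buffer-role rotation of FIGS. 5A and 5B can be sketched as follows. This is a hypothetical software model: `line_schedule` and the fetch counter are illustrative names not taken from the patent, and the counter exposes the scheme's key property, namely that every line is read from the memory 2221 exactly once.

```python
def line_schedule(lines, block_h):
    """Yield (prev_block_last, current_line, next_block_first) for each
    line of a frame, using only three rotating line-buffer roles."""
    out, fetches = [], 0
    prev_line = None                       # role: last line of previous block
    cur_line, cur_idx = None, -1           # role: current line (+ resident index)
    nxt_line = None                        # role: first line of next block
    if len(lines) > block_h:
        nxt_line, fetches = lines[block_h], fetches + 1
    for i in range(len(lines)):
        if cur_idx != i:                   # fetch only if not already resident
            cur_line, cur_idx = lines[i], i
            fetches += 1
        out.append((prev_line, cur_line, nxt_line))
        if i % block_h == block_h - 1:     # end of block: rotate buffer roles
            prev_line = cur_line           # block's last line takes the "prev" role
            cur_line, cur_idx = nxt_line, i + 1  # next-first is already resident
            j = i + 1 + block_h            # first line of the block after next
            nxt_line = None
            if j < len(lines):
                nxt_line, fetches = lines[j], fetches + 1
    return out, fetches

triples, fetches = line_schedule(list(range(6)), block_h=2)
# each of the 6 lines is read from "memory" exactly once
```

At each block boundary no re-fetch of the next block's first line occurs, which is exactly the saving the triple-line buffer provides over a naive per-line fetch.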
FIG. 6 shows an example of spatial interpolation performed by the spatial interpolation processor 2223 (step 3223). In one embodiment, a pixel pc of the current line is spatially interpolated from a pixel p1 of the last line of the previous block and a pixel p2 of the first line of the next block. For example, the value of pc may be computed as pc = (p1*n1 + p2*n2) / (n1 + n2), where n1 and n2 are the weights of pixels p1 and p2, respectively.
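The two-pixel formula above translates directly to code. The concrete weight values used in the example calls are illustrative assumptions; the patent leaves n1 and n2 unspecified.

```python
def repair_pixel(p1, p2, n1=1, n2=1):
    """Interpolate a broken-region pixel pc from the previous block's
    last line (p1) and the next block's first line (p2):
    pc = (p1*n1 + p2*n2) / (n1 + n2)."""
    return (p1 * n1 + p2 * n2) / (n1 + n2)

mid = repair_pixel(100, 140)            # equal weights: plain average, 120.0
near = repair_pixel(100, 140, 3, 1)     # weighted toward p1: 110.0
```

In practice the weights would typically reflect the vertical distance from pc to each contributing line.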
In another embodiment, the pixel pc of the current line is spatially interpolated from four pixels: a pixel p1 of the last line of the previous block, a pixel p2 of the first line of the next block, a pixel p3 of the left adjacent block, and a pixel p4 of the right adjacent block.
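A sketch of the four-pixel variant follows. The patent names the four source pixels but does not state the combining formula for this embodiment; extending the two-pixel weighted average to four terms, as done here, is an assumption.

```python
def repair_pixel4(p1, p2, p3, p4, n=(1, 1, 1, 1)):
    """Four-neighbor variant: vertical neighbors p1 (above) and p2
    (below) plus horizontal neighbors p3 (left block) and p4 (right
    block), combined as a weighted average."""
    pts = (p1, p2, p3, p4)
    return sum(p * w for p, w in zip(pts, n)) / sum(n)

center = repair_pixel4(100, 140, 120, 160)  # equal weights: 130.0
```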
Next, the smoothing unit 223 performs smoothing on the spatially interpolated frame (step 323). In this embodiment, low-pass filtering (LPF) is applied to smooth the block boundaries and thereby reduce boundary effects. FIG. 7 shows an example of smoothing, in which a pixel bc of the current line is smoothed using both bc itself and a pixel b1 of the last line of the previous block. For example, the smoothed value bc' may be computed as bc' = (b1*n1 + bc*n2) / (n1 + n2), where n1 and n2 are the weights of pixels b1 and bc, respectively.
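The boundary low-pass filter can likewise be sketched per line. The default weights below (n1=1, n2=3, favoring the current pixel) are an assumption; the patent does not fix their values.

```python
def smooth_boundary(prev_last_line, cur_line, n1=1, n2=3):
    """Low-pass filter the current line against the previous block's
    last line to soften the block boundary:
    bc' = (b1*n1 + bc*n2) / (n1 + n2), applied pixel-wise."""
    return [(b1 * n1 + bc * n2) / (n1 + n2)
            for b1, bc in zip(prev_last_line, cur_line)]

smoothed = smooth_boundary([100, 100], [180, 60])  # [160.0, 70.0]
```

Because the filter needs only the current line and the previous block's last line, it fits the same triple-line buffer used for spatial interpolation.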
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the scope of the claims; all equivalent changes or modifications that do not depart from the spirit of the invention shall fall within the scope of the appended claims.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW099114570A TWI428024B (en) | 2010-05-06 | 2010-05-06 | Frame rate up conversion system and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TW201141237A TW201141237A (en) | 2011-11-16 |
| TWI428024B true TWI428024B (en) | 2014-02-21 |
Family
ID=46760491
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI428024B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040252230A1 (en) * | 2003-06-13 | 2004-12-16 | Microsoft Corporation | Increasing motion smoothness using frame interpolation with motion analysis |
| TW200803517A (en) * | 2005-09-27 | 2008-01-01 | Qualcomm Inc | Redundant data encoding methods and device |
| TW200926823A (en) * | 2007-12-06 | 2009-06-16 | Mstar Semiconductor Inc | Image processing method and related apparatus for performing image processing operation only according to image blocks in horizontal direction |
- 2010-05-06: application TW099114570A filed in Taiwan; granted as TWI428024B (status: not active, IP right cessation)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| MM4A | Annulment or lapse of patent due to non-payment of fees |