
CN108174056A - A low-light video noise reduction method based on joint spatio-temporal domain - Google Patents

A low-light video noise reduction method based on joint spatio-temporal domain

Info

Publication number
CN108174056A
CN108174056A
Authority
CN
China
Prior art keywords: window, pixel, low, mid, noise reduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611115487.3A
Other languages
Chinese (zh)
Inventor
富容国
冯澍
罗浩
杨柳
沈天宇
吕进
王焜
韦方
韦一方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201611115487.3A priority Critical patent/CN108174056A/en
Publication of CN108174056A publication Critical patent/CN108174056A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Picture Signal Circuits (AREA)

Abstract

The invention discloses a low-light video noise reduction method that combines the temporal and spatial domains. The method includes: applying boxfilter processing to the original low-light video; performing motion detection on adjacent frames of the processed video sequence and determining the corresponding filter coefficients; denoising the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm; and enhancing the image with an improved bilateral filtering method to obtain the denoised low-light video. The invention strengthens the noise reduction effect as much as possible while avoiding smearing; the joint spatio-temporal filtering overcomes the poor contrast and low signal-to-noise ratio of low-light video caused by environmental and device factors, improving the visual experience for the human eye.

Description

A low-light video noise reduction method based on the joint spatio-temporal domain

Technical Field

The invention belongs to the technical field of video processing, and in particular relates to a low-light video noise reduction method that combines the temporal and spatial domains.

Background Art

Under low-light conditions, limited illumination and detector sensitivity give the video images acquired by the system a low signal-to-noise ratio, which hinders observation by the human eye and may even prevent an image of the target scene from being obtained at all. The noise in low-light video consists mainly of Gaussian-distributed white noise produced by the CCD and quantum noise produced by the image intensifier.

Existing low-light video noise reduction algorithms fall into two main categories: spatial filtering and temporal filtering. Temporal filtering exploits the correlation between video frames and denoises the video on the basis of motion detection or motion compensation. Spatial filtering exploits the correlation between neighboring pixels of a two-dimensional image, as in mean filtering and Wiener filtering.

Temporal filtering alone, such as the temporal recursive filtering algorithm, has little effect on scenes with moving targets and is prone to target-matching errors, producing ghosting. Spatial filtering alone, such as median or mean filtering, easily causes flickering between adjacent frames after filtering, because the noise at a given position varies randomly from frame to frame.

Summary of the Invention

The object of the present invention is to provide a low-light video noise reduction method that combines the temporal and spatial domains.

The technical solution that achieves the object of the present invention is as follows.

A low-light video noise reduction method combining the temporal and spatial domains comprises the following steps:

Step 1: apply boxfilter processing to the original low-light video.

Step 2: perform motion detection on adjacent frames of the processed video sequence, determine the corresponding filter coefficients, and denoise the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm.

Step 3: enhance the image with an improved bilateral filtering method to obtain the denoised low-light video.

Compared with the prior art, the notable advantages of the present invention are:

(1) When computing each ΣA(i, j) with the Boxfilter, only two operations are needed, which simplifies the pixel-by-pixel summation. After Boxfilter preprocessing, whenever the filtering algorithm needs the sum of the pixels in a window, it can read the corresponding position of the array B directly, reducing the summation from O(9) per window to O(1) and speeding up the algorithm.

(2) On top of the temporal filtering, the invention applies an improved bilateral filtering method to filter the video spatially, which overcomes the poor contrast and low signal-to-noise ratio of low-light video caused by environmental and device factors and improves the visual experience for the human eye.

Brief Description of the Drawings

Fig. 1 is a flow chart of the low-light video noise reduction algorithm of the present invention, which combines three-dimensional-coefficient temporal recursive filtering with improved bilateral filtering.

Fig. 2 is a schematic diagram of the boxfilter algorithm.

Figs. 3(a) and 3(b) are a frame of the original video of a stationary building and the corresponding filtered frame, respectively.

Figs. 4(a) and 4(b) are a frame of the original video of a swaying-camera shot under 0.11 lux illumination and the corresponding filtered frame, respectively.

Figs. 5(a) and 5(b) are a frame of the original video of a stationary portrait under 0.02 lux illumination and the corresponding filtered frame, respectively.

Figs. 6(a) and 6(b) are a frame of the original video of a moving portrait under 0.02 lux illumination and the corresponding filtered frame, respectively.

Detailed Description

With reference to Fig. 1, the low-light video noise reduction method of the present invention, which combines the temporal and spatial domains, comprises the following steps.

Step 1: apply boxfilter processing to the original low-light video.

Given the size of the sliding window, the pixel values within each window are summed rapidly. That is, an array B of the same size as the original image A is created, such that the value of each element of B is the sum of the pixels in the neighborhood of the corresponding pixel of A:

B(i, j) = ΣA(i, j), (i, j) ∈ M (1)

where M is the set of all pixels in the window centered on (i, j).

Taking the window size to be 3×3, this step specifically comprises:

Step 1-1: the filter window size is 3×3, and the length of the intermediate array mid equals the number of pixel columns of the original image.

Step 1-2: the filter window starts at the upper-left corner and slides pixel by pixel, from left to right and top to bottom. At each new position, let the center pixel of the window be (i, j). The entries of rows i−1, i and i+1 are summed column by column and the results are stored in the intermediate array mid; summing mid[i, j−1], mid[i, j] and mid[i, j+1] then gives ΣA(i, j), where A(i, j) is the pixel value at coordinates (i, j) of the original image, and the result is stored at position (i, j) of the array B.

Step 1-3: when the window shifts right by one pixel, its center pixel becomes (i, j+1); subtracting mid[i, j−1] from the previous sum and adding mid[i, j+2] gives the sum for the new window:

ΣA(i, j+1) = ΣA(i, j) − mid[i, j−1] + mid[i, j+2] (2)

Step 1-4: when the window reaches the end of a row and jumps to the next row, mid is updated: A(i+2, j) is added to and A(i−1, j) is subtracted from each mid[i+1, j], after which the computation of the new row begins.

This process is illustrated in Fig. 2, where each dot represents a pixel of the original image, the square frame represents the selected 3×3 filter window, and the row of small squares represents the intermediate array, whose length equals the number of pixel columns of the original image.
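The steps above can be sketched in Python as follows. This is a minimal illustration, not the patented implementation: the function name box_sum, the zero treatment of out-of-image pixels, and the per-row recomputation of mid (rather than the incremental update of step 1-4) are assumptions of this sketch.

```python
def box_sum(A):
    """3x3 box sums of image A via an incremental column-sum array (boxfilter).

    B[i][j] holds the sum of the 3x3 window centered on (i, j); pixels
    outside the image are treated as zero (an assumption of this sketch).
    """
    h, w = len(A), len(A[0])
    B = [[0] * w for _ in range(h)]
    mid = [0] * w                      # mid[j]: sum of rows i-1, i, i+1 in column j
    for i in range(h):
        # Column sums for the current center row i (recomputed per row for brevity).
        for j in range(w):
            mid[j] = sum(A[r][j] for r in (i - 1, i, i + 1) if 0 <= r < h)
        # First window of the row: direct sum of the column sums it covers.
        s = mid[0] + (mid[1] if w > 1 else 0)
        B[i][0] = s
        for j in range(1, w):
            # Slide right: drop the column leaving the window, add the one entering.
            s = s - (mid[j - 2] if j - 2 >= 0 else 0) + (mid[j + 1] if j + 1 < w else 0)
            B[i][j] = s
    return B
```

Each horizontal step costs one subtraction and one addition regardless of window width, which is the O(1)-per-window behavior claimed in advantage (1).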

Step 2: perform motion detection on adjacent frames of the processed video sequence, determine the corresponding filter coefficients, and denoise the original low-light video with the three-dimensional-coefficient temporal recursive filtering algorithm. The specific procedure is:

Step 2-1: define d as the difference between the pixel sums of corresponding windows of consecutive frames; while denoising recursively, compute d = |W_N − W_{N−1}|.

The filter coefficient K is set as a piecewise linear function of d, decreasing from K1 to K2 between the thresholds d1 and d2:

K = K1, if d ≤ d1
K = K1 − (K1 − K2)(d − d1)/(d2 − d1), if d1 < d ≤ d2 (3)
K = K2, if d > d2

where d1 and d2 are the first and second difference thresholds, respectively, K1 and K2 are the first and second filter coefficient thresholds, respectively, and K1 > K2.
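As a sketch, the coefficient rule can be written as a small function. The threshold and coefficient values used in the example below are placeholders for illustration, not values disclosed in the patent.

```python
def filter_coefficient(d, d1, d2, k1, k2):
    """Piecewise linear filter coefficient K(d): a large K (strong temporal
    averaging) for small inter-frame differences, a small K for large ones.
    Requires d1 < d2 and k1 > k2."""
    if d <= d1:
        return k1
    if d >= d2:
        return k2
    # Linear interpolation between (d1, k1) and (d2, k2).
    return k1 - (k1 - k2) * (d - d1) / (d2 - d1)
```

For example, filter_coefficient(30, 10, 50, 0.8, 0.2) returns a value of about 0.5, halfway between K1 and K2.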

Step 2-2: the improved temporal recursive filtering formula is:

W_N(i, j) = X_N(i, j) + K_N(i, j)·(W_{N−1}(i, j) − X_N(i, j)) (4)

where W_N(i, j) is the window centered on point (i, j) in the filtered output image of the current frame; W_{N−1}(i, j) is the window at the corresponding position of the previous filtered output frame; X_N(i, j) is the window at the corresponding position of the current input image; and K_N(i, j) is the filter coefficient of the window centered on (i, j), with K ∈ (0, 1).
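A per-pixel sketch of the recursive update of equation (4), with the coefficient chosen by the piecewise rule of step 2-1, might look as follows. Two simplifications are assumptions of this illustration: the motion measure d is taken per pixel rather than per window sum, and the threshold values in the test case are placeholders.

```python
def temporal_recursive_filter(prev_out, curr_in, d1, d2, k1, k2):
    """One step of the motion-adaptive temporal recursive update of eq. (4):
    W_N = X_N + K * (W_{N-1} - X_N), with K chosen per pixel by the
    piecewise linear rule of step 2-1."""
    h, w = len(curr_in), len(curr_in[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            d = abs(prev_out[i][j] - curr_in[i][j])   # inter-frame difference
            if d <= d1:
                k = k1                                # static: strong temporal averaging
            elif d >= d2:
                k = k2                                # moving: trust the current frame
            else:
                k = k1 - (k1 - k2) * (d - d1) / (d2 - d1)
            out[i][j] = curr_in[i][j] + k * (prev_out[i][j] - curr_in[i][j])
    return out
```

A static pixel (d ≤ d1) is averaged heavily toward the previous output, while a fast-changing pixel (d ≥ d2) stays close to the current input, which is what suppresses ghosting on moving targets.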

Step 3: enhance the image with the improved bilateral filtering method to obtain the denoised low-light video. Based on the principle of similarity of the pixel gray levels within a window, a compensation function is added to the gray-level similarity factor. The specific procedure is:

Step 3-1: compute the spatial proximity factor,

ω_s(p, q) = exp(−((p − x)² + (q − y)²) / (2σ_s²)) (5)

where ω_s(p, q) is the spatial proximity factor, σ_s is a filter parameter, (x, y) is the coordinate of the center pixel of the filter window, and (p, q) is the coordinate of any other pixel in the window.

Step 3-2: compute the improved gray-level similarity factor.

Based on the similarity of the pixel gray levels within the window, a compensation function is added to the gray-level similarity factor,

ω_r(p, q) = exp(−(I(p, q) + τ(x, y) − I(x, y))² / (2σ_r²)) (6)

where ω_r(p, q) is the gray-level similarity factor, σ_r is a filter parameter, (x, y) is the coordinate of the center pixel of the filter window, (p, q) is the coordinate of any other pixel in the window, and τ(x, y) is the compensation function, set according to the following rules:

1) Judge the similarity between the gray value of each pixel in the filter window and that of the center pixel: if the absolute difference between the pixel and the center pixel is less than σ_r/3, I(p, q) is judged similar to I(x, y) and keeps its original value; otherwise I(p, q) is set to 0.

2) Set the compensation function according to the number of similar points in the window: if the number of window pixels set to 0 is less than 1/3 of the number of window pixels, set τ(x, y) = 0; otherwise set it according to rule 3).

3) Introduce the variables min, max and mean, denoting the minimum, maximum and average pixel values within the filter window, and let a = I(p, q) − mean: if a > 0, τ(x, y) = max − I(p, q); if a < 0, τ(x, y) = min − I(p, q); if a = 0, τ(x, y) = 0.

Step 3-3: compute the filtered image of the improved bilateral filter,

Î(x, y) = Σ_{(p,q)∈M_{x,y}} ω_s(p, q) ω_r(p, q) I(p, q) / Σ_{(p,q)∈M_{x,y}} ω_s(p, q) ω_r(p, q) (7)

where Î(x, y) is the filtered image; M_{x,y} is the set of pixels in the spatial neighborhood centered on (x, y) with radius r; I(p, q) is the value of the pixel at coordinates (p, q) in M_{x,y}; and ω_s(p, q) and ω_r(p, q) are the spatial proximity factor and the gray-level similarity factor, respectively.
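Putting steps 3-1 to 3-3 together, an improved bilateral filter over a (2r+1)×(2r+1) window can be sketched as below. This is an illustrative reading of the method, assuming Gaussian forms for ω_s and ω_r; border pixels are left unfiltered here for brevity, which is an assumption of the sketch rather than part of the patent.

```python
import math

def improved_bilateral(I, r, sigma_s, sigma_r):
    """Improved bilateral filter (steps 3-1 to 3-3): a Gaussian spatial
    factor, and a gray-level similarity factor whose argument is offset by
    the compensation function tau of rules 1)-3)."""
    h, w = len(I), len(I[0])
    out = [row[:] for row in I]                 # borders kept unfiltered
    for x in range(r, h - r):
        for y in range(r, w - r):
            window = [I[p][q] for p in range(x - r, x + r + 1)
                              for q in range(y - r, y + r + 1)]
            # Rule 1): count the pixels dissimilar to the center.
            zeros = sum(1 for v in window if abs(v - I[x][y]) >= sigma_r / 3)
            use_tau = zeros >= len(window) / 3  # rule 2)
            lo, hi, mean = min(window), max(window), sum(window) / len(window)
            num = den = 0.0
            for p in range(x - r, x + r + 1):
                for q in range(y - r, y + r + 1):
                    ws = math.exp(-((p - x) ** 2 + (q - y) ** 2)
                                  / (2 * sigma_s ** 2))
                    v = I[p][q]
                    tau = 0.0
                    if use_tau:                 # rule 3)
                        if v > mean:
                            tau = hi - v
                        elif v < mean:
                            tau = lo - v
                    wr = math.exp(-((v + tau - I[x][y]) ** 2)
                                  / (2 * sigma_r ** 2))
                    num += ws * wr * v
                    den += ws * wr
            out[x][y] = num / den
    return out
```

On a flat region the compensation stays inactive and the filter reduces to a plain bilateral filter, so constant areas pass through unchanged.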

The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.

Embodiment

Low-light videos were shot in several experimental scenes, and the video images captured under the various conditions were verified experimentally.

Figs. 3(a) and 3(b) show a frame of the original video of a stationary building and the corresponding filtered frame; Figs. 4(a) and 4(b) show a frame of the original video of a swaying-camera shot under 0.11 lux illumination and the corresponding filtered frame; Figs. 5(a) and 5(b) show a frame of the original video of a stationary portrait under 0.02 lux illumination and the corresponding filtered frame; and Figs. 6(a) and 6(b) show a frame of the original video of a moving portrait under 0.02 lux illumination and the corresponding filtered frame.

As Figs. 3 and 5 show, for stationary targets the joint spatio-temporal algorithm proposed by the invention reduces noise well and enhances edge details and other image information.

As Figs. 4 and 6 show, for moving objects the proposed algorithm also completes the noise reduction well, without smearing.

In summary, the joint spatio-temporal low-light video noise reduction algorithm proposed by the invention overcomes the drawback of purely temporal filtering, which blurs the edge features of the video while removing noise. Compared with purely temporal or purely spatial algorithms it achieves better noise reduction, provides good video quality, suppresses video image noise effectively, and better preserves details such as edges and textures.

Claims (5)

1. A low-light video noise reduction method based on a joint spatio-temporal domain, characterized by comprising the following steps:
step 1, performing boxfilter processing on an original low-light video;
step 2, performing motion detection on adjacent frames of the processed video sequence, determining corresponding filter coefficients, and performing noise reduction on the original low-light video through a three-dimensional-coefficient temporal recursive filtering algorithm; and
step 3, performing image enhancement through an improved bilateral filtering method to obtain the denoised low-light video.
2. The low-light video noise reduction method based on a joint spatio-temporal domain according to claim 1, wherein step 1 specifically comprises:
given the size of the sliding window, rapidly summing the pixel values within each window, namely creating an array B of the same size as the original image A, wherein the value of each element of B is the sum of the pixels in the pixel neighborhood of the corresponding position of the original image A:
B(i, j) = ΣA(i, j), (i, j) ∈ M (1)
where M is the set of all pixels in the window centered on (i, j).
3. The low-light video noise reduction method based on a joint spatio-temporal domain according to claim 1 or 2, wherein step 1 specifically comprises:
step 1-1, setting the size of the filter window to 3×3, wherein the length of an intermediate array mid equals the number of pixel columns of the original image;
step 1-2, starting the filter window from the upper-left corner and sliding it pixel by pixel from left to right and top to bottom; at each new position, setting the center pixel of the window as (i, j), summing the entries of rows i−1, i and i+1 column by column and storing the results in the intermediate array mid, then summing mid[i, j−1], mid[i, j] and mid[i, j+1] to obtain ΣA(i, j), wherein A(i, j) is the pixel value at coordinates (i, j) of the original image, and storing the result at position (i, j) of the array B;
step 1-3, when the window shifts right by one pixel so that its center pixel becomes (i, j+1), subtracting mid[i, j−1] from the previous sum and adding mid[i, j+2] to obtain the sum for the new window:
ΣA(i, j+1) = ΣA(i, j) − mid[i, j−1] + mid[i, j+2] (2)
and step 1-4, when the window moves to the end of a row and jumps to the next row, updating mid by adding A(i+2, j) to and subtracting A(i−1, j) from each mid[i+1, j], and then starting the computation of the new row.
4. The low-light video noise reduction method based on a joint spatio-temporal domain according to claim 1, wherein in step 2, motion detection is performed on the adjacent frames of the video sequence after the boxfilter processing and the corresponding filter coefficients are determined on that basis, the specific procedure being:
step 2-1, setting d as the difference between the pixel sums of corresponding windows of consecutive frames and computing d = |W_N − W_{N−1}| while denoising recursively;
the filter coefficient K being set as a piecewise linear function of d:
K = K1 if d ≤ d1; K = K1 − (K1 − K2)(d − d1)/(d2 − d1) if d1 < d ≤ d2; K = K2 if d > d2 (3)
wherein d1 and d2 are a first difference threshold and a second difference threshold, respectively, and K1 and K2 are a first filter coefficient threshold and a second filter coefficient threshold, respectively, with K1 > K2;
step 2-2, the improved temporal recursive filtering formula being:
W_N(i, j) = X_N(i, j) + K_N(i, j)·(W_{N−1}(i, j) − X_N(i, j)) (4)
wherein W_N(i, j) is the window centered on point (i, j) in the filtered output image of the current frame, W_{N−1}(i, j) is the window at the corresponding position of the previous filtered output frame, X_N(i, j) is the window at the corresponding position of the current input image, and K_N(i, j) is the filter coefficient of the window centered on (i, j), with K ∈ (0, 1).
5. The low-light video noise reduction method based on a joint spatio-temporal domain according to claim 1, wherein in step 3 a compensation function is added to the gray-level similarity factor based on the principle of similarity of pixel gray levels within a window, the specific procedure being:
step 3-1, computing the spatial proximity factor,
ω_s(p, q) = exp(−((p − x)² + (q − y)²) / (2σ_s²)) (5)
wherein ω_s(p, q) is the spatial proximity factor, σ_s is a filter parameter, (x, y) is the coordinate of the center pixel of the filter window, and (p, q) is the coordinate of any other pixel in the window;
step 3-2, computing the improved gray-level similarity factor:
based on the similarity of the pixel gray levels within the window, a compensation function is added to the gray-level similarity factor,
ω_r(p, q) = exp(−(I(p, q) + τ(x, y) − I(x, y))² / (2σ_r²)) (6)
wherein ω_r(p, q) is the gray-level similarity factor, σ_r is a filter parameter, (x, y) is the coordinate of the center pixel of the filter window, (p, q) is the coordinate of any other pixel in the window, and τ(x, y) is the compensation function, set as follows:
1) judging the similarity between the gray value of each pixel in the filter window and that of the center pixel: if the absolute difference between the pixel and the center pixel is less than σ_r/3, I(p, q) is judged similar to I(x, y) and keeps its original value, otherwise I(p, q) is set to 0;
2) setting the compensation function according to the number of similar points in the window: if the number of window pixels set to 0 is less than 1/3 of the number of window pixels, setting τ(x, y) = 0, otherwise setting it according to rule 3);
3) introducing variables min, max and mean, denoting the minimum, maximum and average pixel values within the filter window, and letting a = I(p, q) − mean: if a > 0, τ(x, y) = max − I(p, q); if a < 0, τ(x, y) = min − I(p, q); if a = 0, τ(x, y) = 0;
and step 3-3, computing the filtered image of the improved bilateral filter:
Î(x, y) = Σ_{(p,q)∈M_{x,y}} ω_s(p, q) ω_r(p, q) I(p, q) / Σ_{(p,q)∈M_{x,y}} ω_s(p, q) ω_r(p, q) (7)
wherein Î(x, y) is the filtered image, M_{x,y} is the set of spatial-neighborhood pixels centered on (x, y) with radius r, I(p, q) is the value of the pixel at coordinates (p, q) in M_{x,y}, and ω_s(p, q) and ω_r(p, q) are the spatial proximity factor and the gray-level similarity factor, respectively.
CN201611115487.3A 2016-12-07 2016-12-07 A low-light video noise reduction method based on joint spatio-temporal domain Pending CN108174056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611115487.3A CN108174056A (en) 2016-12-07 2016-12-07 A low-light video noise reduction method based on joint spatio-temporal domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611115487.3A CN108174056A (en) 2016-12-07 2016-12-07 A low-light video noise reduction method based on joint spatio-temporal domain

Publications (1)

Publication Number Publication Date
CN108174056A true CN108174056A (en) 2018-06-15

Family

ID=62526183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611115487.3A Pending CN108174056A (en) 2016-12-07 2016-12-07 A low-light video noise reduction method based on joint spatio-temporal domain

Country Status (1)

Country Link
CN (1) CN108174056A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014240A (en) * 2010-12-01 2011-04-13 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
CN102769722A (en) * 2012-07-20 2012-11-07 上海富瀚微电子有限公司 Time-space domain hybrid video noise reduction device and method
CN103024248A (en) * 2013-01-05 2013-04-03 上海富瀚微电子有限公司 Motion-adaptive video image denoising method and device
CN103533214A (en) * 2013-10-01 2014-01-22 中国人民解放军国防科学技术大学 Video real-time denoising method based on kalman filtering and bilateral filtering
WO2015172235A1 (en) * 2014-05-15 2015-11-19 Tandemlaunch Technologies Inc. Time-space methods and systems for the reduction of video noise

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINGYUN-LIU: "integral image and boxfilter", CSDN forum *
Zhang Hairong, Tan Jieqing: "An improved bilateral filtering algorithm", Journal of Hefei University of Technology *
Han Yiyong et al.: "Research on the influence of temporal recursive filtering on the viewing distance of low-light imaging", Journal of Applied Optics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944176A (en) * 2019-12-05 2020-03-31 浙江大华技术股份有限公司 Image frame noise reduction method and computer storage medium
CN110944176B (en) * 2019-12-05 2022-03-22 浙江大华技术股份有限公司 Image frame noise reduction method and computer storage medium
US12094085B2 (en) 2019-12-12 2024-09-17 Tencent Technology (Shenzhen) Company Limited Video denoising method and apparatus, terminal, and storage medium
CN113012061A (en) * 2021-02-20 2021-06-22 百果园技术(新加坡)有限公司 Noise reduction processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US9041834B2 (en) Systems and methods for reducing noise in video streams
KR20210139450A (en) Image display method and device
CN108734670B (en) Restoration method of a single nighttime weak illumination haze image
US9135683B2 (en) System and method for temporal video image enhancement
Lv et al. Real-time dehazing for image and video
CN102768760B (en) Quick image dehazing method on basis of image textures
WO2016206087A1 (en) Low-illumination image processing method and device
CN109754377A (en) A Multi-Exposure Image Fusion Method
CN108765342A (en) A kind of underwater image restoration method based on improvement dark
CN107979712B (en) Video noise reduction method and device
CN102637293A (en) Moving image processing device and moving image processing method
CN115131229A (en) Image noise reduction and filtering data processing method and device and computer equipment
CN109377450A (en) An edge-preserving denoising method
CN105550999A (en) Video image enhancement processing method based on background reuse
CN113724155A (en) Self-boosting learning method, device and equipment for self-supervision monocular depth estimation
RU2419880C2 (en) Method and apparatus for calculating and filtering disparity map based on stereo images
CN111539895B (en) Video denoising method and device, mobile terminal and storage medium
Yu et al. Image and video dehazing using view-based cluster segmentation
Wang et al. Single-image dehazing using color attenuation prior based on haze-lines
CN107818547A (en) The minimizing technology of the spiced salt and Gaussian mixed noise in a kind of sequence towards twilight image
CN105338220B (en) A method of adaptively to the electron multiplication CCD video image denoisings of movement
Hegde et al. Adaptive cubic spline interpolation in cielab color space for underwater image enhancement
CN108174056A (en) A low-light video noise reduction method based on joint spatio-temporal domain
Toka et al. A fast method of fog and haze removal
Hu et al. A low illumination video enhancement algorithm based on the atmospheric physical model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180615