
WO2018133379A1 - A low-illumination image enhancement method - Google Patents

A low-illumination image enhancement method

Info

Publication number
WO2018133379A1
WO2018133379A1 (application PCT/CN2017/095931)
Authority
WO
WIPO (PCT)
Prior art keywords
exposure
image
equation
model
camera response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/095931
Other languages
English (en)
French (fr)
Inventor
李革
应振强
任俞睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Shenzhen Graduate School
Original Assignee
Peking University Shenzhen Graduate School
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School filed Critical Peking University Shenzhen Graduate School
Priority to US16/478,570 priority Critical patent/US10614561B2/en
Publication of WO2018133379A1 publication Critical patent/WO2018133379A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current



Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a method for performing low illumination image enhancement using camera response characteristics.
  • a low-illumination image refers to a partially or globally dark image, such as an image taken under conditions of lower illumination.
  • Low illumination images have poor visibility, which seriously affects people's observations and the performance of some computer vision algorithms.
  • Computer vision algorithms usually require high visibility of input images.
  • Most algorithms cannot directly process low-illumination images. Therefore, some low-light images often need to be enhanced before performing corresponding operations.
  • to address this problem, many enhancement algorithms for low illumination images have been proposed.
  • a low illumination enhancement algorithm improves the visibility of the result by changing the pixel brightness of the input image.
  • the existing low illumination enhancement methods are mainly divided into the following four types:
  • 1. Nonlinear mapping for low-light enhancement: this approach applies a nonlinear monotonic function for gray-level mapping, such as a power, logarithmic, or exponential function.
  • 2. Histogram equalization for low-light enhancement: considering the uneven histogram distribution of low-luminance images, this method uses histogram equalization to enhance low-light images; by changing the contrast of the image, better visibility is obtained. However, over-enhanced contrast may distort the result.
  • 3. Retinex-based low-light enhancement: Retinex theory enhances a low-light image by decomposing it into an irradiance component and a reflectance component. This method can clearly enhance image details, but halo artifacts often appear in the results.
  • 4. Dehazing-based low-light enhancement: these methods achieve good subjective results, but they may also introduce color distortion due to over-enhanced contrast.
  • in general, because existing low-light image enhancement methods introduce artifacts such as color distortion and contrast distortion while enhancing the image, it is difficult to obtain a naturalness-preserving result; this not only degrades the subjective visual experience but also affects the performance of computer vision algorithms.
  • the present invention provides a low illumination image enhancement method based on camera response characteristics, which solves the problem that existing low illumination enhancement algorithms introduce artifacts while enhancing the image. It obtains an enhanced result with better visual quality and less distortion, and preserves the authenticity of the image to a large extent, thereby producing a naturalness-preserving enhancement result.
  • the principle of the invention is that the response characteristic of the camera can provide a lot of useful information.
  • the invention utilizes the camera's response characteristics: it first selects an exposure model that accurately describes the relationship between images of different exposures and derives the corresponding camera response equation model; it then determines the model parameters from the camera response curve of the camera that captured the low-illumination image, or from two differently exposed images taken by that camera; next, it estimates the exposure ratios between the multi-exposure image sequence to be generated and the original image and uses the exposure model to generate the multi-exposure sequence; finally, an image fusion algorithm fuses these multi-exposure images, yielding an enhanced result with better visual quality and less distortion.
  • a low illumination image enhancement method based on camera response characteristics: select an exposure model and obtain the camera response equation model corresponding to it; estimate the exposure ratios between the multi-exposure image sequence to be generated and the original image, and generate the multi-exposure image sequence using the exposure model; then fuse the multi-exposure image sequence, thereby obtaining an enhanced image result that preserves the naturalness of the image; the method comprises the following steps:
  • step 1) calculates the luminance component Y of the image B by Equation 6:
  • B r , B g , and B b are the three-channel component values of R, G, and B of image B, respectively.
  • the camera response equation model corresponding to the exposure model can be derived as Equation 2:
  • k is the exposure ratio between images B 1 and B 0 ;
  • E is the irradiance of the scene.
  • in the above low illumination image enhancement method, step 22), determining the camera response parameters of the camera response equation model, comprises:
  • Method 1: when the camera response curve of the corresponding camera is known, fit that curve with the camera response equation model using a least-squares fit to obtain the response parameters a and b;
  • Method 2 (Equation 4): B 0 and B 1 represent two differently exposed images of the same scene, from which the exposure-model parameters and then a and b are obtained (an illustrative sketch of this case is given below);
  • Method 3: when there is only one input image and the camera is unknown, multiple real camera response curves are averaged and the mean camera response curve is used as the camera's response curve; that curve is then fitted to obtain the parameters a and b.
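For illustration, the following is a minimal NumPy sketch of Method 2. Because Equations 2 and 4 are reproduced here only as images, the Beta-Gamma relationship assumed below (B1 ≈ β·B0^γ with β = exp(b(1 - k^a)) and γ = k^a) is an assumption consistent with the surrounding description, not a verbatim transcription of the patent's equations.

```python
import numpy as np

def fit_beta_gamma(B0, B1, eps=1e-6):
    # Fit B1 ≈ beta * B0**gamma in the log domain (assumed form of Equation 4).
    x = np.log(np.clip(B0, eps, 1.0)).ravel()
    y = np.log(np.clip(B1, eps, 1.0)).ravel()
    gamma, log_beta = np.polyfit(x, y, 1)      # log B1 = gamma * log B0 + log beta
    return float(np.exp(log_beta)), float(gamma)

def solve_a_b(beta, gamma, k):
    # Recover response parameters a, b from beta, gamma and the known exposure ratio k (k != 1),
    # assuming beta = exp(b * (1 - k**a)) and gamma = k**a.
    a = np.log(gamma) / np.log(k)
    b = np.log(beta) / (1.0 - gamma)
    return a, b
```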
  • in the above low illumination image enhancement method, step 3), estimating the exposure ratio set K, comprises the following steps:
  • the weight matrix of an image is defined by Equation 7 and indicates the exposure level of different pixels in the image:
  • in Equation 7, Y k denotes the luminance component of the image whose exposure ratio to the input image is k; W k is the weight matrix corresponding to Y k ; the larger the weight at a point of the matrix, the closer that point is to proper exposure;
  • in Equations 8 and 9, the max operation takes the element-wise maximum of the corresponding matrices;
  • the obtained exposure ratio is recorded, the weight matrix of the current exposure sequence is updated, and the process returns to step 332) to continue searching for the next exposure ratio;
  • in an embodiment of the invention, the threshold value τ = 0.01.
  • step 4) sequentially makes k * equal to each element in the exposure set K, and generates a multi-exposure image sequence by Equation 5:
  • k * is an element in the exposure ratio set K described above.
  • B 0 is an input low illuminance image
  • B * is a generated image whose exposure ratio to the input image B 0 is k * .
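As an illustration of this step, the sketch below synthesizes an image with exposure ratio k* from the input image; it assumes Equation 5 takes the Beta-Gamma form B* = exp(b(1 - k*^a))·B0^(k*^a) (an assumption, since the equation appears only as an image), with pixel values normalized to [0, 1]. The parameter values in the usage comment are the ones reported later in the embodiment and are illustrative only.

```python
import numpy as np

def generate_exposure(B0, k_star, a, b):
    # Synthesize an image whose exposure ratio to B0 is k_star (assumed Beta-Gamma form of Eq. 5).
    beta = np.exp(b * (1.0 - k_star ** a))
    gamma = k_star ** a
    return np.clip(beta * np.power(np.clip(B0, 0.0, 1.0), gamma), 0.0, 1.0)

# Usage sketch: brighten a low-light image with exposure ratio 4
# B_star = generate_exposure(B0, k_star=4.0, a=-0.3293, b=1.1258)
```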
  • the low illumination image enhancement method described above, further, the image fusion method of step 5) may employ any existing multi-exposure image fusion algorithm.
  • an image fusion method described in the literature (Kede Ma and Zhou Wang, "Multi-Exposure Image Fusion: A Patch-wise Approach," IEEE International Conference on Image Processing, 2015) is specifically used.
  • the method fuses the multi-exposure images and obtains a fused result with good visual quality.
  • the fusion method divides the multi-exposure images into image patches and decomposes each patch into three components: signal strength, signal structure, and mean intensity; finally, the patches of the differently exposed images are fused according to patch strength and exposure, yielding the final fused image.
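The patch-wise method of Ma and Wang is the fusion algorithm actually used; the sketch below is not that method but a deliberately simplified per-pixel weighted average, shown only to make the fusion step concrete. The Gaussian well-exposedness weight and its σ value are assumptions.

```python
import numpy as np

def fuse_naive(images, sigma=0.5):
    # Simplified stand-in for multi-exposure fusion: per-pixel average weighted by well-exposedness.
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros(images[0].shape[:2], dtype=np.float64)
    for img in images:
        Y = img.mean(axis=2)                              # crude luminance proxy
        w = np.exp(-((Y - 0.5) ** 2) / (2 * sigma ** 2))  # favor mid-tone (well-exposed) pixels
        acc += w[..., None] * img
        wsum += w
    return acc / np.maximum(wsum[..., None], 1e-6)
```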
  • the invention provides a low illumination image enhancement algorithm based on camera response characteristics, which can maintain image naturalness.
  • the invention first selects an exposure model capable of accurately describing the relationship between different exposure images, and obtains a corresponding camera response equation model.
  • the parameters of the model are then determined by the low illumination image corresponding to the camera's camera response curve or two differently exposed images taken by the camera.
  • subsequently, the exposure ratios between the multi-exposure image sequence to be generated and the original image are estimated, and the exposure model is used to generate the multi-exposure image sequence.
  • these multi-exposure images are fused using an image fusion algorithm to obtain enhanced results.
  • the invention solves the problem that existing low illumination enhancement algorithms introduce artifacts while enhancing the image, obtains an enhanced result with good visual quality, fewer artifacts, and less distortion, and preserves the authenticity of the image to a large extent, producing a naturalness-preserving enhancement result.
  • the method can be applied to various computer vision fields as an image preprocessing method.
  • FIG. 1 is a flow chart of a low illumination image enhancement method provided by the present invention.
  • FIG. 2 is a schematic diagram of a multi-exposure image sequence and its weight matrix and weight matrix operation results in the embodiment
  • the first row is a multi-exposure image sequence of a scene
  • the second row shows the weight matrices of the multi-exposure image sequence
  • the third row shows the result images of the weight-matrix operations.
  • FIG. 3 is a low illumination input image used in an embodiment of the present invention.
  • FIG. 4 is the enhanced result image obtained in an embodiment of the present invention.
  • the invention provides a low illumination image enhancement algorithm based on camera response characteristics, which can maintain image naturalness.
  • the invention first selects an exposure model capable of accurately describing the relationship between different exposure images, and obtains a corresponding camera response equation model.
  • the parameters of the model are then determined by the low illumination image corresponding to the camera's camera response curve or two differently exposed images taken by the camera.
  • an exposure ratio of the multi-exposure image sequence to be generated to the original image is estimated, and a multi-exposure image sequence is generated using the exposure model.
  • these multi-exposure images are fused using an image fusion algorithm to obtain enhanced results.
  • FIG. 1 is a flow chart of a low illumination image enhancement method provided by the present invention, including the following steps:
  • Step 1 Selection of the exposure model: We chose an exposure model that accurately describes the relationship between different exposure images.
  • the expression for this exposure model is:
  • B 0 and B 1 represent two images of different exposures of the same scene; β and γ are the two parameters of the exposure model.
  • in experiments we observed that, for color images, the β and γ values of the R, G, and B channels differ only slightly, so we assume the three channels share one set of parameters.
  • according to Equation 1, the camera response equation model corresponding to this exposure model is given by Equation 2:
  • k is the exposure ratio between images B 1 and B 0 ;
  • E is the irradiance of the scene;
  • a, b, and c are the parameters of the camera response equation model, and their relationship to the exposure-model parameters β and γ is given in Equation 2.
  • Step 2, determination of the camera response equation model parameters a and b: for a specific camera, its model parameters must be determined. If the camera response curve of the corresponding camera is known, the response parameters a and b can be obtained by fitting that curve with the camera response equation model using a least-squares fit.
  • in Equation 3, n is the number of sample points taken from the response curve; E i is the irradiance of the i-th sample point; y i is the image brightness when the irradiance is E i ; and f is the camera response equation of Equation 2.
  • Equation 4 B 0 and B 1 represent two images of different exposures of the same scene.
  • the parameters a and b of the response model are then obtained from the camera response equation model of Equation 2 and the exposure ratio k between images B 1 and B 0 .
  • Step 3 Low illumination image input: input a low illumination image B, and calculate the luminance component Y of the image, and the luminance component is calculated by Equation 6:
  • B r , B g , and B b are the three-channel component values of R, G, and B of image B, respectively.
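A minimal sketch of this step is given below. Equation 6 is reproduced only as an image in this text, so the Rec. 601 luma coefficients used here are an assumed, common choice and may differ from the patent's exact weights.

```python
import numpy as np

def luminance(B):
    # Luminance component Y of an RGB image B with values in [0, 1] (assumed Rec. 601 weights).
    Br, Bg, Bb = B[..., 0], B[..., 1], B[..., 2]
    return 0.299 * Br + 0.587 * Bg + 0.114 * Bb
```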
  • Step 4, estimation of the exposure ratios K: different multi-exposure image sequences strongly affect the performance of the image fusion algorithm. Therefore, to obtain a better fusion result while keeping the fusion program efficient, the set K of exposure ratios between each image in the multi-exposure sequence and the input image must be chosen reasonably. An ideal exposure ratio set K should use as few images as possible to convey as much scene information as possible. In general, for an image fusion algorithm, properly exposed regions carry more scene information than improperly exposed ones, so the multi-exposure sequence should bring as many points of the scene as possible to proper exposure.
  • the first row of Figure 2 shows a multi-exposure image sequence of a scene. It can be seen that the three images bring the blue sky, the road, and the buildings, respectively, to reasonably proper exposure.
  • Equation 7 To measure the exposure of different pixels in an image, we define the weight matrix of the image as Equation 7:
  • Y k represents the luminance component of the image with the exposure ratio of the input image k
  • W k is the weight matrix corresponding to the image Y k .
  • the second row of Figure 2 shows the weight matrices of the multi-exposure image sequence; the brighter a point, the closer it is to proper exposure.
  • the max operation represents taking the maximum value of the corresponding elements in the matrices W i and W j .
  • Equation 9 we also define the subtraction of the weight matrix as shown in Equation 9:
  • Equation 9 indicates, given the image B j with exposure ratio j, which pixels in the scene are driven toward proper exposure by adding the image B i with exposure ratio i.
  • the third row in Figure 2 shows the resulting image of the weight matrix operation.
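The sketch below illustrates the weight matrix of Equation 7 and the addition and subtraction operations of Equations 8 and 9. Since those equations appear only as images here, the Gaussian form centered on mid-gray, the σ value (0.5, as in the embodiment), and the reading of Equation 9 as a clipped difference are assumptions consistent with the surrounding description.

```python
import numpy as np

def exposure_weight(Y, sigma=0.5):
    # Per-pixel well-exposedness weight (assumed Gaussian form of Eq. 7, centered on mid-gray).
    return np.exp(-((Y - 0.5) ** 2) / (2.0 * sigma ** 2))

def w_add(Wi, Wj):
    # Weight-matrix "addition" of Eq. 8: element-wise maximum.
    return np.maximum(Wi, Wj)

def w_sub(Wi, Wj):
    # Weight-matrix "subtraction" of Eq. 9: extra weight gained by adding image i given image j.
    return np.maximum(Wi - Wj, 0.0)
```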
  • the exposure ratio generator algorithm first computes the exposure weight matrix of the input image and initializes the weight matrix W of the current exposure sequence and the exposure ratio set K. It then searches for the exposure ratio that increases the total exposure weight W of the current sequence the most, and checks whether the increase in total exposure weight is smaller than a preset threshold τ. If it is smaller, the added weight is already negligible, so the loop terminates and K is output; otherwise, the obtained exposure ratio is recorded, the weight matrix W of the current exposure sequence is updated, and the search continues for the next exposure ratio, until the algorithm terminates and outputs K (a minimal sketch of this search follows below).
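A minimal sketch of this greedy search is given below, reusing exposure_weight, w_add, and w_sub from the previous sketch. Algorithm 1 itself is reproduced only as an image, so the candidate-ratio grid, the brightness transform applied to the luminance, and the per-pixel normalization of the stopping test are assumptions.

```python
import numpy as np

def estimate_ratio_set(Y1, a, b, candidates, tau=0.01, sigma=0.5):
    # Greedily add the exposure ratio that most increases the total exposure weight, until the
    # (per-pixel normalized) gain falls below tau. Y1 is the luminance of the input image.
    btf = lambda Y, k: np.clip(np.exp(b * (1 - k ** a)) * np.power(Y, k ** a), 0, 1)
    W = exposure_weight(Y1, sigma)
    K = [1.0]
    while True:
        gains = [(w_sub(exposure_weight(btf(Y1, k), sigma), W).sum(), k) for k in candidates]
        gain, k_best = max(gains)
        if gain / Y1.size < tau:
            return K
        K.append(float(k_best))
        W = w_add(W, exposure_weight(btf(Y1, k_best), sigma))

# Usage sketch: K = estimate_ratio_set(Y, a=-0.3293, b=1.1258, candidates=np.linspace(1.1, 32, 60))
```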
  • Step 5, generating the multi-exposure image sequence: using the parameters a and b obtained in step 2 and the exposure ratio set K obtained in step 4, sequentially set k * equal to each element in the set K and generate the multi-exposure image sequence by Equation 5.
  • k * is an element in the exposure ratio set K described above.
  • B 0 is an input low illuminance image
  • B * is a generated image whose exposure ratio to the input image B 0 is k * .
  • Step 6 image fusion: the image obtained in step 5 is fused by image fusion, and the enhanced result is obtained and output.
  • the following embodiment utilizes the above-described low illumination image enhancement method of the present invention to perform image enhancement on the low illumination image B shown in FIG. 3, including the following steps:
  • Low illumination image input input a low illumination image B, as shown in Figure 3, and calculate the luminance component Y of the image using the formula (Equation 6):
  • Image fusion In this step, any multi-exposure image fusion algorithm can be used to fuse the generated image sequences to obtain enhanced results.
  • an image fusion method described in the literature (Kede Ma and Zhou Wang, "Multi-Exposure Image Fusion: A Patch-wise Approach," IEEE International Conference on Image Processing, 2015) is selected.
  • the method fuses the multi-exposure images and obtains a fused result with good visual quality.
  • the fusion method divides the multi-exposure images into image patches and decomposes each patch into three components: signal strength, signal structure, and mean intensity; finally, the patches of the differently exposed images are fused according to patch strength and exposure, yielding the final fused image.
  • the resulting image obtained by this image enhancement is shown in FIG. 4.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A low-illumination image enhancement method based on camera response characteristics: an exposure model is selected and the corresponding camera response equation model is derived; the parameters of the camera response model are determined; the exposure ratios between the multi-exposure image sequence to be generated and the original image are estimated, and the multi-exposure image sequence is generated; the multi-exposure images are then fused, yielding an enhanced result with better visual quality and less distortion. The method solves the problem that existing low-illumination image enhancement algorithms introduce many artifacts while enhancing the image, and obtains an enhanced result with better visual quality and less distortion, thereby producing a naturalness-preserving image enhancement result.

Description

A low-illumination image enhancement method
Technical Field
The present invention relates to the field of image processing, and in particular to a method for low-illumination image enhancement using camera response characteristics.
Background
A low-illumination image is an image that is locally or globally dark, for example one captured under low lighting. Such images have poor visibility, which seriously affects human observation and the performance of some computer vision algorithms. Computer vision algorithms usually require input images with good visibility, and most of them cannot process low-illumination images directly, so such images often need to be enhanced before further operations. Many enhancement algorithms for low-illumination images have been proposed to address this problem. A low-illumination enhancement algorithm improves the visibility of the result by changing the pixel brightness of the input image. Existing low-illumination enhancement methods fall mainly into the following four categories:
1. Nonlinear mapping for low-light enhancement: these methods apply a nonlinear monotonic function for gray-level mapping, such as a power, logarithmic, or exponential function.
2. Histogram equalization for low-light enhancement: considering the uneven histogram distribution of low-luminance images, these methods use histogram equalization to enhance them, changing the image contrast to obtain better visibility. However, over-enhanced contrast may distort the result.
3. Retinex-based low-light enhancement: Retinex theory enhances a low-light image by decomposing it into irradiance and reflectance components. Such methods can clearly enhance image details, but halo artifacts often appear in the results.
4. Dehazing-based low-light enhancement: these methods achieve good subjective results, but they may also introduce color distortion due to over-enhanced contrast.
Overall, because existing low-light enhancement methods introduce artifacts such as color distortion and contrast distortion while enhancing the image, it is difficult to obtain a naturalness-preserving result; this degrades the subjective visual experience and also affects the performance of computer vision algorithms.
Summary of the Invention
To overcome the above shortcomings of the prior art, the present invention provides a low-illumination image enhancement method based on camera response characteristics. It solves the problem that existing low-illumination enhancement algorithms introduce many artifacts while enhancing the image, obtains an enhanced result with better visual quality and less distortion, and preserves the authenticity of the image to a large extent, thereby producing a naturalness-preserving enhancement result.
The principle of the invention is that the camera's response characteristics provide much useful information. Using these characteristics, the invention first selects an exposure model that accurately describes the relationship between images of different exposures and derives the corresponding camera response equation model; it then determines the model parameters from the camera response curve of the camera that captured the low-illumination image, or from two differently exposed images taken by that type of camera; next, it estimates the exposure ratios between the multi-exposure image sequence to be generated and the original image and uses the exposure model to generate the sequence; finally, an image fusion algorithm fuses these multi-exposure images, yielding an enhanced result with better visual quality and less distortion.
The technical solution provided by the present invention is:
A low-illumination image enhancement method based on camera response characteristics: select an exposure model and obtain the corresponding camera response equation model; estimate the exposure ratios between the multi-exposure image sequence to be generated and the original image, and generate the multi-exposure sequence with the exposure model; then fuse the multi-exposure image sequence, thereby obtaining an enhanced image that preserves naturalness. The method comprises the following steps:
1) Input a low-illumination image B and compute its luminance component Y;
2) Determine the camera response equation model and its parameters, including:
21) Select an exposure model and derive the corresponding camera response equation model;
22) Determine the model parameters of the camera response equation model;
3) Compute the set K of exposure ratios between the multi-exposure image sequence to be generated and the input image B;
4) Generate the multi-exposure image sequence with the exposure model according to the exposure ratio set K;
5) Fuse the resulting multi-exposure image sequence with an image fusion method, obtain the enhanced result, and output it.
In the above method, step 1) computes the luminance component Y of image B by Equation 6:
Figure PCTCN2017095931-appb-000001
where Br, Bg, and Bb are the R, G, and B channel values of image B, respectively.
In the above method, step 21) uses the exposure model of Equation 1:
Figure PCTCN2017095931-appb-000002
where B0 and B1 represent two images of the same scene with different exposures, and β and γ are the two parameters of the exposure model. From Equation 1, the camera response equation model corresponding to the exposure model can be derived as Equation 2:
Figure PCTCN2017095931-appb-000003
where k is the exposure ratio between images B1 and B0, and E is the irradiance of the scene.
In the above method, step 22) determines the camera response parameters of the camera response equation model by one of the following:
Method 1: when the camera response curve of the corresponding camera is known, fit that curve with the camera response equation model using least squares to obtain the response parameters a and b;
Method 2: when the camera response curve is unknown, map the relationship between two differently exposed images of the same scene by Equation 4 to obtain
Figure PCTCN2017095931-appb-000004
Figure PCTCN2017095931-appb-000005
In Equation 4, B0 and B1 represent two images of the same scene with different exposures;
after obtaining
Figure PCTCN2017095931-appb-000006
the parameters a and b of the response model are obtained from the camera response equation model of Equation 2 and the exposure ratio k between images B1 and B0;
Method 3: when there is only one input image and the camera information is unknown, average multiple real camera response curves to obtain a mean response curve and use it as the camera's response curve; then fit that curve to obtain the parameters a and b.
In the above method, step 3), estimating the exposure ratio set K, comprises the following steps:
31) Define the weight matrix of an image by Equation 7, which indicates the exposure level of different pixels in the image:
Figure PCTCN2017095931-appb-000007
In Equation 7, Yk denotes the luminance component of the image whose exposure ratio to the input image is k; Wk is the weight matrix corresponding to Yk; the larger the weight at a point of the matrix, the closer that point is to proper exposure;
32) Define the addition and subtraction of weight matrices: addition by Equation 8 and subtraction by Equation 9:
Figure PCTCN2017095931-appb-000008
Figure PCTCN2017095931-appb-000009
In Equations 8 and 9, the max operation takes the element-wise maximum of the corresponding matrices;
33) Estimate and generate the exposure ratio set K, including:
331) Preset a threshold τ on the total exposure weight, compute the exposure weight matrix W1 of the input image, and initialize the exposure weight matrix of the current exposure sequence W ← W1 and the exposure ratio set K ← {1};
332) Search for the exposure ratio that increases the total exposure weight W of the current exposure sequence the most
Figure PCTCN2017095931-appb-000010
333) When the increase in exposure weight is not smaller than the threshold τ, record the obtained exposure ratio
Figure PCTCN2017095931-appb-000011
and update the weight matrix of the current exposure sequence
Figure PCTCN2017095931-appb-000012
then return to step 332) to continue searching for the next exposure ratio
Figure PCTCN2017095931-appb-000013
334) When the increase in total exposure weight is smaller than the threshold τ, stop and obtain the exposure ratio set K.
In the above method, in an embodiment of the invention, the threshold τ = 0.01.
In the above method, step 4) sets k* in turn to each element of the exposure ratio set K and generates the multi-exposure image sequence by Equation 5:
Figure PCTCN2017095931-appb-000014
In Equation 5, k* is an element of the exposure ratio set K, B0 is the input low-illumination image, and B* is the generated image whose exposure ratio to the input image B0 is k*.
In the above method, step 5) may use any existing multi-exposure image fusion algorithm. In an embodiment of the invention, the image fusion method described in the literature (Kede Ma and Zhou Wang, "Multi-Exposure Image Fusion: A Patch-wise Approach," IEEE International Conference on Image Processing, 2015) is used. This method fuses the multi-exposure images and obtains a fused result with good visual quality. Specifically, it divides the multi-exposure images into image patches and decomposes each patch into three components: signal strength, signal structure, and mean intensity. Finally, the patches of the differently exposed images are fused according to patch strength and exposure, yielding the final fused image.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention provides a low-illumination image enhancement algorithm based on camera response characteristics that preserves image naturalness. It first selects an exposure model that accurately describes the relationship between images of different exposures and derives the corresponding camera response equation model. The model parameters are then determined from the camera response curve of the camera that captured the low-illumination image, or from two differently exposed images taken by that type of camera. Next, the exposure ratios between the multi-exposure image sequence to be generated and the original image are estimated, and the exposure model is used to generate the sequence. Finally, an image fusion algorithm fuses these multi-exposure images to obtain the enhanced result. The invention solves the problem that existing low-illumination enhancement algorithms introduce many artifacts while enhancing the image, obtains an enhanced result with good visual quality, fewer artifacts, and less distortion, and preserves the authenticity of the image to a large extent, producing a naturalness-preserving enhancement result. The method can serve as an image preprocessing step in many computer vision applications.
Brief Description of the Drawings
FIG. 1 is a flow chart of the low-illumination image enhancement method provided by the present invention.
FIG. 2 shows a multi-exposure image sequence, its weight matrices, and the results of the weight-matrix operations in the embodiment;
where the first row is a multi-exposure image sequence of a scene, the second row shows the weight matrices of that sequence, and the third row shows the result images of the weight-matrix operations.
FIG. 3 is the low-illumination input image used in the embodiment of the present invention.
FIG. 4 is the enhanced result image obtained in the embodiment of the present invention.
Detailed Description
The present invention is further described below through embodiments with reference to the drawings, without limiting its scope in any way.
The present invention provides a low-illumination image enhancement algorithm based on camera response characteristics that preserves image naturalness. It first selects an exposure model that accurately describes the relationship between images of different exposures and derives the corresponding camera response equation model. The model parameters are then determined from the camera response curve of the camera that captured the low-illumination image, or from two differently exposed images taken by that type of camera. Next, the exposure ratios between the multi-exposure image sequence to be generated and the original image are estimated, and the exposure model is used to generate the sequence. Finally, an image fusion algorithm fuses these multi-exposure images to obtain the enhanced result. FIG. 1 is a flow chart of the low-illumination image enhancement method provided by the present invention, comprising the following steps:
Step 1, selection of the exposure model: we choose an exposure model that accurately describes the relationship between images of different exposures. The exposure model is expressed as:
Figure PCTCN2017095931-appb-000015
where B0 and B1 represent two images of the same scene with different exposures, and β and γ are the two parameters of the exposure model. In experiments we observed that, for color images, the β and γ values of the R, G, and B channels differ only slightly, so we assume the three channels share one set of parameters.
According to Equation 1, the camera response equation model corresponding to this exposure model is Equation 2:
Figure PCTCN2017095931-appb-000016
where k is the exposure ratio between images B1 and B0; E is the irradiance of the scene; a, b, and c are the parameters of the camera response equation model, and their relationship to the exposure-model parameters β and γ is given in Equation 2.
Step 2, determination of the camera response equation model parameters a and b: for a specific camera, its model parameters must be determined. If the camera response curve of the camera is known, the response parameters a and b can be obtained by fitting that curve with the camera response equation model using least squares.
Figure PCTCN2017095931-appb-000017
In Equation 3, n is the number of sample points taken from the curve; Ei is the irradiance of the i-th sample point; yi is the image brightness when the irradiance is Ei; f is the camera response equation of Equation 2.
If the camera response curve is unknown, the relationship between two differently exposed images of the same scene can be mapped by Equation 4 to obtain
Figure PCTCN2017095931-appb-000018
Figure PCTCN2017095931-appb-000019
In Equation 4, B0 and B1 represent two images of the same scene with different exposures.
After obtaining
Figure PCTCN2017095931-appb-000020
the parameters a and b of the response model are obtained from Equation 2 and the exposure ratio k between B1 and B0.
If only one input image is available and the specific camera is unknown, multiple real camera response curves can be averaged and the resulting mean response curve used to approximate the camera's response curve; fitting this curve then yields the parameters a and b.
Step 3, low-illumination image input: input a low-illumination image B and compute its luminance component Y by Equation 6:
Figure PCTCN2017095931-appb-000021
where Br, Bg, and Bb are the R, G, and B channel values of image B, respectively.
Step 4, estimation of the exposure ratios K: different multi-exposure image sequences strongly affect the performance of the image fusion algorithm. Therefore, to obtain a better fusion result while keeping the fusion program as efficient as possible, the set K of exposure ratios between each image in the multi-exposure sequence and the input image must be chosen reasonably. An ideal exposure ratio set K should use as few images as possible to convey as much scene information as possible. In general, for an image fusion algorithm, properly exposed regions carry more scene information than improperly exposed ones, so the multi-exposure sequence should bring as many points of the scene as possible to proper exposure. The first row of FIG. 2 shows a multi-exposure image sequence of a scene; it can be seen that the three images bring the blue sky, the road, and the buildings, respectively, to reasonably proper exposure.
To measure the exposure of different pixels in an image, we define the weight matrix of the image as Equation 7:
Figure PCTCN2017095931-appb-000022
where Yk denotes the luminance component of the image whose exposure ratio to the input image is k, and Wk is the weight matrix corresponding to Yk. The larger the value at a point of the matrix, the closer that point is to proper exposure and the more scene information it can provide. The second row of FIG. 2 shows the weight matrices of the multi-exposure image sequence; the brighter a point, the closer it is to proper exposure.
As noted above, the properly exposed regions of images with different exposure values are not identical. Given a multi-exposure image sequence of a scene, to measure the exposure of different pixels in the scene we define the addition of weight matrices as Equation 8:
Figure PCTCN2017095931-appb-000023
where the max operation takes the element-wise maximum of the matrices Wi and Wj. We also define the subtraction of weight matrices as Equation 9:
Figure PCTCN2017095931-appb-000024
Equation 9 indicates, given the image Bj with exposure ratio j, which pixels of the scene are driven toward proper exposure by adding the image Bi with exposure ratio i. The third row of FIG. 2 shows the result images of the weight-matrix operations.
Based on the above definitions, the generation algorithm for K can be written as Algorithm 1 (exposure ratio generator):
Figure PCTCN2017095931-appb-000025
Figure PCTCN2017095931-appb-000026
The above exposure ratio generator algorithm first computes the exposure weight matrix of the input image and initializes the weight matrix W of the current exposure sequence and the exposure ratio set K; it then searches for the exposure ratio that increases the total exposure weight W of the current exposure sequence the most
Figure PCTCN2017095931-appb-000027
and checks whether the increase in the total exposure weight is smaller than a preset threshold τ. If it is smaller, the added weight is already negligible, so the loop terminates and K is output; otherwise, the obtained exposure ratio is recorded
Figure PCTCN2017095931-appb-000028
the weight matrix W of the current exposure sequence is updated, and the search continues for the next exposure ratio
Figure PCTCN2017095931-appb-000029
until the algorithm terminates and outputs K.
Step 5, generation of the multi-exposure image sequence: using the parameters a and b obtained in step 2 and the exposure ratio set K obtained in step 4, set k* in turn to each element of K and generate the multi-exposure image sequence by Equation 5.
Figure PCTCN2017095931-appb-000030
In Equation 5, k* is an element of the exposure ratio set K, B0 is the input low-illumination image, and B* is the generated image whose exposure ratio to the input image B0 is k*.
Step 6, image fusion: the images obtained in step 5 are fused with an image fusion method; the enhanced result is obtained and output.
The following embodiment applies the above low-illumination image enhancement method of the present invention to the low-illumination image B shown in FIG. 3, comprising the following steps:
1. Determine the camera response equation model parameters a and b: we assume that the specific camera is unknown, average all the real camera response curves of the DoRF dataset proposed in the literature (Grossberg, Michael D. and Nayar, Shree K., "What is the Space of Camera Response Functions?", IEEE Conference on Computer Vision and Pattern Recognition, 2003) to obtain a mean camera response curve, and determine the camera response equation model parameters a and b using Equation 3:
Figure PCTCN2017095931-appb-000031
Let n = 256, with Ei uniformly distributed on [0, 1]; fitting the curve gives the parameters a = -0.3293 and b = 1.1258 (a sketch of this fitting step is given below).
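Because Equations 2 and 3 are reproduced only as images, the response-function form f(E) = exp(b(1 - E^a)) assumed in the sketch below is an assumption consistent with the description; the averaged DoRF curve (y_mean_curve, a hypothetical name) is not reproduced here and must be supplied.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_crf(E, y):
    # Least-squares fit of the assumed camera response model f(E) = exp(b * (1 - E**a))
    # to (irradiance, brightness) samples taken from the averaged response curve.
    model = lambda E, a, b: np.exp(b * (1.0 - np.power(E, a)))
    (a, b), _ = curve_fit(model, E, y, p0=(-0.3, 1.1), maxfev=10000)
    return a, b

# Embodiment-style usage: n = 256 samples with E uniform on (0, 1]
# E = np.linspace(1.0 / 256, 1.0, 256)
# a, b = fit_crf(E, y_mean_curve)   # y_mean_curve: averaged DoRF response values (hypothetical)
```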
2. Low-illumination image input: input a low-illumination image B, as shown in FIG. 3, and compute its luminance component Y using Equation 6:
Figure PCTCN2017095931-appb-000032
3. Estimation of the exposure ratios K: following Algorithm 1 (exposure ratio generator), first use Equation 7:
Figure PCTCN2017095931-appb-000033
with k = 1 and σ = 0.5, to compute the weight matrix W1 of the input image. Let W ← W1, and then, according to Equation 11:
Figure PCTCN2017095931-appb-000034
compute the corresponding
Figure PCTCN2017095931-appb-000035
Then check whether Equation 12 holds:
Figure PCTCN2017095931-appb-000036
In Equation 12, the threshold is τ = 0.01. If it holds, exit the loop, terminate the program, and output the set K. Otherwise,
record the obtained exposure ratio
Figure PCTCN2017095931-appb-000037
and update the weight matrix W of the current exposure sequence by Equation 13:
Figure PCTCN2017095931-appb-000038
and continue searching for the next exposure ratio
Figure PCTCN2017095931-appb-000039
until the algorithm terminates and the set K is output.
4. Generate the multi-exposure image sequence: set k* in turn to each element of the set K and generate the multi-exposure image sequence according to Equation 5:
Figure PCTCN2017095931-appb-000040
5. Image fusion: in this step, any multi-exposure image fusion algorithm can be used to fuse the generated image sequence and obtain the enhanced result. This example uses the image fusion method described in the literature (Kede Ma and Zhou Wang, "Multi-Exposure Image Fusion: A Patch-wise Approach," IEEE International Conference on Image Processing, 2015). This method fuses the multi-exposure images and obtains a fused result with good visual quality. Specifically, it divides the multi-exposure images into image patches and decomposes each patch into three components: signal strength, signal structure, and mean intensity. Finally, the patches of the differently exposed images are fused according to patch strength and exposure, yielding the final fused image. The result obtained by this image enhancement is shown in FIG. 4.
It should be noted that the purpose of publishing the embodiments is to aid further understanding of the present invention, and those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to what is disclosed in the embodiments; the scope of protection is defined by the claims.

Claims (7)

  1. A low-illumination image enhancement method based on camera response characteristics, comprising: selecting an exposure model and obtaining the camera response equation model corresponding to the exposure model; estimating the exposure ratios between the multi-exposure image sequence to be generated and the original image, and generating the multi-exposure image sequence using the exposure model and the estimated exposure ratios; and fusing the multi-exposure image sequence, thereby obtaining an enhanced image result that preserves image naturalness; the method specifically comprising the following steps:
    1) inputting a low-illumination image and computing its luminance component;
    2) determining the camera response equation model and its parameters, including:
    21) selecting an exposure model and obtaining the camera response equation model corresponding to it;
    22) determining the model parameters of the camera response equation model;
    3) computing the set of exposure ratios between the multi-exposure image sequence to be generated and the input low-illumination image;
    4) generating the multi-exposure image sequence with the exposure model according to the set of exposure ratios;
    5) fusing the resulting multi-exposure image sequence with an image fusion method, obtaining the enhanced result, and outputting it.
  2. The low-illumination image enhancement method according to claim 1, wherein the exposure model of step 21) is Equation 1:
    Figure PCTCN2017095931-appb-100001
    in Equation 1, B0 and B1 are two matrices representing two images of the same scene with different exposure values, and β and γ are the two parameters of the exposure model;
    for a color image, assuming the R, G, and B channels share the β and γ parameters, the camera response equation model corresponding to the model of Equation 1 is Equation 2:
    Figure PCTCN2017095931-appb-100002
    where k is the exposure ratio between images B1 and B0; E is the irradiance of the scene; and a, b, and c are the parameters of the camera response equation model.
  3. The low-illumination image enhancement method according to claim 1, wherein the method of step 22) for determining the camera response parameters a and b of the camera response equation model is selected from one of the following:
    Method 1: when the camera response curve of the corresponding camera is known, fitting that curve with the camera response equation model using least squares to obtain the response parameters a and b;
    Method 2: when the camera response curve is unknown, mapping the relationship between two differently exposed images of the same scene by Equation 4 to obtain
    Figure PCTCN2017095931-appb-100003
    Figure PCTCN2017095931-appb-100004
    in Equation 4, B0 and B1 are two matrices representing two images of the same scene with different exposures;
    after obtaining
    Figure PCTCN2017095931-appb-100005
    the parameters a and b of the response model are obtained from the camera response equation model and the exposure ratio k between B1 and B0;
    Method 3: when there is only one low-illumination image and the camera information is unknown, averaging multiple real camera response curves to obtain a mean response curve, using it as the camera's response curve, and fitting that curve to obtain the parameters a and b.
  4. The low-illumination image enhancement method according to claim 1, wherein estimating the set of exposure ratios in step 3) comprises the following steps:
    31) defining the weight matrix of an image by Equation 7, which indicates the exposure level of different pixels in the image:
    Figure PCTCN2017095931-appb-100006
    in Equation 7, Yk denotes the luminance component of the image whose exposure ratio to the input image is k; Wk is the weight matrix corresponding to Yk; the larger the weight at a point of the matrix, the closer that point is to proper exposure;
    32) defining the addition and subtraction of weight matrices, the addition by Equation 8 and the subtraction by Equation 9:
    Figure PCTCN2017095931-appb-100007
    in Equations 8 and 9, the max operation takes the element-wise maximum of the matrices Wi and Wj;
    33) estimating and generating the exposure ratio set K, including:
    331) placing the input image into the multi-exposure sequence, and presetting a threshold τ on the total exposure weight;
    332) searching for the exposure ratio that most increases the total exposure weight of the input image
    Figure PCTCN2017095931-appb-100008
    333) when the total exposure weight is not smaller than the threshold, recording the obtained exposure ratio
    Figure PCTCN2017095931-appb-100009
    and updating the weight matrix W of the current exposure sequence, then returning to step 332) to continue searching for the next exposure ratio
    Figure PCTCN2017095931-appb-100010
    334) when the total exposure weight is smaller than the threshold, ending the operation and obtaining the exposure ratio set K.
  5. The low-illumination image enhancement method according to claim 1, wherein step 4) generates a multi-exposure image sequence of arbitrary exposure by Equation 5:
    Figure PCTCN2017095931-appb-100011
    in Equation 5, k* is the exposure ratio between the generated image B* and the original image B0.
  6. The low-illumination image enhancement method according to claim 1, wherein step 1) computes the luminance component of the low-illumination image by Equation 6:
    Figure PCTCN2017095931-appb-100012
    where Y is the luminance component of the image, and Br, Bg, and Bb are the R, G, and B channel values of the image, respectively.
  7. The low-illumination image enhancement method according to claim 4, wherein the threshold τ = 0.01.
PCT/CN2017/095931 2017-01-17 2017-08-04 A low-illumination image enhancement method Ceased WO2018133379A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/478,570 US10614561B2 (en) 2017-01-17 2017-08-04 Method for enhancing low-illumination image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710031020.9A CN106875352B (zh) 2017-01-17 2017-01-17 一种低照度图像增强方法
CN201710031020.9 2017-01-17

Publications (1)

Publication Number Publication Date
WO2018133379A1 true WO2018133379A1 (zh) 2018-07-26

Family

ID=59157586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095931 Ceased WO2018133379A1 (zh) 2017-01-17 2017-08-04 一种低照度图像增强方法

Country Status (3)

Country Link
US (1) US10614561B2 (zh)
CN (1) CN106875352B (zh)
WO (1) WO2018133379A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541859A (zh) * 2019-09-23 2021-03-23 武汉科技大学 一种光照自适应的人脸图像增强方法
CN115034991A (zh) * 2022-06-14 2022-09-09 湖北工业大学 用于盾尾间隙智能监测的图像对比度增强方法、系统
CN115619659A (zh) * 2022-09-22 2023-01-17 北方夜视科技(南京)研究院有限公司 基于正则化高斯场模型的低照度图像增强方法与系统
CN116579944A (zh) * 2023-05-11 2023-08-11 中国科学院软件研究所 一种基于零次学习的自适应的低光噪声图像增强方法和系统

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875352B (zh) 2017-01-17 2019-08-30 北京大学深圳研究生院 一种低照度图像增强方法
CN107833184A (zh) * 2017-10-12 2018-03-23 北京大学深圳研究生院 一种基于多曝光生成再融合框架的图像增强方法
CN110232658B (zh) * 2018-03-05 2021-12-03 北京大学 图像去雨方法、系统、计算机设备及介质
US11189017B1 (en) * 2018-09-11 2021-11-30 Apple Inc. Generalized fusion techniques based on minimizing variance and asymmetric distance measures
US11094039B1 (en) 2018-09-11 2021-08-17 Apple Inc. Fusion-adaptive noise reduction
CN109712091B (zh) * 2018-12-19 2021-03-23 Tcl华星光电技术有限公司 图片处理方法、装置及电子设备
CN110211077B (zh) * 2019-05-13 2021-03-09 杭州电子科技大学上虞科学与工程研究院有限公司 一种基于高阶奇异值分解的多曝光图像融合方法
CN110378861B (zh) * 2019-05-24 2022-04-19 浙江大华技术股份有限公司 一种图像融合方法及装置
CN111080565B (zh) * 2019-12-11 2023-07-18 九江学院 基于图像质量变化规律的曝光融合方法、设备和存储介质
CN111429366B (zh) * 2020-03-03 2022-05-17 浙江大学 基于亮度转换函数的单帧弱光图像增强方法
CN111815529B (zh) * 2020-06-30 2023-02-07 上海电力大学 一种基于模型融合和数据增强的低质图像分类增强方法
CN111932471B (zh) * 2020-07-24 2022-07-19 山西大学 用于低照度图像增强的双路曝光度融合网络模型及方法
CN111899193B (zh) * 2020-07-30 2024-06-14 湖北工业大学 一种基于低照度图像增强算法的刑侦摄影系统及方法
CN111915526B (zh) * 2020-08-05 2024-05-31 湖北工业大学 一种基于亮度注意力机制低照度图像增强算法的摄影方法
CN112508814B (zh) * 2020-12-07 2022-05-20 重庆邮电大学 一种基于无人机低空视角下的图像色调修复型去雾增强方法
CN113096033B (zh) * 2021-03-22 2024-05-28 北京工业大学 基于Retinex模型自适应结构的低光照图像增强方法
CN113034413B (zh) * 2021-03-22 2024-03-05 西安邮电大学 一种基于多尺度融合残差编解码器的低照度图像增强方法
CN113034395A (zh) * 2021-03-26 2021-06-25 上海工程技术大学 一种低照度图像增强方法
CN113160096B (zh) * 2021-05-27 2023-12-08 山东中医药大学 一种基于视网膜模型的低光图像增强方法
CN113347369B (zh) * 2021-06-01 2022-08-19 中国科学院光电技术研究所 一种深空探测相机曝光调节方法、调节系统及其调节装置
CN114240767B (zh) * 2021-11-16 2025-09-16 上海赛昉科技有限公司 一种基于曝光融合的图像宽动态范围处理方法及装置
CN114066764B (zh) * 2021-11-23 2023-05-09 电子科技大学 基于距离加权色偏估计的沙尘退化图像增强方法及装置
CN114943652B (zh) * 2022-04-19 2024-12-10 西北工业大学 低照度遥感图像的高动态重建方法及装置
CN114862722B (zh) * 2022-05-26 2023-03-24 广州市保伦电子有限公司 一种图像亮度增强实现方法及处理终端
CN115205137A (zh) * 2022-06-07 2022-10-18 西安工业大学 一种高灰阶焊缝底片图像增强显示方法
CN115660968B (zh) * 2022-09-20 2025-07-25 长春理工大学 一种基于相机成像原理的低照度图像增强方法
JPWO2024127590A1 (zh) * 2022-12-15 2024-06-20
CN116033278B (zh) * 2022-12-18 2024-08-30 深圳市晶帆光电科技有限公司 一种面向单色-彩色双相机的低照度图像预处理方法
CN116129375B (zh) * 2023-04-18 2023-07-21 华中科技大学 一种基于多曝光生成融合的弱光车辆检测方法
CN117014729B (zh) * 2023-09-27 2023-12-05 合肥辉羲智能科技有限公司 一种二次曝光图像融合高动态范围图像的方法及系统
CN118710519B (zh) * 2024-08-28 2024-11-26 杭州高德智感数字科技有限公司 图像融合方法、目标检测方法、监控方法、介质及无人机
CN119450216B (zh) * 2024-11-12 2025-10-28 中国科学院合肥物质科学研究院 一种差分吸收光谱仪的自动曝光方法及系统
CN119359582B (zh) * 2024-12-23 2025-04-15 江西师范大学 一种零参考轻量化的低光照内窥镜视频增强方法
CN120125453B (zh) * 2025-05-15 2025-08-08 山东科技大学 一种基于曝光矫正的水下图像综合清晰化方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661620A (zh) * 2009-09-18 2010-03-03 北京航空航天大学 单幅图像相机响应曲线标定方法
US7840066B1 (en) * 2005-11-15 2010-11-23 University Of Tennessee Research Foundation Method of enhancing a digital image by gray-level grouping
CN106875352A (zh) * 2017-01-17 2017-06-20 北京大学深圳研究生院 一种低照度图像增强方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8346008B2 (en) * 2009-06-09 2013-01-01 Aptina Imaging Corporation Systems and methods for noise reduction in high dynamic range imaging
CN101916431B (zh) * 2010-07-23 2012-06-27 北京工业大学 一种低照度图像数据处理方法及系统
JP2013038504A (ja) * 2011-08-04 2013-02-21 Sony Corp 撮像装置、および画像処理方法、並びにプログラム
JP2013085040A (ja) * 2011-10-06 2013-05-09 Sanyo Electric Co Ltd 画像処理装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7840066B1 (en) * 2005-11-15 2010-11-23 University Of Tennessee Research Foundation Method of enhancing a digital image by gray-level grouping
CN101661620A (zh) * 2009-09-18 2010-03-03 北京航空航天大学 单幅图像相机响应曲线标定方法
CN106875352A (zh) * 2017-01-17 2017-06-20 北京大学深圳研究生院 一种低照度图像增强方法

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DU, LIN ET AL.: "Camera Response Function Calibration Algorithm Based on Single Frame Image", ACTA OPTICA SINICA, vol. 36, no. 7, 31 July 2016 (2016-07-31) *
DU, LIN ET AL.: "High Dynamic Range Image Fusion Based on Camera Response Function", COMPUTER ENGINEERING & SCIENCE, vol. 37, no. 7, 31 July 2015 (2015-07-31), ISSN: 1007-130X *
DU, LIN ET AL.: "Research Progress of Camera Response Function Calibration", LASER & INFRARED, vol. 46, no. 1, 31 January 2016 (2016-01-31), ISSN: 1001-5078 *
MA, K. ET AL.: "Perceptual Quality Assessment for Multi-Exposure Image Fusion", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 24, no. 11, 30 November 2015 (2015-11-30), XP011585930 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541859A (zh) * 2019-09-23 2021-03-23 武汉科技大学 一种光照自适应的人脸图像增强方法
CN112541859B (zh) * 2019-09-23 2022-11-25 武汉科技大学 一种光照自适应的人脸图像增强方法
CN115034991A (zh) * 2022-06-14 2022-09-09 湖北工业大学 用于盾尾间隙智能监测的图像对比度增强方法、系统
CN115619659A (zh) * 2022-09-22 2023-01-17 北方夜视科技(南京)研究院有限公司 基于正则化高斯场模型的低照度图像增强方法与系统
CN115619659B (zh) * 2022-09-22 2024-01-23 北方夜视科技(南京)研究院有限公司 基于正则化高斯场模型的低照度图像增强方法与系统
CN116579944A (zh) * 2023-05-11 2023-08-11 中国科学院软件研究所 一种基于零次学习的自适应的低光噪声图像增强方法和系统

Also Published As

Publication number Publication date
CN106875352B (zh) 2019-08-30
CN106875352A (zh) 2017-06-20
US20190333200A1 (en) 2019-10-31
US10614561B2 (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN106875352B (zh) 一种低照度图像增强方法
Ancuti et al. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images
Peng et al. Generalization of the dark channel prior for single image restoration
EP3542347B1 (en) Fast fourier color constancy
CN102768760B (zh) 一种基于图像纹理的图像快速去雾方法
CN111127476A (zh) 一种图像处理方法、装置、设备及存储介质
CN109410126A (zh) 一种细节增强与亮度自适应的高动态范围图像的色调映射方法
CN109829868B (zh) 一种轻量级深度学习模型图像去雾方法、电子设备及介质
CN107292830B (zh) 低照度图像增强及评价方法
CN112384946A (zh) 一种图像坏点检测方法及装置
WO2020172888A1 (zh) 一种图像处理方法和装置
CN103106644B (zh) 克服彩色图像非均匀光照的自适应画质增强方法
CN115100500A (zh) 一种目标检测方法、装置和可读存储介质
Wang et al. End-to-end exposure fusion using convolutional neural network
CN103020924A (zh) 基于相似场景的低照度监控图像增强方法
Liba et al. Sky optimization: Semantically aware image processing of skies in low-light photography
KR101662407B1 (ko) 영상의 비네팅 보정 방법 및 장치
Gao et al. High dynamic range infrared image acquisition based on an improved multi-exposure fusion algorithm
CN116167932A (zh) 一种图像质量优化方法、装置、设备及存储介质
Yan et al. A natural-based fusion strategy for underwater image enhancement
CN110111341B (zh) 图像前景获取方法、装置及设备
CN106686320A (zh) 一种基于数密度均衡的色调映射方法
CN118887152B (zh) 一种基于Retinex理论的低照度图像质量增强方法和装置
CN110070480A (zh) 一种水下光学图像的模拟方法
CN111091522B (zh) 终端及其多曝光图像融合方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893036

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17893036

Country of ref document: EP

Kind code of ref document: A1