
CN1696975A - Method for enhancing digital image - Google Patents

Method for enhancing digital image

Info

Publication number
CN1696975A
Authority
CN
China
Prior art keywords
image
contrast
formula
window
output
Prior art date
Legal status
Pending
Application number
CN 200410038010
Other languages
Chinese (zh)
Inventor
蒲恬 (Pu Tian)
张捷 (Zhang Jie)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN 200410038010
Publication of CN1696975A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

A method for enhancing a digital image includes: remapping the image's dynamic range, i.e., applying a nonlinear intensity transform to the image in each spectral channel; extracting image contrast, i.e., convolving the image with window templates of different sizes to obtain contrast information at different scales of the digital image, and processing that multi-scale contrast information to improve image quality in different respects; and integrating these improvements through image fusion.

Description

A digital image enhancement method
Technical field
The present invention is a new digital image processing method: a technique that uses a model of optic-nerve dynamics to perform dynamic range compression, image enhancement, color constancy, and color-fidelity enhancement of digital images.
Background technology
At present, the gap between the images output by artificial imaging devices and the true perception of the physiological visual system is a common and often severe problem. It arises from two limitations: 1. Changes in the spectral composition of the ambient illumination cause color distortion in the device's output image — the so-called color constancy problem. 2. The limited output dynamic range of the imaging device causes the output image to lose detail and color information in dimly lit regions of the scene — the so-called dynamic range compression problem. For non-color imaging, the main issue is dynamic range compression: how to make the device's output image reasonably display detail across the whole scene when illumination intensities within the scene differ greatly.
The color constancy problem generally refers to the obvious spectral color deviation between an imaging device's output under artificial illumination and its output under daylight. In conventional photography, photographers currently compensate for this difference by manually selecting different films and combinations of optical filters; for digital imaging devices, the spectral shift can at present only be compensated by manually selecting filters. None of these spectral correction means provides dynamic range compression, so — compared with human visual observation — details in the dark illumination regions of the scene are usually not displayed faithfully at the same time.
The dynamic range compression problem reflects the mismatch between the huge variation of illumination intensity within a scene (often exceeding 10000:1) and the limited output range of common digital images (typically 8-bit quantization, at most 256 levels). This mismatch makes the detail rendition of the imaged output far weaker than human visual perception.
Through hundreds of millions of years of evolution, the human visual system performs outstandingly in both respects: it can image scenes with enormous illumination differences clearly, and it largely preserves correct color perception when the spectral distribution of the environment changes. From the standpoint of visual physiology, building a corresponding machine vision system is therefore a reasonable way to solve this class of problems. The center-surround shunting equation is a biophysical model that electrophysiologically simulates the light-stimulus response of retinal ganglion cells. It was first proposed by Stephen Grossberg, and this optic-neuron model and its extensions have since been applied in related disciplines such as automatic control and pattern recognition.
In image processing, the center-surround shunting equation has seen some applications. Grossberg used the equation as a preprocessing step for synthetic aperture radar images, providing the input for their subsequent noise reduction. In another application, Waxman et al. used the equation to simulate the dual-mode infrared/visible photoreceptors of pit vipers, organically fusing visible-light and infrared images into a single fused image for observation. However, no prior work has used the center-surround shunting equation to achieve dynamic range compression and color constancy simultaneously while producing image results that approach true visual perception.
Summary of the invention
The invention provides a method for improving the quality of black-and-white and color digital images. It can compress input ranges of vastly different dynamic extent into a limited output dynamic range, and at the same time produce color image output that is independent of the spectral distribution of the scene illumination. On this basis, the invention further improves output image quality, so that the output approaches the true perceptual image that the visual system would form of the actual scene under different illumination conditions.
To achieve the above goals, the main flow is as follows:
1. Image dynamic range compression.
Each pixel of the input digital image undergoes a dynamic range adjustment: an extended cone-cell response function is used to transform the image intensity nonlinearly.
2. Image contrast extraction.
The center-surround shunting equation is used to extract image contrast, simulating the stimulus response of retinal ganglion cells. Contrast can be extracted at spatial scales of different sizes, i.e., at different spatial resolutions, and the contrast information from the different scales is then fused so that image information across scales is reasonably included. At this point, the contrast image alone can also be displayed independently, to meet the needs of particular applications.
3. Contrast modulation and output.
The contrast image obtained in step 2 modulates the output image obtained in step 1, reflecting how the perceived contrast stimulus of an optic neuron is affected by the ambient illumination intensity. The modulated signal is then merged with the output of step 1 to form an image output approaching true visual perception, yielding the final display.
Description of drawings
Fig. 1 is the flow chart of the algorithm system.
Fig. 2 is the flow chart of the contrast extraction module.
Fig. 3 is a color constancy example.
Fig. 4 shows single-scale and multi-scale contrast image examples.
Fig. 5 shows applications of the invention to different imaging types; the left column shows the original images and the right column the results.
Embodiment
List of main symbols
I_k(i,j): image intensity of the k-th spectral color channel at coordinate (i,j).
I′_k(i,j): image intensity after dynamic range compression.
r_k(i,j): scene reflectance.
L_k(i,j): scene illumination.
C_k(i,j): intensity within the center window.
S_{k,n}(i,j): intensity within the surround window at scale n.
N: total number of spatial scales used; N ≥ 1.
σ: standard deviation of the Gaussian window. Subscripts c and s denote the parameters of the center and surround windows respectively; subscript n denotes the parameter at scale n.
w: Gaussian template window. Subscripts c and s denote the center and surround windows respectively; subscript n denotes scale n.
*: convolution operator.
A: attenuation constant.
[ω]^+: max(ω, 0).
x_{k,n}, x̄_{k,n}: stimulus-response outputs of the ON and OFF ganglion cells at scale n.
msx_k, msx̄_k: multi-scale stimulus outputs of the ON and OFF cells.
ξ_n: weight factor of the contrast image at scale n.
Gain: gain factor.
offset: DC offset.
d_k(i,j): output contrast image; subscript n denotes scale n.
Out_k(i,j): final output image; subscript n denotes scale n.
The core idea of this method is to carry out digital image processing according to research results in optic-nerve physiology, improving the display quality of images. Human vision adapts strongly both to large intensity changes in the external world and to ambient light stimuli with obvious spectral shifts, so simulating these physiological attributes of human vision can guide image processing. Among the many biophysical models describing different physiological attributes of the visual system, this method relies on a specific realization and extension of the center-surround shunting equation, which describes the stimulus response of retinal ganglion cells, combined with a specific extension of the Weber law describing the response of retinal cone cells, to accomplish image enhancement and color balance.
Step 1:
According to Fig. 1, the scene passes through image acquisition and A/D conversion to become a black-and-white or color digital image I_k(i,j), uniquely identified by pixel spatial coordinates, intensity, and spectral channel; for black-and-white images, the values of the three spectral channels are identical. The digital image then enters the dynamic range compression module and the contrast extraction module for processing.
In the dynamic range compression module, according to the Weber law from physiological studies, the illumination-driven stimulus response of retinal cone cells is approximately proportional to the logarithm of the stimulus intensity. The image intensity after dynamic range compression is therefore:

$$I'_k(i,j) = \log[I_k(i,j)] \qquad (1)$$

This output image intensity simulates the neural image output of the retinal cone cells.
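Formula (1) can be sketched in a few lines of NumPy. The small `eps` term is an implementation assumption (it keeps the logarithm defined at zero-valued pixels) and is not part of the patent text.

```python
import numpy as np

def dynamic_range_compress(image, eps=1e-6):
    """Log-domain dynamic range compression, formula (1):
    I'_k(i,j) = log[I_k(i,j)].
    `eps` guards against log(0) and is an implementation assumption."""
    image = np.asarray(image, dtype=np.float64)
    return np.log(image + eps)

# A four-decade intensity range collapses to a small log range.
frame = np.array([[1.0, 10.0], [100.0, 10000.0]])
compressed = dynamic_range_compress(frame)
```

Note how a 10000:1 input ratio becomes a span of only about 9.2 in the log domain, which is the compression the Weber-law step relies on.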
Step 2:
According to Fig. 2, the contrast extraction module can be divided into N mutually independent contrast extraction operations, one per spatial scale. For any spatial scale n, the response output of the retinal ON and OFF ganglion cells is described by the following center-surround shunting equations:
ON cell response x_{k,n}(i,j):

$$\frac{d}{dt} x_{k,n}(i,j) = -A\,x_{k,n}(i,j) + [1 - x_{k,n}(i,j)]\,C_k(i,j) - [1 + x_{k,n}(i,j)]\,S_{k,n}(i,j) \qquad (2)$$

OFF cell response x̄_{k,n}(i,j):

$$\frac{d}{dt} \bar{x}_{k,n}(i,j) = -A\,\bar{x}_{k,n}(i,j) + [1 - \bar{x}_{k,n}(i,j)]\,S_{k,n}(i,j) - [1 + \bar{x}_{k,n}(i,j)]\,C_k(i,j) \qquad (3)$$
This method uses the Gaussian window function

$$w(i,j) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{i^2 + j^2}{2\sigma^2}\right) \qquad (4)$$

to realize the convolutional filtering of the center and surround windows:

$$C_k(i,j) = I_k(i,j) * w_c(i,j) \qquad (5)$$

$$S_{k,n}(i,j) = I_k(i,j) * w_{s,n}(i,j) \qquad (6)$$
The invention takes the equilibrium-state solution of the center-surround shunting equation as the image's contrast information:

ON cell:

$$x_{k,n}(i,j) = \left[\frac{C_k(i,j) - S_{k,n}(i,j)}{A + C_k(i,j) + S_{k,n}(i,j)}\right]^+ \qquad (7)$$

OFF cell:

$$\bar{x}_{k,n}(i,j) = \left[\frac{S_{k,n}(i,j) - C_k(i,j)}{A + C_k(i,j) + S_{k,n}(i,j)}\right]^+ \qquad (8)$$

As can be seen, the ON and OFF cell outputs extract the positive and negative contrast information of the image at scale n respectively; Fig. 2 shows the decomposed flow of this computation.
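The contrast extraction of formulas (4)-(8) can be sketched as follows, with the center and surround windows realized as separable Gaussian filters. The particular σ values and attenuation constant A used here are illustrative assumptions; the patent only requires a small center window and a larger surround.

```python
import numpy as np

def _gaussian_kernel(sigma):
    # 1-D Gaussian template window (separable form of formula (4)), normalized
    radius = max(1, int(3 * sigma))
    i = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-i**2 / (2.0 * sigma**2))
    return k / k.sum()

def _gaussian_blur(img, sigma):
    # windowed local mean, used for the convolutions of formulas (5) and (6)
    k = _gaussian_kernel(sigma)
    pad = len(k) // 2
    p = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, p, k, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")

def on_off_contrast(image, sigma_c=1.0, sigma_s=5.0, A=50.0):
    """Steady-state ON/OFF ganglion-cell responses, formulas (7)-(8).
    sigma_c, sigma_s and A are illustrative choices, not values fixed
    by the patent text."""
    img = np.asarray(image, dtype=np.float64)
    C = _gaussian_blur(img, sigma_c)      # center window, formula (5)
    S = _gaussian_blur(img, sigma_s)      # surround window, formula (6)
    denom = A + C + S
    x_on = np.maximum((C - S) / denom, 0.0)    # formula (7), [.]^+ rectification
    x_off = np.maximum((S - C) / denom, 0.0)   # formula (8)
    return x_on, x_off
```

Because the numerators of (7) and (8) are negatives of each other and both are rectified, at any pixel at most one of the two responses is nonzero — ON codes positive contrast, OFF codes negative contrast.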
The color constancy of this method follows from formulas (7) and (8): the output image is independent of the ambient spectrum. According to physical optics, the image signal is the product of the surface reflectance in the scene and the incident ambient intensity:

$$I_k(i,j) = r_k(i,j)\,L_k(i,j) \qquad (9)$$

Substituting into formulas (7) and (8) gives (we illustrate with the ON cell; the OFF cell output is symmetric to the ON output and is processed similarly, likewise below):

$$x_{k,n}(i,j) = \left[\frac{r_k(i,j)L_k(i,j) - \bar{r}_{k,n}(i,j)\bar{L}_{k,n}(i,j)}{A + r_k(i,j)L_k(i,j) + \bar{r}_{k,n}(i,j)\bar{L}_{k,n}(i,j)}\right]^+ \qquad (10)$$

where r̄_{k,n} and L̄_{k,n} are the surround-region means of r and L at scale n. If the attenuation constant A is negligible compared with the local light stimulus intensity, i.e. A << I_k(i,j), then — assuming the ambient illumination varies slowly in space while the surface reflectance changes abruptly at scene boundaries, so that L_k(i,j) ≈ L̄_{k,n}(i,j) — we have

$$x_{k,n}(i,j) \approx \left[\frac{r_k(i,j) - \bar{r}_{k,n}(i,j)}{r_k(i,j) + \bar{r}_{k,n}(i,j)}\right]^+ \qquad (11)$$

Thus an image depending only on surface reflectance is generated by canceling the ambient illumination. The approximation conditions of the above formula hold in most natural environments; even where they are not strictly satisfied — for example under the extreme artificial illumination of a laboratory — the influence of the reflectance ratio generally still exceeds that of the ambient illumination ratio. A color constancy example is shown in Fig. 3.
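The cancellation argument of formulas (9)-(11) can be checked numerically: under a spatially uniform illuminant and with A = 0 (modeling the A << I_k regime), the ON response equals [(r − r̄)/(r + r̄)]^+ exactly, whatever the illuminant level. A minimal sketch:

```python
import numpy as np

def on_response(I, I_bar, A=0.0):
    # steady-state ON response, formula (10), with surround mean I_bar;
    # A=0 models the A << I_k regime assumed in the derivation
    return np.maximum((I - I_bar) / (A + I + I_bar), 0.0)

# a reflectance edge, imaged under two spatially uniform illuminants
# differing by three orders of magnitude
r = np.array([0.2, 0.2, 0.8, 0.8])
r_bar = np.full_like(r, r.mean())
x_dim = on_response(r * 1.0, r_bar * 1.0)
x_bright = on_response(r * 1000.0, r_bar * 1000.0)
# both equal [(r - r_bar)/(r + r_bar)]^+ : the illuminant cancels, formula (11)
```

With a real surround window the illuminant cancels only approximately, since L̄ is a local mean rather than exactly equal to L; the uniform-illuminant case above isolates the algebraic cancellation.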
The multi-scale operation simulates the physiological attribute that retinal ganglion cells respond selectively to light stimuli of different scales, and it provides a better visual processing effect. It is realized as a weighted sum of the contrast information at the different scales:

$$msx_k(i,j) = \sum_{n=1}^{N} \xi_n\,x_{k,n}(i,j), \qquad \overline{msx}_k(i,j) = \sum_{n=1}^{N} \xi_n\,\bar{x}_{k,n}(i,j) \qquad (12)$$

where the weight factors satisfy

$$\sum_{n=1}^{N} \xi_n = 1 \qquad (13)$$

Multi-scale contrast processing differs from the single-scale operation in that it uses a center window template of the same size throughout but surround windows of different sizes: for the contrast operation at each scale, a different σ_{s,n} determines the surround window size and template values. The number of scales and the weight assigned to each can be chosen according to actual needs. In general, for conventional digital images, three scales (large, medium, and small) with equal weights are enough to obtain promising results. Single-scale and multi-scale examples are shown in Fig. 4; the multi-scale result there is the weighted sum over three scales (σ_{s,1}=10, σ_{s,2}=50, σ_{s,3}=180).
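A minimal sketch of the weighted multi-scale merge of formulas (12)-(13); the equal-weight default mirrors the three-scale, equal-weight choice recommended in the text.

```python
import numpy as np

def multiscale_contrast(contrasts, weights=None):
    """Weighted per-scale sum, formula (12), under constraint (13).
    `contrasts` is a list of N same-shaped contrast images x_{k,n};
    equal weights (the default) follow the text's suggestion."""
    N = len(contrasts)
    w = np.full(N, 1.0 / N) if weights is None else np.asarray(weights, dtype=np.float64)
    assert np.isclose(w.sum(), 1.0), "weights must satisfy formula (13)"
    return sum(wi * np.asarray(c, dtype=np.float64) for wi, c in zip(w, contrasts))
```

The same function serves both the ON stack (giving msx_k) and the OFF stack (giving the barred multi-scale output).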
The result of single-scale or multi-scale contrast extraction can be output independently to satisfy special requirements; see the corresponding contrast-image output module in Fig. 1. Displaying the contrast image at scale n is realized by:

$$d_{k,n}(i,j) = Gain\,[x_{k,n}(i,j) - \bar{x}_{k,n}(i,j)] + offset \qquad (14)$$

For the multi-scale case:

$$d_k(i,j) = Gain\,[msx_k(i,j) - \overline{msx}_k(i,j)] + offset \qquad (15)$$

The constant DC offset and constant gain factor in formulas (14) and (15) are the same for every color channel.
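Formula (14)/(15) maps the signed contrast to a displayable image. A sketch follows; the particular gain/offset values, which place zero contrast at mid-gray in an 8-bit range, are assumptions rather than values fixed by the patent.

```python
import numpy as np

def contrast_display(x_on, x_off, gain=127.0, offset=128.0):
    """Formula (14)/(15): d = Gain*(x_on - x_off) + offset.
    Since x_on and x_off lie in [0, 1], the assumed gain=127/offset=128
    map zero contrast to mid-gray 128 and the extremes near 1 and 255."""
    d = gain * (np.asarray(x_on, dtype=np.float64) - np.asarray(x_off, dtype=np.float64)) + offset
    return np.clip(d, 0, 255).astype(np.uint8)
```

Keeping gain and offset identical across channels, as the text requires, preserves the channel ratios established by the contrast stage.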
Step 3:
A contrast image alone cannot fully provide an image approaching true visual perception, because the visual system does not respond only to the contrast information in a stimulus: the DC component of the image also influences visual imaging. This method organically combines the DC component and the contrast information of the digital image through a particular realization of contrast-image modulation and merging, providing a processing result that approaches true visual perception; the flow is shown in the modulation-and-merge module of Fig. 1. The process is expressed in formula form as follows:
$$Out_{k,n}(i,j) = Gain\,\{[x_{k,n}(i,j) - \bar{x}_{k,n}(i,j)] \times I'_k(i,j)\} + offset \times I'_k(i,j) \qquad (16)$$

Multi-scale:

$$Out_k(i,j) = Gain\,\{[msx_k(i,j) - \overline{msx}_k(i,j)] \times I'_k(i,j)\} + offset \times I'_k(i,j) \qquad (17)$$

As before, the gain factor and DC offset are the same for every color channel.
Formulas (16) and (17) are the final processing expressions. Extensive experiments show that this processing method is broadly applicable and effective, and provides very satisfactory processing results. Fig. 5 illustrates the results of this method for different imaging types.
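The final merge of formulas (16)-(17) is a pointwise modulation of the log-compressed image by the contrast signal; a minimal sketch, where the gain and offset values are illustrative assumptions (the patent only requires them to be constants shared by all color channels):

```python
import numpy as np

def modulate_and_merge(x_on, x_off, I_prime, gain=10.0, offset=0.5):
    """Formula (16)/(17): Out = Gain*{(x_on - x_off) * I'} + offset * I'.
    I_prime is the log-compressed image of formula (1); x_on/x_off are
    the (single- or multi-scale) ON/OFF contrast responses."""
    contrast = np.asarray(x_on, dtype=np.float64) - np.asarray(x_off, dtype=np.float64)
    I_prime = np.asarray(I_prime, dtype=np.float64)
    return gain * (contrast * I_prime) + offset * I_prime
```

Both terms scale with I', so regions with zero contrast still carry the DC component offset*I', which is exactly the point of step 3.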
This method can be applied wherever results are observed visually, for example in the following typical fields:
(a) Military: providing good military reconnaissance images, giving command decision-making target images rich in detail.
(b) Remote sensing: enhancing target-area image detail and compressing the huge dynamic range of remote sensing images, providing representations suited to human observation.
(c) Medicine: improving the quality of CT, MRI, X-ray and similar images by reasonably enhancing details, providing high-quality medical contrast images so that medical staff can accurately identify physiological tissue and judge the patient's condition.
(d) Civilian: improving the image quality of digital imaging devices, supplying as much reasonable image detail as possible under lossy image compression and storage.
(e) Industry: improving the sharpness of nondestructive testing images, providing high-quality front-end image information for subsequent processing.
Therefore, this method has broad application prospects.

Claims (9)

1. A digital image enhancement method, comprising the following steps:
   a) Image intensity transformation: transform the digital image I_k(i,j) according to formula (1), taking the digital intensity values into the log domain.
   b) Image contrast extraction: perform single-scale contrast extraction using formulas (7) and (8), where formulas (5) and (6) compute the windowed filtering averages of the digital image. The contrast information extracted in this step can be displayed and output independently. The contrast extraction operation can also be carried out in multi-scale space.
   c) Modulate and merge the results of steps a) and b) according to formula (16) to form the final output image; if multi-scale contrast extraction is used in step b), use formula (17) to form the output.
2. The method of claim 1, wherein the image contrast extraction is carried out at multiple scales, comprising the following steps:
   a) Convolve the digital image with surround Gaussian template windows of different sizes, and construct the contrast images at the different scales according to step b) of claim 1.
   b) Construct the multi-scale output contrast image using formula (12).
3. The method of claim 1 or 2, wherein a single-scale contrast image displayed independently is expressed by formula (14), and a multi-scale contrast image displayed independently by formula (15); the gain factor and DC offset in both formulas are constants, the same for every color channel.
4. The method of claim 1 or 2, wherein, in extracting image contrast, the center and surround window functions at every scale use exclusively the Gaussian template window function.
5. The method of claim 1 or 2, wherein the center window at every scale is one pixel in size.
6. The method of claim 1 or 2, wherein, when constructing the surround Gaussian template window at each scale, the standard-deviation constant of the Gaussian template window lies in the interval from 1% to 75% of the larger of the image's length and width.
7. The method of claim 2, wherein the weight coefficients satisfy formula (13).
8. The method of claim 1 or 2, wherein the value of the constant A ranges from 1 to 10 times the upper limit of the input digital image's quantization interval.
9. The method of claim 1 or 2, wherein the displayed output digital image is obtained according to formula (16) or (17), and the gain factor and DC offset therein are constants, the same for every color spectral channel.
CN 200410038010 2004-05-14 2004-05-14 Method for enhancing digital image Pending CN1696975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200410038010 CN1696975A (en) 2004-05-14 2004-05-14 Method for enhancing digital image


Publications (1)

Publication Number Publication Date
CN1696975A true CN1696975A (en) 2005-11-16

Family

ID=35349689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200410038010 Pending CN1696975A (en) 2004-05-14 2004-05-14 Method for enhancing digital image

Country Status (1)

Country Link
CN (1) CN1696975A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100336080C (en) * 2006-01-25 2007-09-05 西安交通大学 Dimensional space decomposition and reconstruction based balanced X-ray image display processing method
CN101188101B (en) * 2006-11-21 2010-06-16 胜华科技股份有限公司 Adjusting device and method for enhancing image contrast
CN101567080B (en) * 2009-05-19 2011-01-26 华中科技大学 Method for strengthening infrared focal plane array image
CN102025979A (en) * 2010-12-14 2011-04-20 中国科学院长春光学精密机械与物理研究所 Infrared video real-time enhancing display device based on dual DSPs (digital signal processors)
CN101167368B (en) * 2005-12-05 2012-03-28 华为技术有限公司 A method and device for realizing arithmetic encoding and decoding
CN102420980A (en) * 2010-09-27 2012-04-18 深圳市融创天下科技股份有限公司 Frame layer and macro block layer quantization parameter adjusting method and system
CN101595719B (en) * 2006-11-27 2012-06-13 杜比实验室特许公司 Apparatus and methods for boosting dynamic range in digital images
CN101821775B (en) * 2007-07-20 2012-07-18 爱克发医疗保健公司 Method of generating multiscale contrast enhanced image
CN101556691B (en) * 2008-04-02 2012-08-22 英属开曼群岛商恒景科技股份有限公司 Apparatus and method for contrast enhancement
CN102881004A (en) * 2012-08-31 2013-01-16 电子科技大学 Digital image enhancement method based on optic nerve network
CN103810683A (en) * 2012-11-14 2014-05-21 三星电子(中国)研发中心 Photo processing method and device
CN103907109A (en) * 2011-06-27 2014-07-02 耶路撒冷希伯来大学伊萨姆研究发展有限公司 Applying rapid numerical approximation of convolutions with filters for image processing purposes
CN108959794A (en) * 2018-07-13 2018-12-07 北京航空航天大学 A kind of structural frequency response modification methodology of dynamics model based on deep learning
CN111640111A (en) * 2020-06-10 2020-09-08 詹俊鲲 Medical image processing method, device and storage medium



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20051116