CN104318550A - Eight-channel multi-spectral imaging data processing method - Google Patents
- Publication number: CN104318550A
- Application number: CN201410539191.9A
- Authority
- CN
- China
- Prior art keywords
- image
- Prior art date
- Legal status: Pending (an assumption by Google Patents, not a legal conclusion; no legal analysis has been performed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention provides an eight-channel multi-spectral imaging data processing method comprising: multi-spectral image preprocessing; multi-spectral image pre-registration; wavelength calibration; radiometric calibration; spectral curve construction; multi-spectral image color calculation; multi-spectral image feature extraction; and spectrum and color matching. The method provides technical personnel with complete, integrated data processing steps and detailed, specific guidance; each step is selected according to the characteristics of eight-channel multi-spectral imaging data, so that a good result is achieved.
Description
Technical Field
The invention relates to a multispectral imaging data processing method, in particular to an eight-channel multispectral imaging data processing method, and belongs to the field of multispectral imaging.
Background
Multispectral imaging is short for multiband spectral imaging technology: an information acquisition and processing technology that integrates imaging and spectroscopy, acquiring the spatial-dimension information of a target and its spectral-dimension information simultaneously. According to the number of spectral bands or the spectral resolution of the acquired information, current spectral imaging technologies can be roughly classified into multi-spectral imaging, hyper-spectral imaging, and ultra-spectral imaging.
The multispectral image data cube combines multiple spectral bands with high spatial resolution, and therefore contains richer surface-feature target information than a full-color image or a pure spectral measurement. From the spectral image data, one can analyze and identify surface-feature targets by matching in both the spatial and spectral dimensions. Spectral imaging technology therefore has wide application prospects in geology and geography, vegetation investigation, atmospheric exploration, ocean remote sensing, agricultural science and technology, environmental monitoring, disaster reduction and prevention, military science and the like. Particularly in military reconnaissance, a spectral imaging system can judge the attributes of targets according to the characteristic spectra radiated or reflected by various weaponry systems and discover military targets that a traditional visible-light reconnaissance system cannot; by analyzing the characteristic spectra, the type and model of weapon equipment can be judged and biochemical toxic gases can be identified and forecasted.
Since the 1970s, multispectral imaging technology has been applied first in the field of aerospace remote sensing and earth observation, and has gradually played a role in agriculture, biomedicine, museum collections, cosmetics, high-precision color printing, computer graphics and other fields. Since the 1990s, research on multispectral imaging technology in the visible-light band has received great attention and is now a leading edge of related research. The International Commission on Illumination (CIE) introduced multispectral imaging technology into Division 8 (color image technology) as technical committee TC8-7 in November 2002.
Multispectral imaging and its data processing already have a certain foundation in China. Existing work on multispectral imaging data, based on image processing, mainly covers steps such as image preprocessing, denoising, correction and registration; however, there is no scientific and comprehensive method for analyzing the images after these processing steps. A systematic and comprehensive analysis method covering the whole of multispectral imaging data processing is therefore needed.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides an eight-channel multispectral imaging data processing method that supplies comprehensive and complete multispectral imaging data processing steps to its users.
Compared with the prior art, the eight-channel multispectral imaging data processing method provided by the invention comprises: step one, multispectral image preprocessing; step two, multispectral image pre-registration; step three, wavelength calibration; step four, radiometric calibration; step five, spectral curve construction; step six, multispectral image color calculation; step seven, multispectral image feature extraction; and step eight, spectrum and color matching.
The invention also provides an eight-channel multispectral imaging data processing method, which comprises the following steps:
step one, multispectral image preprocessing, including spectral image noise removal, CCD uniformity correction, CCD nonlinear correction and the like; the spectral image noise removal removes salt-and-pepper noise by a neighborhood averaging method and a median filtering method; the CCD uniformity correction is implemented by imaging a plane target with uniform brightness distribution, calculating and extracting the uniformity correction coefficients of all pixels on the CCD surface from the response values of each CCD pixel, and storing the coefficients in a calibration file of the software system; the CCD nonlinear correction is realized by measuring targets with different brightness levels and then obtaining nonlinear correction coefficients from the response data of each brightness level by curve fitting or other mathematical methods;
step two, multispectral image pre-registration, including image edge extraction and image transformation according to the feature points obtained after edge extraction; the image edge extraction is carried out by the following method:
for a two-dimensional image signal, smoothing is performed using the following Gauss function:

G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

G(x, y, σ) is a circularly symmetric function whose smoothing effect is controlled by σ; since the smoothing is a linear operation (a convolution), let g(x, y) be the smoothed image, giving g(x, y) = G(x, y, σ) * f(x, y), where f(x, y) is the image before smoothing and * denotes convolution;
since an edge point is a place where the gray value in the image changes drastically, such an abrupt change in image intensity produces a peak in the first derivative, or equivalently a zero crossing in the second derivative; the second derivative along the gradient direction is nonlinear and complicated to calculate, so the Laplacian operator is used instead, namely:

∇²g(x, y) = ∇²[G(x, y, σ) * f(x, y)] = [∇²G(x, y, σ)] * f(x, y)

the zero crossings of ∇²g(x, y) are the edge points, where ∇²G(x, y, σ) is the LOG filter;
the LOG operator approximates the spatial organization of the retinal ganglion receptive field and can be regarded as an excitatory central area surrounded by an inhibitory peripheral area; it appears as a template of size about 6√2σ × 6√2σ. When σ takes different values, the operator can be used to detect image edges at different scales; usually σ ≥ 1 is taken, and the LOG edge detection operator is used to perform edge detection on the image, i.e., to extract the image edges;
the image transformation uses a rigid body transformation, with the formula:

x′ = k(x·cos θ − y·sin θ) + Δx
y′ = k(x·sin θ + y·cos θ) + Δy

in the formula, (x, y) is a point of the first image, i.e. the reference image, and (x′, y′) is the corresponding point in the second image after transformation; k, θ, Δx and Δy are respectively the scale factor, rotation factor and coordinate translation amounts between the first and second images, and these parameters are obtained by manual operation;
step three, wavelength calibration: a cubic spline interpolation method is selected; a spline curve, i.e. a spectral reflectivity function curve, is established through the known wavelength–reflectivity data points corresponding to the number of optical filters; according to the obtained curve, the wavelength range between every two known wavelengths is divided with a uniform step to obtain spectral reflectivity data at the other wavelengths;
step four, radiometric calibration: quantitative relations between the spectral radiance values at the entrance pupil of the imaging spectrometer and the digital quantization values it outputs in different spectral bands are established through various standard radiation sources; a target light source with known spectral radiance characteristics is selected as the measuring object of the multispectral imager, and the working state and environment of the multispectral imager are then simulated;
step five, spectral curve construction: a spectral reflectivity curve is established using a cubic spline interpolation function;
step six, multispectral image color calculation, including calculating the tristimulus values of the image colors, the RGB values of the color image, the color differences of the image, and the near-infrared brightness factor β;
step seven, multispectral image feature extraction, including feature selection and feature extraction;
and step eight, spectrum and color matching, including color matching based on tristimulus values, color matching based on spectra, and spectrum matching.
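The eight steps above can be sketched as a simple sequential pipeline over the data cube. This is a minimal illustration only: every function name below is ours, and each stage body is a placeholder, not the patent's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the eight steps as a pipeline over a
# (channels, height, width) data cube; every stage is a placeholder.

def preprocess(cube):            # step 1: denoise + CCD corrections
    return np.clip(cube, 0, None)

def preregister(cube):           # step 2: edge-based rigid registration
    return cube

def calibrate_wavelength(cube):  # step 3: cubic-spline wavelength calibration
    return cube

def calibrate_radiance(cube):    # step 4: radiometric calibration
    return cube

def build_spectra(cube):         # step 5: spectral reflectivity curves
    return cube

def compute_color(cube):         # step 6: tristimulus / RGB / color difference
    return cube

def extract_features(cube):      # step 7: feature selection and extraction
    return cube

def match_spectrum_color(cube):  # step 8: spectrum and color matching
    return cube

PIPELINE = [preprocess, preregister, calibrate_wavelength, calibrate_radiance,
            build_spectra, compute_color, extract_features, match_spectrum_color]

def process(cube):
    """Run the eight stages in order on an 8-channel data cube."""
    for stage in PIPELINE:
        cube = stage(cube)
    return cube
```

In a real system each placeholder would be replaced by the corresponding method described in the detailed embodiment below.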
The eight-channel multispectral imaging data processing method provided by the invention supplies complete and comprehensive data processing steps, gives very detailed and specific guidance to technical personnel, and selects a corresponding method according to the characteristics of the eight-channel multispectral imaging data, thereby achieving a good effect.
Drawings
FIG. 1 is a schematic diagram of a Laplacian operator common template;
FIG. 2 is a schematic diagram of a LOG operator 5 × 5 order template;
FIG. 3 is a schematic view of a radiometric calibration device.
Detailed Description
The invention provides an eight-channel multispectral imaging data processing method. In order to make the purpose, technical scheme and effect of the invention clearer and more explicit, the invention is described in further detail below with reference to the attached drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Examples
In the eight-channel multispectral imaging data processing method disclosed by this embodiment, the data acquisition system comprises eight channels; each channel consists of a fixed-focal-length visible-light CCD objective lens, a narrow-band interference filter of a particular wavelength, a CCD sensor, an image acquisition card, an industrial control computer and a data processing software system. The 8 narrow-band interference filters of different wavelengths cover the visible and near-infrared bands, and an optical lens is installed in front of each of the 8 channels; the center wavelength and bandwidth (FWHM) parameters are shown in the table below.
In order to improve the radiometric accuracy and precision of the multispectral imaging system, the collected multispectral image data cube generally needs to be preprocessed. The preprocessing mainly comprises the contents of spectral image noise removal, CCD uniformity correction, CCD nonlinear correction and the like.
The random noise of black and white spots in the product images of each stage of the imaging spectrometer, caused by the performance of the focal-plane sensor, the target brightness level, circuit noise, etc., is often called impulse noise or salt-and-pepper noise. Salt-and-pepper noise greatly reduces image quality, so detection and processing are necessary. A salt-and-pepper noise image is characterized by noise points distributed uniformly over the whole image; the percentage of noise points among the total number of pixels is called the noise rate. There are various methods for detecting salt-and-pepper noise. One common method is to judge, from the signal-level difference between the pixel to be tested and its surrounding pixels, whether the pixel's signal level is abnormal; if so, the pixel belongs to a salt-and-pepper noise point.
The traditional salt-and-pepper denoising methods include the neighborhood averaging method and the median filtering method.
(1) Neighborhood averaging method
The neighborhood averaging method replaces the noise value a(i, j) with the average value a(i, j)′ of the DN values of the surrounding pixels, for example averaging over the 8 surrounding points:

a(i, j)′ = (1/8) · Σ a(m, n), where (m, n) ranges over the 8-neighborhood of (i, j), (m, n) ≠ (i, j)
(2) median substitution method
The median substitution method sorts a number of points around the noise point and substitutes the sorted median for it, for example selecting nine points in the same row centered on the noise point:

a(i, j)′ = mid(a(i, x)), x ∈ [j−4, j+4]
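Both traditional denoising methods can be sketched in a few lines of NumPy. The function names are ours, and the median filter below uses a 3 × 3 window rather than the in-row nine-point variant above:

```python
import numpy as np

def neighborhood_average(img, i, j):
    """Mean of the 8 DN values surrounding pixel (i, j), excluding the pixel itself."""
    patch = img[i-1:i+2, j-1:j+2].astype(float)
    return (patch.sum() - float(img[i, j])) / 8.0

def median_filter3(img):
    """3x3 median filter via edge-padded sliding windows (pure NumPy)."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[di:di + img.shape[0], dj:dj + img.shape[1]]
                      for di in range(3) for dj in range(3)])
    return np.median(stack, axis=0)
```

For isolated salt-and-pepper points, the median filter removes the outlier while preserving edges better than the neighborhood average.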
for an imaging type radiometric system, the whole image plane of an image sensor is required to have uniform radiation response characteristics under ideal conditions; that is, when the measured object has a uniform brightness distribution, the response of the entire image plane of the image sensor should also be uniform, otherwise the measurement result will not accurately reflect the brightness relationship between various objects in the measured scene. However, in many cases, the front optical system generates a vignetting phenomenon at a focal plane, i.e., an image sensor surface; in addition, the image sensor CCD itself may have an uneven response characteristic of the surface; therefore, uniformity correction of the image sensor CCD is required in the above case.
CCD uniformity correction in spectral imaging systems is also referred to in many documents as "relative radiometric correction" or "relative radiometric calibration"; the purpose is the same. Note that another basic concept in the calibration of spectral imaging systems, "absolute radiometric correction" or "absolute radiometric calibration", is completely different from "relative radiometric correction": it refers to calibrating the relationship between the response of the CCD and the radiance value of the target under test. In general, "absolute radiometric correction" is performed after "relative radiometric correction" is completed.
One of the basic methods for CCD uniformity correction is to image a planar object with uniform brightness distribution, then calculate and extract the uniformity correction coefficients for each pixel on the CCD surface from the response values of each pixel of the CCD, and save these coefficients into the calibration file of the software system. In many cases, CCD uniformity correction can be combined with CCD non-linearity correction.
The CCD nonlinear correction refers to the correction of the nonlinear response characteristic of the multispectral imager CCD. A conventional CCD sensor has a linear photoelectric response characteristic in a certain illumination intensity range, but when the intensity of an optical signal received by the CCD is small or high, the response characteristic of the CCD deviates from linearity. Therefore, nonlinear correction of the CCD is required in many cases.
One of the basic methods for CCD non-linearity correction is to measure objects having different brightness levels and then use curve fitting or other mathematical methods to obtain non-linearity correction coefficients based on the response data of each brightness level.
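This curve-fitting step can be illustrated with a simple polynomial fit. This is a hedged sketch: the synthetic response model and the quadratic degree are our assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical per-pixel response: DN deviates from linearity at high radiance.
radiance = np.linspace(1.0, 10.0, 10)        # known brightness levels
dn = 5.0 * radiance + 0.1 * radiance**2      # measured (nonlinear) DN response

# Fit the DN-vs-radiance response with a quadratic; the fitted coefficients
# characterize the nonlinearity and can be stored as correction data.
coeffs = np.polyfit(radiance, dn, deg=2)     # recovers approx [0.1, 5.0, 0.0]
```

Inverting the fitted curve (or, equivalently, fitting radiance against DN) then yields corrected radiance values for arbitrary DN readings.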
(1) Perform calibration experiments on the multispectral imager CCD at 10 radiance levels, using the uniform light emitted by an integrating sphere as the light source, obtaining 10 frames of DN data of 768 × 576 pixels from the CCD response.
(2) Measure the output of each pixel of the CCD chip many times without illumination, obtaining multiple frames of dark-current DN values of 768 × 576 pixels.
The correction method comprises the following steps:
The CCD uniformity correction experimental data are used to calculate the correction coefficients of the nonlinear response of each row and column of the CCD chip, i.e., each pixel obtains a set of polynomial coefficients. The calibration process is analyzed in detail below.
(1) Average the multiple dark-current measurements of each pixel on the CCD chip to obtain Ai,j, and average over the whole chip to obtain the chip mean A.
(2) Record the DN output value of the pixel in row i, column j of the CCD chip at the n-th radiance level as (DNn)i,j. Record the DN values of all pixels on the CCD chip at the 10 radiance levels, and calculate the average value of the DN data over all pixels of the chip at the n-th level, recorded as DNn.
(3) Using the pixel dark-current value Ai,j, the chip dark-current mean A, the response (DNn)i,j of each pixel at each radiance level and the mean DNn, calculate the uniformity correction coefficient an,i,j; the calculation formula is equation set (1).
(4) Solving equation set (1) yields the polynomial coefficients ak, k = 1, 2, …, n, corresponding to each pixel. After all pixels on the CCD are processed, the coefficients can be stored in a three-dimensional array of the form a[k][i][j]. For the same pixel on the CCD, the different brightness levels yield a set of coefficients; this set is curve-fitted and the fitting coefficients are stored.
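Since equation set (1) is not reproduced in the text, the calculation can only be sketched under an assumption: a common flat-field form in which each pixel's gain maps its dark-subtracted response to the chip-wide mean at every level. The function below is our illustration, not the patent's equation set (1) verbatim.

```python
import numpy as np

def uniformity_coefficients(levels, dark):
    """levels: (N, H, W) DN frames at N radiance levels; dark: (H, W) mean
    dark-current frame A(i, j).  Returns per-level, per-pixel gains a_n(i, j)
    such that a_n(i, j) * (DN_n(i, j) - A(i, j)) equals the chip mean
    dark-subtracted response at level n (assumed flat-field form)."""
    signal = levels - dark[None, :, :]           # dark-current-subtracted response
    mean_per_level = signal.mean(axis=(1, 2))    # chip-wide mean at each level
    return mean_per_level[:, None, None] / signal
```

Fitting a polynomial through one pixel's coefficients across the levels then gives the stored a[k][i][j] data described above.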
The registration of multispectral images is one of the important links of the data processing of the whole spectral imaging system, and is related to the accurate fusion problem of spectral dimension and image dimension. In a data cube of the multispectral imager, due to assembly errors of an optical system and target motion existing in the sampling process or motion of the instrument, differences in geometric positions, amplification rates and the like exist among multi-channel images of the same scene, so that an image registration link is necessary to be adopted in data processing of the multispectral imager, and images of the data cube are accurately registered in a spatial dimension.
The cross-correlation method is an image gray scale information-based method that is suitable for registration between images obtained by the same sensor. The method requires a large amount of calculation, is not suitable for processing the problems of nonlinear deformation and local deformation, and needs to be improved.
The fourier transform method is to apply the phase correlation and other techniques to process the image registration with rotation, translation and scaling mismatch after performing fast fourier transform on the image. However, the fourier transform method cannot deal with the problems of non-linear deformation and the like and the registration of images with different gray attributes.
Point mapping is the most commonly used registration method when the mapping between the two images is unknown. However, the position accuracy of the feature points is easily affected by subjective human judgment, so an accurate and stable registration result cannot always be obtained; for this reason the point mapping method often uses feedback among its various stages to find the optimal transformation.
Elastic modeling is currently mainly used for registration between medical images. The registration method based on wavelet transform has attracted high attention in recent years, and has the characteristic of greatly reducing the calculation amount during image registration.
Considering that the 8-channel imaging spectrometer has a small number of channels and no strict limit on processing time, the image registration of the invention adopts a method of extracting image feature points through manual intervention and matching them; this method can be classified as a point mapping method.
The point mapping method needs to extract edge details of each image before registration so as to extract feature points or feature matching vectors. The existing mature methods of image edge detection can be used, and the edge detection operators adopted by people at present can be roughly divided into two categories, namely a first order differential operator and a second order differential operator. The first order differential operator mainly comprises a gradient operator, a Roberts cross operator, a Prewitt operator and a Sobel operator; the second order differential operator mainly refers to Laplacian operator. These operators are all responsive to gray level changes, or average gray level changes.
If a Gauss transform is added before the second-order differential Laplacian operator, the well-known LOG operator, i.e., the Laplacian of Gaussian, is formed.
The edge detection operators have their respective characteristics. The operator used for edge extraction in the invention depends on the analysis of the actual application effect. To this end we apply various edge extraction operators to the 8-channel image sequence of the present invention.
Although the edges extracted by the first-order gradient operator and the Roberts operator are not especially clear, the edge characteristics of the original image are basically preserved; however, the extraction of small borders in the image, such as windows, is poor. Conversely, the Sobel and Prewitt operators extract clear image edges, and small frames relative to the overall outline can be extracted completely, but these operators easily produce double edges. The Laplacian operator is a second-derivative operator and is sensitive to noise in the image; moreover, after the second differentiation the extracted boundary pixels are not on the original boundary, i.e., the second-order differential operator has no directionality and its positioning distortion cannot be corrected. The Laplacian operator is therefore not considered for the actual edge extraction.
The contour edge is extracted most clearly by the LOG operator, and the edge loss phenomenon is least, so the invention finally selects the LOG operator to finish the image edge extraction work of image registration.
The edge detection operator adopted by the invention is a LOG operator, which is realized by adding Gauss transformation on the basis of the Laplacian operator, so that the Laplacian operator needs to be introduced first.
The Laplacian operator is a second-derivative operator. For a continuous function f(x, y) it defines the Laplacian value at position (x, y) as:

∇²f = ∂²f/∂x² + ∂²f/∂y²

In a digital image the partial derivatives are approximated by finite differences; the resulting approximation is centered on [i, j+1], and replacing j by j−1 re-centers it on [i, j], giving the discrete Laplacian:

∇²f ≈ f(i+1, j) + f(i−1, j) + f(i, j+1) + f(i, j−1) − 4·f(i, j)
In digital image processing, the Laplacian value of a function can also be computed with various templates. The basic requirement for the template is that the coefficient corresponding to the central pixel should be positive, the coefficients corresponding to the neighbors of the central pixel should be negative, and the sum of all coefficients should be zero. Two commonly used Laplacian operator templates are shown in FIG. 1.
The Laplacian operator is a second-derivative operator and is therefore quite sensitive to noise in the image. In addition, it often produces edges that are two pixels wide and provides no information on the edge direction. For these reasons, the Laplacian operator is rarely used directly to detect edges; it is mainly used, after edge pixels are known, to determine whether they lie on the dark side or the bright side of a region of the image.
For the Laplacian operator, after the second differentiation the extracted boundary pixels are not on the original boundary; because the second-order differential operator has no directivity, this positioning distortion cannot be corrected. Positioning distortion is therefore a problem for the Laplacian operator.
The image registration of the present invention first uses a LOG filter for edge extraction. Among the various methods of image edge extraction, the earliest proposed edge detection operators were the gradient operator and the Laplacian operator, since larger variations in gray scale always correspond to larger derivatives. The gradient method is simple to compute and convenient to implement in software, but it generates a wide response in the area near a boundary, so the result often needs thinning, which affects the positioning precision and the quality of the boundary; the Laplacian operator is sensitive to high frequencies and is therefore strongly affected by high-frequency noise. To effectively suppress the influence of high-frequency noise, an improved method is to smooth the image appropriately first to suppress noise, and then differentiate, i.e., LOG filter edge detection.
For a two-dimensional image signal, smoothing is performed using the following Gauss function:

G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

G(x, y, σ) is a circularly symmetric function whose smoothing effect is controlled by σ. Since the smoothing is linear, realized mathematically as a convolution, let g(x, y) be the smoothed image; then g(x, y) = G(x, y, σ) * f(x, y), where f(x, y) is the image before smoothing and * denotes convolution.
Since an edge point is a place where the gray value in the image changes drastically, such an abrupt change in image intensity generates a peak in the first derivative, or equivalently a zero crossing in the second derivative; the second derivative along the gradient direction is nonlinear and complicated to calculate, so the Laplacian operator is used instead, that is:

∇²g(x, y) = [∇²G(x, y, σ)] * f(x, y)

The zero crossings of ∇²g(x, y) are taken as edge points, where ∇²G(x, y, σ) is the LOG filter.
The above equation is the LOG edge detection operator. The LOG operator has a Mexican-hat shape; it approximates the spatial organization of the retinal ganglion receptive field and can be viewed as consisting of an excitatory central area and an inhibitory peripheral area, usually appearing as a template of size about 6√2σ × 6√2σ. When σ takes different values, the operator can be used to detect image edges at different scales. Generally, we take σ ≥ 1.
The LOG filter has two significant features:
(1) The Gauss function part G of the filter blurs the image, effectively eliminating image intensity variations at all scales far smaller than the Gauss distribution space constant σ. The Gauss function is chosen for blurring because it is smooth and localized in both the spatial and frequency domains, and therefore is least likely to introduce changes that were not present in the original image.
(2) The filter uses the Laplacian operator, which reduces the amount of calculation. If first-order directional derivatives of the image such as ∂g/∂x or ∂g/∂y were used, their peaks and valleys would have to be found along each orientation; if second-order directional derivatives such as ∂²g/∂x² or ∂²g/∂y² were used, their zero crossings would have to be detected. All of these operators share a common disadvantage: they are directional, i.e., orientation dependent. To avoid the computational burden of directionality, an orientation-independent operator should be selected; the lowest-order isotropic differential operator is exactly the Laplacian operator ∇².
In the actual programming process of the present invention, the LOG operator can be implemented with the help of templates. FIG. 2 is a 5 × 5 template of the LOG operator employed in the present invention.
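As a concrete illustration, a LOG kernel can be sampled and applied directly. This is a pure-NumPy sketch under our own sampling choices; the particular 5 × 5 template of FIG. 2 is not reproduced here.

```python
import numpy as np

def log_kernel(sigma, size):
    """Sample the Laplacian of Gaussian ((x^2+y^2-2*sigma^2)/sigma^4)*G on a grid."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    k = (x**2 + y**2 - 2 * sigma**2) / sigma**4 * g
    return k - k.mean()          # force zero sum so flat regions give no response

def convolve2d(img, k):
    """Direct 2-D correlation with edge padding (small images only)."""
    kh, kw = k.shape
    p = np.pad(img, kh // 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + kh, j:j + kw] * k).sum()
    return out

def zero_crossings(r):
    """Mark pixels where the LOG response changes sign between neighbors."""
    h = np.sign(r[:, :-1]) * np.sign(r[:, 1:]) < 0
    v = np.sign(r[:-1, :]) * np.sign(r[1:, :]) < 0
    edges = np.zeros(r.shape, bool)
    edges[:, :-1] |= h
    edges[:-1, :] |= v
    return edges
```

Applying `zero_crossings(convolve2d(img, log_kernel(1.0, 5)))` to an image marks its LOG edges; a production system would use a separable or FFT-based convolution instead of the explicit loop.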
After the edge extraction of the image is completed, the image transformation can be carried out according to the characteristic points, so that the registration is realized. Aligning one image with another image often requires a series of transformations of one image, which can be classified as rigid body transformations, affine transformations, projective transformations, and nonlinear transformations.
(1) Rigid body transformations
If the distance between any two points in the first sub-image remains unchanged after transformation into the second sub-image, the transformation is called a rigid body transformation. A rigid body transformation can be decomposed into translation, rotation and reflection (mirroring). In two-dimensional space, the transformation of a point (x, y) to a point (x′, y′) by a rigid body transformation is:

x′ = k(x·cos φ − y·sin φ) + tx
y′ = k(x·sin φ + y·cos φ) + ty

where φ is the angle of rotation, (tx, ty) is the translation vector, and k is the scaling factor.
(2) Affine transformations
If every straight line on the first sub-image is mapped to a straight line on the second sub-image and parallel lines remain parallel, the transformation is called an affine transformation. An affine transformation can be decomposed into a linear (matrix) transformation and a translation. In 2D space, the transformation formula is:

(x′, y′)ᵀ = A·(x, y)ᵀ + t

where A is a 2 × 2 real matrix and t = (tx, ty)ᵀ is a translation vector.
(3) Projective transformation
If a straight line on the first sub-image is still mapped to a straight line on the second sub-image but the parallel relation is in general not preserved, the transformation is a projective transformation, which can be represented by a linear (matrix) transformation in a higher-dimensional (homogeneous coordinate) space. The transformation formula is:

x′ = (a11·x + a12·y + a13) / (a31·x + a32·y + a33)
y′ = (a21·x + a22·y + a23) / (a31·x + a32·y + a33)
(4) non-linear transformation
The non-linear transformation may transform a straight line into a curved line. In 2D space, it can be represented by the following formula:
(x′,y′)=F(x,y)
where F denotes any functional form that maps the first sub-image onto the second sub-image. Typical non-linear transformations are e.g. polynomial transformations, which in 2D space can be written as follows:
x′ = a00 + a10·x + a01·y + a20·x² + a11·x·y + a02·y² + …
y′ = b00 + b10·x + b01·y + b20·x² + b11·x·y + b02·y² + …
the non-linear transformation is suitable for image registration problems with global deformation and registration cases where the whole body is approximately rigid but the part has deformation.
Given the characteristics of the images collected by the invention, there are rotation and translation differences between the images but the difference in magnification is very small, so the rigid-body transformation is mainly selected to process the images. The rigid-body transformation formula adopted in the invention is:

x′ = k(x·cosθ − y·sinθ) + Δx
y′ = k(x·sinθ + y·cosθ) + Δy

in the formula: (x, y) is a point of the first image, i.e. the reference image, which corresponds to (x′, y′) in the second image after transformation; k, θ, Δx and Δy are respectively the scale factor, the rotation factor and the coordinate translation amounts of the first and second images. These parameters are all obtained manually in the image registration interface of the data processing software of the invention.
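A hedged sketch of resampling an image under this rigid-body formula. The patent obtains k, θ, Δx, Δy manually; here they are plain function arguments, and nearest-neighbour inverse mapping (our choice, not stated in the patent) is used so every output pixel is filled:

```python
import numpy as np

def rigid_register(image, k, theta, dx, dy, fill=0):
    """Resample `image` under x' = k*R(theta)*x + (dx, dy) by inverse mapping.

    For each output pixel the transform is inverted to find the source
    pixel (nearest neighbour), so the output grid has no holes.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # invert: x = R(-theta) * (x' - t) / k
    c, s = np.cos(theta), np.sin(theta)
    u = ((xs - dx) * c + (ys - dy) * s) / k
    v = (-(xs - dx) * s + (ys - dy) * c) / k
    ui = np.rint(u).astype(int)
    vi = np.rint(v).astype(int)
    inside = (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
    out = np.full_like(image, fill)
    out[inside] = image[vi[inside], ui[inside]]
    return out

# Pure translation: theta = 0, k = 1, shift by (dx=2, dy=1).
img = np.arange(25.0).reshape(5, 5)
shifted = rigid_register(img, k=1.0, theta=0.0, dx=2, dy=1)
# output pixel (y=1, x=2) now holds img[0, 0]
```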
The purpose of spectral (wavelength) calibration of an imaging spectrometer is to determine the centre wavelength of each of its spectral bands. The spectral calibration result is one of the main performance indexes of the imaging spectrometer, and its accuracy directly affects the reliability of the measured data.
The wavelength calibration method of the imaging spectrometer can refer to the calibration method of the same type of spectral measuring instruments.
For example, a standard linear light source with known wavelength or a target with known spectral reflectivity characteristics is used for calibrating a specific wavelength point, and the rest wavelength positions are determined by a nonlinear or linear interpolation method.
Although calibration of existing spectral instruments is a mature technology with many schemes available for reference, the spectral calibration of the 8-channel imaging spectrometer has its particularities: the light-splitting system of the 8-channel multispectral imager of the invention consists of 8 narrow-band filters with fixed wavelengths, so the raw measured data cube consists of 8 channels whose wavelengths are known. The response data of the remaining spectral bands are obtained by non-linear interpolation from the 8-channel data, so the specific wavelength values of the interpolated bands are likewise obtained by non-linear interpolation.

The invention selects the cubic spline interpolation method to fit an 8-point spline curve through the known wavelength-reflectance data, i.e. a spectral reflectance function curve. From the resulting function curve, the interval between every two known wavelengths is divided with a uniform step size, yielding the spectral reflectance data at the other wavelengths.

The wavelength calibration accuracy of the invention therefore depends mainly on the accuracy of the 8 fixed wavelengths, the interval width between every two wavelength points, and the degree of non-linearity of the interpolated spectral curve.
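The cubic-spline step can be sketched with `scipy.interpolate.CubicSpline`. The 8 centre wavelengths and reflectance values below are hypothetical placeholders, not the patent's actual filter wavelengths:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 8 channel centre wavelengths (nm) and measured reflectances;
# the real filter wavelengths of the patent's imager are not given here.
wavelengths = np.array([420, 490, 560, 630, 700, 780, 860, 940], dtype=float)
reflectance = np.array([0.05, 0.08, 0.12, 0.10, 0.35, 0.52, 0.55, 0.56])

spline = CubicSpline(wavelengths, reflectance)

# Resample on a uniform 5 nm grid between the first and last channel.
grid = np.arange(420, 941, 5)
smooth = spline(grid)

# The spline is an interpolant: it passes exactly through the 8 known points.
assert np.allclose(spline(wavelengths), reflectance)
```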
Radiometric calibration here refers to absolute radiometric calibration of the multispectral imager (called "absolute calibration" in some literature). Its task is to establish a quantitative relationship between the digital number (DN) output by the detector elements of each spectral channel of the multispectral imager (8 channels in the present invention) and the radiance value in the corresponding field of view.
After the multispectral imager has been radiometrically calibrated and the radiometric correction coefficients obtained, the measured data can be converted according to those coefficients into standard spectral radiometric data, such as a spectral radiance curve, so that the spectral radiation characteristics of the measured target are correctly reflected. Knowing the spectral radiance curve and the spectral characteristics of the illumination source, parameters such as the spectral reflectance of the target can be further calculated.
Radiometric calibration, as previously described, is the establishment of a quantitative relationship between the values of spectral radiance at the imaging spectrometer's entrance pupil and the digitally quantized values of the imaging spectrometer's output at different spectral bands by various standard radiation sources. Therefore, a target light source with known spectral radiation characteristics is generally selected as a measurement object of the multispectral imager, and then the working state and environment of the multispectral imager are simulated.
Considering that the light energy received by the multispectral imager comes mainly from solar radiation reflected by the earth, the light source used for radiometric calibration should simulate the spectral distribution of solar radiation as closely as possible. In a laboratory calibration facility, however, it is very difficult to simulate the solar spectral distribution accurately, so halogen lamps with high colour temperature are generally used as the calibration light source. In addition, for calibrating the focal-plane detector of the multispectral imager, the exit port of an integrating sphere with a relatively large aperture should be used as the light source and should illuminate the entire field of view of the sensor. The spectral radiance distribution of the light source must also be measured with a standard spectroradiometer and used as the reference for radiometric calibration. The calibration set-up is shown in Fig. 3.
In order to establish an analytical expression for radiometric calibration of each spectral channel, assuming that the spectral radiance obtained in a standard field of view of a channel sensor of a multispectral imager is Y (which can be measured by an instrument), the DN value output by the sensor is X, the slope in the radiometric calibration coefficient is a (amplification factor of the instrument, assumed to be linear), and the intercept is B (direct current component or bottom level of the circuit), the relationship between the spectral radiance and the image output DN value is:
Y=B+AX
where the unit of Y is W·cm⁻²·sr⁻¹·nm⁻¹.
By adjusting the radiance of the light source within the dynamic range of the multispectral imager, a set of relational expressions (the radiometric calibration formula) between the spectral radiance value of the measured object and the output DN value of the multispectral imager can be established:
Lj(λi) = ai·DN(j,i) + bi

where Lj(λi) is the radiance of the standard light source at radiance level j and spectral band i (a known quantity); DN(j,i) is the output DN value of the multispectral imager at radiance level j and spectral band i (also known); and ai, bi are the radiometric calibration coefficients of spectral band i, which are to be determined. Once ai and bi have been determined, the formula above serves as the radiometric calibration formula of the multispectral imager; that is, each spectral channel has its own radiometric calibration formula.

Because the index i of the radiometric calibration formula corresponds to a particular spectral measurement channel (i.e. a CCD channel) of the multispectral imager, the determination of ai and bi is carried out within that channel. Varying the radiance level j yields a set of (DN(j,i), Lj(λi)) data pairs, from which the coefficients ai and bi can be determined, for example by a linear least-squares fit.
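The linear fit for one channel can be sketched with `numpy.polyfit`. The DN and radiance values below are hypothetical calibration data, not measurements from the patent:

```python
import numpy as np

# Hypothetical calibration data for one spectral channel i: DN outputs at
# several radiance levels j, and the matching standard-source radiances
# L_j (W*cm^-2*sr^-1*nm^-1) measured with a spectroradiometer.
dn = np.array([120.0, 405.0, 810.0, 1620.0, 3100.0])
radiance = np.array([1.0e-6, 3.4e-6, 6.8e-6, 13.6e-6, 26.0e-6])

# Least-squares fit of L = a*DN + b gives the calibration coefficients.
a_i, b_i = np.polyfit(dn, radiance, deg=1)

def dn_to_radiance(x):
    """Radiometric calibration formula of this channel."""
    return a_i * x + b_i
```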
In some cases the photoelectric response of the CCD is non-linear; a quadratic curve can then be fitted directly to the Lj(λi) and DN(j,i) data and used as the calibration formula:

Y = P1,i + P2,i·X + P3,i·X²
the basic method of radiometric calibration of a multi-spectral imager is described above. The method is generally suitable for the case where the optical system parameters of the imaging spectrometer are fixed, such as radiometric calibration of a satellite remote sensing imaging spectrometer. However, for the multispectral imager of the present invention, the above calibration scheme is not the best option since the size and distance of the target to be measured is not very fixed.
Given that the multispectral imager is mainly used to measure the spectral reflectance or spectral reflectance factor of a target (i.e. reflective targets), a calibration method similar to that of a reflectance spectrophotometer can be adopted.
The radiometric calibration method of a reflectance spectrophotometer is also known as calibration of spectral reflectance and its basic principle is similar to the "comparative colorimetry" method in spectrophotometry.
The comparative colorimetry measures the spectral reflectance or spectral radiance factor of a sample by quantitatively comparing the power of monochromatic radiation reflected at the same wavelength by a "standard" (reference) of some known spectral characteristics with that of the sample.
When measuring reflective samples, a perfect (total) reflection diffuser is normally used as the reference standard; its reflectance is 1 at every wavelength. No real material has this property, however, so a material with properties close to it must be chosen as the working standard. The present invention uses a matte flat white board as the reference; its spectral reflectance can be measured with a standard spectrophotometer.
The colorimetric principle of the comparative colorimetric method is summarized as follows:
Let W(λ) be the instrument response value for the standard white board; W0(λ) the pre-calibrated spectral reflectance of the white board; S(λ) the instrument response value for the sample (target) under test; R(λ) the spectral reflectance of the sample to be calculated; Φ0(λ) the incident radiant flux; ΦS(λ) and ΦW(λ) the reflected fluxes received when measuring the sample and the white board respectively; and k the conversion coefficient of the instrument. Since, by definition, reflectance is the ratio of reflected flux to incident flux, the instrument response values for the sample and white-board measurements can be written as:
S(λ)=kΦS(λ)=kΦ0(λ)R(λ)
W(λ)=kΦW(λ)=kΦ0(λ)W0(λ)
Dividing the two expressions gives the spectral reflectance of the measured sample:

R(λ) = S(λ) / W(λ) · W0(λ)
from the above equation, in the spectral analysis of the measurement image, we only need to obtain the instrument response value W (λ) of the standard plate and the instrument response value S (λ) of the target to be measured, and then can calculate the spectral reflectance R (λ) of the target.
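The comparative-colorimetry formula above is a per-wavelength element-wise ratio, as a small sketch shows; the three-wavelength response values below are hypothetical:

```python
import numpy as np

def spectral_reflectance(S, W, W0):
    """Comparative-colorimetry reflectance: R(lambda) = S/W * W0.

    S  : instrument response of the target at each wavelength
    W  : instrument response of the standard white board
    W0 : pre-calibrated reflectance of the white board
    """
    S, W, W0 = map(np.asarray, (S, W, W0))
    return S / W * W0

# Hypothetical responses at three wavelengths:
S = [200.0, 350.0, 500.0]
W = [800.0, 700.0, 1000.0]
W0 = [0.98, 0.97, 0.96]
R = spectral_reflectance(S, W, W0)
# R[0] = 200/800 * 0.98 = 0.245
```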
From the above analysis, in the actual measurement work of the multispectral imager, in order to obtain the spectral calibration data, a standard white board with known spectral reflectivity or an approximate standard white board is required to exist in the target scene, and the method is feasible in most cases.
As described previously, the raw data obtained by the 8-channel multispectral imager of the invention through its 8 narrow-band filters are spectral response values distributed over eight wavelength bands from visible light to the near infrared (420-940 nm). From these 8 raw values we can only connect a spectral polyline, which cannot truly reflect the spectral characteristics of the object. To perform further spectral analysis and color synthesis of the target scene, a continuous spectral curve is required, so an interpolation algorithm must be used to obtain the spectral response values of the other wavebands and thereby smooth the original spectral polyline containing only 8 data points.
When light strikes an object it may be reflected, absorbed or transmitted; the ratio of reflected flux to incident flux is called the reflectance. According to how light is reflected from an object's surface, reflection can be divided into three cases:

(1) Regular (specular) reflection: the reflected light obeys the law of reflection and emerges in the mirror direction; this part is described by the specular reflectance.

(2) Perfect diffuse reflection: a perfect diffuse reflecting surface reflects all of the incident radiant flux without loss, its reflectance equals 1 and its brightness is the same in all directions.
(3) A combination of regular and diffuse reflection: the reflected flux within a given solid angle depends on both the incident direction and the viewing direction, and is described by the spectral reflection factor. The spectral reflection factor is the ratio of the spectral radiant flux Φs(λ) of wavelength λ reflected by the object in a direction defined by a given solid angle under specified illumination conditions to the spectral radiant flux Φn(λ) of the same wavelength reflected by a perfect diffuse reflecting surface under the same conditions. When the solid angle ω approaches 2π, the measured spectral reflection factor is the spectral reflectance ρ(λ); if spectral radiance is measured instead, the result is the spectral radiance factor β(λ).
The spectral reflectance, the spectral reflectance factor and the spectral radiance factor can all reflect the characteristics of selective reflection of an object on an incident spectrum, and only the geometrical conditions are different during measurement. The color characteristic of the reflecting surface can be calculated from its spectral reflection characteristic, so that the measurement of the reflection characteristic of an object is of great importance in colorimetry.
When the reflectance of an object's surface does not vary with the thickness of the surface layer, the reflectance is also called reflectivity. Reflectivity is related to the physical structure and chemical composition of the object's surface. Reflectance and reflectivity are not strictly distinguished herein and are both referred to as reflectance.
Whether measurement is made by the human eye or by an instrument, the light source, the sample and the detector (or the eye) are the three factors that determine the final measurement result. The relationship among the three is called the illuminating and receiving geometric conditions, or geometric conditions for short. Inconsistent geometry may cause differences between measurements. To unify measurement results, CIE 15:2004 (third edition) recommends 10 geometric conditions for reflectance samples:
(1) di:8° (diffuse illumination, 8° directional viewing, specular component included)
(2) de:8° (diffuse illumination, 8° directional viewing, specular component excluded)
(3) 8°:di (8° directional illumination, diffuse viewing, specular component included)
(4) 8°:de (8° directional illumination, diffuse viewing, specular component excluded)
(5) d:d (diffuse illumination, diffuse viewing)
(6) d:0° (diffuse illumination, 0° directional viewing, specular component excluded)
(7) 45°a:0° (45° annular illumination, 0° directional viewing)
(8) 0°:45°a (0° directional illumination, 45° annular viewing)
(9) 45°x:0° (45° directional illumination, 0° directional viewing)
(10) 0°:45°x (0° directional illumination, 45° directional viewing)
As can be seen from the above, when the multispectral imager of the invention is operating, its optical system receives the light radiated by a distant target and the aperture angle is small (the target is far from the lens). The receiving geometry is therefore close to normal incidence and the illumination is diffuse daylight, i.e. close to the CIE-recommended d:0° geometric condition. Its measurement result should strictly be called a spectral radiance factor, although in some literature it is simply referred to as spectral reflectance.
A problem that often arises in practice is the following: given a set of discrete sample points, a smooth curve through these points is wanted in order to meet design requirements or to carry out further processing. Summarised mathematically: knowing the values of a function at certain points, an analytical expression for it is sought. This is exactly the interpolation problem of the spectral data cube that the multispectral imager of the invention needs to solve.
In existing numerical analysis there are two main classes of methods for this problem. One, called interpolation, takes some sample values of a function f(x), chooses an easily computed functional form such as a polynomial, a rational (fractional linear) function or a trigonometric polynomial, and requires it to pass through the known sample points, thereby determining a function φ(x) as an approximation of f(x). The other class, the curve (data) fitting methods, chooses the form of the approximating function but does not require it to pass through the known sample points; it only requires that the total deviation over these points be minimal in some sense. Interpolation algorithms commonly used in numerical analysis include Lagrange interpolation, Newton interpolation, piecewise linear interpolation, Hermite interpolation and spline interpolation (Spline). Spline interpolation yields a smooth curve, which matches the actual situation of the multispectral imager, so it is the interpolation algorithm selected by the invention.
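The distinction between the two classes can be demonstrated in a few lines (sample data are ours): an interpolant reproduces every sample exactly, while a low-degree least-squares fit generally does not:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 0.15, 0.85, 0.2])

# Interpolation: the cubic spline passes through every sample exactly.
spline = CubicSpline(x, y)
assert np.allclose(spline(x), y)

# Fitting: a degree-2 least-squares polynomial does not pass through the
# samples; it only minimises the total squared deviation over them.
coeffs = np.polyfit(x, y, deg=2)
fit = np.polyval(coeffs, x)
residual = np.sum((fit - y) ** 2)   # strictly positive here
```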
The eight-channel multispectral imaging data processing method provided by the invention offers complete and comprehensive data processing steps, gives very detailed and specific guidance to technical personnel, and selects the appropriate method according to the characteristics of eight-channel multispectral imaging data, thereby achieving good results.
It should be understood that equivalents and modifications of the technical solution and inventive concept thereof may occur to those skilled in the art, and all such modifications and alterations should fall within the scope of the appended claims.
Claims (2)
1. An eight-channel multispectral imaging data processing method is characterized by comprising the following steps:
step one, multispectral image preprocessing;
step two, multispectral image pre-registration;
step three, wavelength calibration;
step four, radiometric calibration;
step five, spectral curve construction;
step six, multispectral image color calculation;
step seven, multispectral image feature extraction;
step eight, spectrum and color matching.
2. An eight-channel multispectral imaging data processing method is characterized by comprising the following steps:
step one, multispectral image preprocessing, including spectral image noise removal, CCD uniformity correction, CCD non-linear correction and the like;

the spectral image noise removal removes salt-and-pepper noise by using a temporal (time-domain) averaging method and a median filtering method;
the CCD uniformity correction is implemented by imaging a plane target with uniform brightness distribution, calculating and extracting uniformity correction coefficients of all pixels on the surface of the CCD according to response values of all pixels of the CCD, and storing the coefficients into a calibration file of a software system;
the CCD nonlinear correction is realized by measuring targets with different brightness levels and then acquiring a nonlinear correction coefficient by adopting curve fitting or other mathematical methods according to response data of each brightness level;
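The noise-removal and uniformity-correction steps above can be sketched as follows; the function, the 3×3 filter size and the sample data are our assumptions, not specified by the claim:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(frame, flat_coeff):
    """Sketch of step one for a single channel image: de-noise, then flat-field.

    A 3x3 median filter suppresses salt-and-pepper noise; `flat_coeff` is a
    per-pixel uniformity-correction map derived from imaging a uniform flat
    target (e.g. mean response / per-pixel response), as stored in the
    calibration file of the software system.
    """
    denoised = median_filter(frame, size=3)
    return denoised * flat_coeff

# A single hot pixel (salt noise) is removed by the median filter:
frame = np.ones((5, 5))
frame[2, 2] = 255.0                      # salt noise
flat = np.ones((5, 5))                   # ideal (already uniform) CCD
clean = preprocess(frame, flat)
```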
step two, multispectral image pre-registration, including image edge extraction and image transformation according to the feature points after edge extraction;
the image edge extraction is carried out by the following method:
for a two-dimensional image signal, smoothing is performed using the following Gaussian function:

G(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²))

G(x, y, σ) is a circularly symmetric function whose degree of smoothing is controlled by σ; since the smoothing is linear, let g(x, y) be the smoothed image, giving g(x, y) = G(x, y, σ) * f(x, y), where f(x, y) is the image before smoothing and * denotes convolution;

since an edge point is a place where the gray value of the image changes drastically, such an abrupt change in image intensity produces a peak in the first derivative, or equivalently a zero crossing in the second derivative; the second derivative along the gradient direction is a non-linear operator, so the Laplacian operator is used instead, and the zero crossings of

∇²g(x, y) = ∇²G(x, y, σ) * f(x, y)

are taken as edge points, where ∇²G is the LoG (Laplacian of Gaussian) filter,

∇²G(x, y, σ) = (1/(πσ⁴))·((x² + y²)/(2σ²) − 1)·exp(−(x² + y²)/(2σ²));

the LoG operator approximates the spatial organization of the receptive field of retinal ganglion cells, which can be regarded as an excitatory central area and an inhibitory peripheral area; it is applied as a template of size about 6√2σ × 6√2σ, and different values of σ allow image edges to be detected at different scales, usually with σ ≥ 1; the LoG edge detection operator is used to perform edge detection on the image, i.e. to extract the image edges;
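A hedged sketch of LoG edge extraction using SciPy's `gaussian_laplace`; the zero-crossing test and the σ value are our simplifications of the scheme described above:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=1.5):
    """Edge map from zero crossings of the Laplacian-of-Gaussian response."""
    response = gaussian_laplace(image.astype(float), sigma=sigma)
    sign = response > 0
    # a pixel is marked as an edge where the LoG response changes sign
    # between it and its right or lower neighbour
    zero_cross = np.zeros_like(sign)
    zero_cross[:-1, :] |= sign[:-1, :] != sign[1:, :]
    zero_cross[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return zero_cross

# A bright square on a dark background yields edge points around its border,
# while the flat corner regions stay empty.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = log_edges(img)
```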
the image transformation uses a rigid transformation, with the formula:

x′ = k(x·cosθ − y·sinθ) + Δx
y′ = k(x·sinθ + y·cosθ) + Δy

in the formula: (x, y) is a point of the first image, namely the reference image, which corresponds to (x′, y′) in the second image after transformation; k, θ, Δx and Δy are respectively the scale factor, the rotation factor and the coordinate translation amounts of the first and second images, and these parameters are obtained by manual operation;
step three, wavelength calibration: selecting the cubic spline interpolation method, establishing a spline curve through the known wavelength-reflectance data points, one point per optical filter, i.e. a spectral reflectance function curve, and, according to the obtained function curve, dividing the interval between every two known wavelengths with a uniform step size to obtain the spectral reflectance data of the other wavelengths;
step four, radiation calibration, namely establishing quantitative relations between spectral radiance values at the entrance pupil of the imaging spectrometer and digital quantization values output by the imaging spectrometer in different spectral bands through various standard radiation sources, selecting a target light source with known spectral radiance characteristics as a measuring object of the multispectral imager, and then simulating the working state and environment of the multispectral imager;
step five, spectral curve construction: a cubic spline interpolation function is adopted to establish the spectral reflectance curve;
step six, multispectral image color calculation, including calculating the tristimulus values of the image colors, calculating the RGB of the color image, calculating the color difference of the image and calculating the near-infrared radiance factor β value;
step seven, multispectral image feature extraction, including feature selection and feature extraction;
step eight, spectrum and color matching, including color matching based on tristimulus values, color matching based on spectra, and spectrum matching.
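Steps three to five of the claim can be combined into a minimal per-pixel sketch: white-board referencing turns 8 DN values into 8 reflectances, and a cubic spline turns those into a continuous curve. The filter wavelengths and all numeric values below are hypothetical, and registration, color computation and matching are elided:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Known centre wavelengths of the 8 filters (hypothetical values, nm).
WL = np.array([420, 490, 560, 630, 700, 780, 860, 940], dtype=float)

def pixel_spectrum(dn8, white8, white_refl8):
    """Turn 8 raw DN values of one pixel into a continuous reflectance curve.

    dn8         : 8-channel DN values of the target pixel
    white8      : 8-channel DN values of the standard white board in the scene
    white_refl8 : pre-calibrated reflectance of the white board per channel
    """
    # comparative colorimetry: R_i = (S_i / W_i) * W0_i, per channel
    refl8 = np.asarray(dn8) / np.asarray(white8) * np.asarray(white_refl8)
    # cubic spline through the 8 points gives the spectral reflectance curve
    return CubicSpline(WL, refl8)

curve = pixel_spectrum(
    dn8=[100, 160, 240, 200, 700, 1040, 1100, 1120],
    white8=[2000] * 8,
    white_refl8=[0.98] * 8,
)
r_650 = curve(650.0)   # reflectance at an interpolated wavelength
```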
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410539191.9A CN104318550A (en) | 2014-09-27 | 2014-09-27 | Eight-channel multi-spectral imaging data processing method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN104318550A true CN104318550A (en) | 2015-01-28 |