Disclosure of Invention
To address the above technical problems, the invention provides a self-adaptive white balance method based on scene perception and multi-feature fusion, particularly suitable for real-time color correction in scenarios such as surveillance cameras and mobile terminals.
The invention is realized by adopting the following technical scheme:
A self-adaptive white balance method based on scene perception and multi-feature fusion comprises the following steps:
Step S1, inputting an RGB image for multi-feature calculation, and screening out effective image blocks that satisfy the saturation range, brightness range, color variance threshold and edge density threshold conditions through dynamic thresholds;
Step S2, if the number of effective blocks is below a preset threshold, starting a parameter dynamic adjustment mechanism to perform secondary block screening;
Step S3, if the number of effective blocks is still insufficient after the secondary screening of step S2, activating a standby strategy to perform multi-feature fusion correction and output a white-balanced image; once sufficient effective blocks are available, transferring to the main strategy and outputting the white-balanced image through gain calculation and constraint.
Specifically, the step S1 includes the following substeps:
step S11, dividing the input image into 32×32-pixel blocks, the number of image blocks being:
rows = ⌊M / B⌋;
cols = ⌊N / B⌋;
wherein M and N represent the height and width of the image, respectively, and B = 32 represents the size of an image block;
step S12, converting the RGB color model into an HSV color model, and calculating saturation;
step S13, calculating the brightness of each block, sorting, and selecting according to the set brightness range, wherein the brightness of each block is calculated as:
block(i, j) = img[i·B : (i+1)·B, j·B : (j+1)·B];
L(i, j) = (1/P) · Σₚ₌₁..P block(i, j)ₚ;
The luminance ordering is expressed as:
v = vec(L);
(v_sorted, idx) = sort(v);
where i denotes the row index of the pixel block, j denotes the column index of the pixel block, img[·] denotes the image array, block(i, j) represents the extracted image block, L(i, j) represents its luminance value, P represents the number of pixels in a block, p represents the pixel index, v represents the luminance vector, vec(·) represents vectorization, v_sorted represents the ordered luminances, idx represents the luminance ordering index, and sort(·) represents the sorting function;
step S14, calculating color variance and measuring the change degree of pixel colors in the image;
step S15, calculating the edge density, which describes the density of edge information in the image, expressed as:
ED = (1/(M·N)) · Σₓ,ᵧ 1[G(x, y) > T];
wherein G(x, y) represents the gradient magnitude at each point (x, y), T is a binarization threshold, and the formula is:
G(x, y) = √(Gx(x, y)² + Gy(x, y)²);
wherein Gx and Gy are the gradients in the horizontal and vertical directions at each pixel point (x, y), obtained by convolving the image I with Sobel kernels:
Gx = I ∗ [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]];
Gy = I ∗ [[−1, −2, −1], [0, 0, 0], [1, 2, 1]];
Specifically, the saturation calculation of step S12 includes the following sub-steps:
step A1, first normalizing each channel to the [0, 1] range: R′ = R/255, G′ = G/255, B′ = B/255;
step A2, calculating the maximum and minimum of the three channel values, respectively expressed as:
Cmax = max(R′, G′, B′);
Cmin = min(R′, G′, B′), Δ = Cmax − Cmin;
step A3, calculating the hue H according to the channel holding the maximum value, expressing the angle in degrees:
If Cmax = R′, then: H′ = ((G′ − B′)/Δ) mod 6;
If Cmax = G′, then: H′ = (B′ − R′)/Δ + 2;
If Cmax = B′, then: H′ = (R′ − G′)/Δ + 4;
H′ is converted to the degree range [0, 360): H = 60° × H′;
When Δ = 0, i.e., the image color is a gray level, H is not meaningful and is set to 0 or another specific value;
step A4, calculating the saturation, wherein the calculation formula is:
S = Δ/Cmax (S = 0 when Cmax = 0);
step A5, calculating the value component, wherein in the HSV model the value represents the brightness or maximum component value of the color:
V = Cmax.
Specifically, the color variance calculation of step S14 includes:
the input image block consists of m×n pixels, the color of each pixel being represented by an RGB triplet; the color variance of the block is, for each channel, the mean of the squared differences between each pixel value and the channel mean, expressed as:
μ_R = (1/(m·n)) · Σᵢ Σⱼ R(i, j);
μ_G = (1/(m·n)) · Σᵢ Σⱼ G(i, j);
μ_B = (1/(m·n)) · Σᵢ Σⱼ B(i, j);
σ²_R = (1/(m·n)) · Σᵢ Σⱼ (R(i, j) − μ_R)²;
σ²_G = (1/(m·n)) · Σᵢ Σⱼ (G(i, j) − μ_G)²;
σ²_B = (1/(m·n)) · Σᵢ Σⱼ (B(i, j) − μ_B)²;
wherein μ_R, μ_G and μ_B are respectively the means of the three color channels of the image block, and σ²_R, σ²_G and σ²_B are respectively the variances of the three color channels.
Specifically, the step S2 parameter dynamic adjustment mechanism specifically includes:
the saturation lower limit is lowered according to the update formula, iteratively expanding the admissible saturation range;
the brightness upper limit is contracted;
after the parameters are adjusted, the screening is performed again.
Specifically, the standby-strategy multi-feature fusion correction includes:
edge density priority: texture-rich regions are extracted through Canny edge detection with a set threshold;
brightness extremum sampling: the blocks in the top 5% by brightness are selected as candidates;
color diversity weighting: the HSV spatial distribution entropy is calculated and the top 20% most diverse blocks are selected;
the three cues are weighted [0.4, 0.3, 0.3], and the final reference white point is output through a weighted median.
Specifically, the gain calculation and constraint are based on the average color values output by the main or standby strategy, while a dynamic constraint limits the gains to a fixed interval; the gain calculation formula is expressed as:
g_c = K̄ / C̄_c, c ∈ {R, G, B};
wherein C̄_R, C̄_G and C̄_B are the average values of the three channels and K̄ = (C̄_R + C̄_G + C̄_B)/3.
The method has the following advantages: it resolves the color cast problem when white/gray areas are lacking in an image; it realizes automatic parameter adjustment under severe illumination changes; it suppresses noise caused by gain amplification in low-illumination scenes; it reduces the hardware cost of low-end devices through parallel block processing; and it improves accuracy in monochromatic and mixed scenes, thereby improving the camera's color restoration accuracy and imaging quality.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Some embodiments of the present invention are described in detail below with reference to fig. 1. The following embodiments and features of the embodiments may be combined with each other without conflict.
The invention provides a self-adaptive white balance method based on scene perception and multi-feature fusion, which is shown in fig. 1 and comprises the following steps:
Step S1, inputting an RGB image for multi-feature calculation, and screening out effective image blocks that satisfy the saturation range, brightness range, color variance threshold and edge density threshold conditions through dynamic thresholds;
Step S2, if the number of effective blocks is below a preset threshold, starting a parameter dynamic adjustment mechanism to perform secondary block screening;
Step S3, if the number of effective blocks is still insufficient after the secondary screening of step S2, activating a standby strategy to perform multi-feature fusion correction and output a white-balanced image; once sufficient effective blocks are available, transferring to the main strategy and outputting the white-balanced image through gain calculation and constraint.
The following describes the detailed technical scheme of each step by using specific embodiments:
1. Multi-feature calculation: effective image blocks satisfying the saturation range, brightness range, color variance threshold, edge density threshold and other conditions are screened out through dynamic thresholds.
1) Image blocking
Dividing the input image into 32×32-pixel blocks, the number of image blocks is calculated as:
rows = ⌊M / 32⌋, cols = ⌊N / 32⌋;
where M and N represent the height and width of the image, respectively.
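As a minimal sketch of this blocking step (in Python with NumPy; the function name is illustrative, not from the source):

```python
import numpy as np

def split_blocks(img, block=32):
    """Divide an image into non-overlapping block x block tiles.

    rows = floor(M / block), cols = floor(N / block); edge remainders
    that do not fill a whole tile are dropped.
    """
    M, N = img.shape[:2]
    rows, cols = M // block, N // block
    return [img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            for i in range(rows) for j in range(cols)]
```

For a 480×640 image this yields 15 × 20 = 300 tiles, each processable independently (which is what enables the parallel block processing mentioned later).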
2) Saturation calculation
The mathematical formulas for converting the RGB color model into the HSV (Hue, Saturation, Value) color model are as follows:
a) Normalize to the [0, 1] range:
R′ = R/255, G′ = G/255, B′ = B/255.
b) Calculate the three-channel color maximum and minimum:
Cmax = max(R′, G′, B′);
Cmin = min(R′, G′, B′), Δ = Cmax − Cmin.
c) Calculate Hue (Hue)
Calculate the hue H according to the channel holding the maximum value, expressing the angle in degrees:
If Cmax = R′, then: H′ = ((G′ − B′)/Δ) mod 6;
If Cmax = G′, then: H′ = (B′ − R′)/Δ + 2;
If Cmax = B′, then: H′ = (R′ − G′)/Δ + 4;
Convert H′ to the degree range [0, 360): H = 60° × H′;
When Δ = 0, i.e., the image color is a gray level, H has no meaning and is typically set to 0 or another specific value.
d) Calculate Saturation (Saturation)
The saturation S can be defined as:
S = Δ/Cmax (S = 0 when Cmax = 0).
e) Calculate brightness/Value (Value)
In the HSV model, V generally represents the brightness or maximum component value of a color:
V = Cmax;
Here the saturation range is initially set to [0.1, 0.25], excluding disturbances from oversaturation (e.g., highlight regions) and low saturation (e.g., gray regions).
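The conversion in a)–e) can be sketched as a vectorized NumPy routine (function and variable names are illustrative):

```python
import numpy as np

def rgb_to_hsv_components(rgb):
    """Per-pixel H (degrees in [0, 360)), S, V from an RGB array in [0, 255]."""
    rgbn = rgb.astype(np.float64) / 255.0          # a) normalize to [0, 1]
    cmax = rgbn.max(axis=-1)                       # b) channel max / min
    cmin = rgbn.min(axis=-1)
    delta = cmax - cmin
    r, g, b = rgbn[..., 0], rgbn[..., 1], rgbn[..., 2]
    h = np.zeros_like(cmax)                        # c) hue: gray pixels stay 0
    nz = delta > 0
    rmax = nz & (cmax == r)
    gmax = nz & (cmax == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    h[gmax] = (b - r)[gmax] / delta[gmax] + 2
    h[bmax] = (r - g)[bmax] / delta[bmax] + 4
    h *= 60.0                                      # convert to degrees
    s = np.where(cmax > 0, delta / np.maximum(cmax, 1e-12), 0.0)  # d) saturation
    v = cmax                                       # e) value = max component
    return h, s, v
```

A block then passes the saturation screen when its mean S falls in the initial [0.1, 0.25] window.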
3) Brightness calculation
Calculating the brightness of each block:
L(i, j) = (1/P) · Σₚ₌₁..P block(i, j)ₚ;
Sorting the luminance values:
v_sorted = sort(vec(L));
The brightness range is set to [0.15, 0.9], avoiding dark or overexposed areas.
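The per-block luminance, sorting, and range selection above can be sketched as (threshold values taken from the text; the helper name is illustrative):

```python
import numpy as np

def luminance_filter(blocks, lo=0.15, hi=0.9):
    """Mean luminance per block (normalized to [0, 1]), sorted, then range-filtered.

    Returns the kept blocks (in ascending luminance order) and the raw
    luminance vector, mirroring the sort-then-select description.
    """
    lum = np.array([b.mean() / 255.0 for b in blocks])
    order = np.argsort(lum)                 # luminance ordering index
    keep = [blocks[k] for k in order if lo <= lum[k] <= hi]
    return keep, lum
```

Blocks darker than 0.15 or brighter than 0.9 are discarded before the later variance and edge checks.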
4) Color variance calculation
The color variance is used to measure the degree of change in the color of pixels in an image. The input image block consists of m×n pixels, the color of each pixel being representable by an RGB triplet; the color variance of the block is, for each channel, the mean of the squared differences between each pixel value and the channel mean:
μ_R = (1/(m·n)) · Σᵢ Σⱼ R(i, j);
μ_G = (1/(m·n)) · Σᵢ Σⱼ G(i, j);
μ_B = (1/(m·n)) · Σᵢ Σⱼ B(i, j);
σ²_R = (1/(m·n)) · Σᵢ Σⱼ (R(i, j) − μ_R)²;
σ²_G = (1/(m·n)) · Σᵢ Σⱼ (G(i, j) − μ_G)²;
σ²_B = (1/(m·n)) · Σᵢ Σⱼ (B(i, j) − μ_B)²;
wherein μ_R, μ_G and μ_B are respectively the means of the three color channels of the image block, and σ²_R, σ²_G and σ²_B are respectively the variances of the three color channels.
The variance of each RGB channel in a block is required to exceed 0.02, ensuring color diversity and screening out candidate regions that may represent neutral colors.
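The per-channel variance check can be sketched as (values assumed normalized to [0, 1] so the 0.02 threshold applies; function names are illustrative):

```python
import numpy as np

def channel_variances(block):
    """Per-channel mean and variance over an m x n RGB block (values in [0, 1])."""
    pixels = block.reshape(-1, 3)
    means = pixels.mean(axis=0)
    vars_ = ((pixels - means) ** 2).mean(axis=0)
    return means, vars_

def passes_variance_test(block, thresh=0.02):
    """Require every RGB channel variance to exceed the threshold (color diversity)."""
    _, v = channel_variances(block)
    return bool((v > thresh).all())
```

A perfectly uniform block has zero variance on every channel and is rejected, which is exactly the behavior wanted for filtering out flat monochrome regions.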
5) Edge density calculation
Edge density is typically used to describe how dense the edge information in an image is. Edges can be detected with gradient operators (e.g., Sobel or Prewitt), after which the number of edge pixels is divided by the total number of pixels:
ED = (1/(M·N)) · Σₓ,ᵧ 1[G(x, y) > T];
Calculate the gradients in the horizontal and vertical directions at each pixel point (x, y) by convolving the image I with the Sobel kernels:
Gx = I ∗ [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]];
Gy = I ∗ [[−1, −2, −1], [0, 0, 0], [1, 2, 1]];
The gradient magnitude at each point (x, y) is:
G(x, y) = √(Gx(x, y)² + Gy(x, y)²);
Here the edge density threshold is taken to be 0.3.
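A small, unoptimized sketch of the Sobel-based computation (the explicit loop is for clarity; note the per-pixel gradient threshold `grad_thresh` is an illustrative assumption, distinct from the 0.3 threshold the text applies to the resulting density):

```python
import numpy as np

def edge_density(gray, grad_thresh=0.3):
    """Fraction of pixels whose Sobel gradient magnitude exceeds `grad_thresh`.

    `gray` is a 2-D array in [0, 1]; the one-pixel border is excluded.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal Sobel
    ky = kx.T                                                   # vertical Sobel
    M, N = gray.shape
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            win = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.sqrt(gx ** 2 + gy ** 2)            # G = sqrt(Gx^2 + Gy^2)
    interior = mag[1:-1, 1:-1]
    return float((interior > grad_thresh).mean())
```

A flat block scores 0; a block with a sharp vertical step scores high, so the density screen favors textured regions.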
2. Dynamic parameter adjustment
When the number of valid blocks is insufficient (e.g., fewer than 5 blocks), the parameter dynamic adjustment mechanism is started:
the saturation lower limit is lowered according to the update formula, iteratively expanding the admissible saturation range;
the brightness upper limit is contracted;
Rescreening after adjusting the parameters solves the poor scene adaptability caused by a single fixed threshold, for example in scenes where shadows and highlights coexist.
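The source does not reproduce the exact update formula, so the linear step below is purely an illustrative assumption of the iterative adjustment (saturation lower limit expanded downwards, brightness upper limit contracted):

```python
def adjust_thresholds(params, step=0.05, rounds=3):
    """Illustrative iterative relaxation; step size and linearity are assumptions,
    not the patent's actual update formula."""
    s_lo, s_hi = params["sat"]
    l_lo, l_hi = params["lum"]
    for _ in range(rounds):
        s_lo = max(0.0, s_lo - step)    # expand the saturation range downwards
        l_hi = max(l_lo, l_hi - step)   # contract the brightness upper limit
    return {"sat": (s_lo, s_hi), "lum": (l_lo, l_hi)}
```

After each round the block screening would be re-run, stopping early once enough valid blocks are found.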
3. Standby policy activation
When the secondary screening still fails to satisfy the requirement, the multi-feature fusion strategy is adopted directly:
edge density priority: texture-rich regions are extracted through Canny edge detection (threshold 0.1), exploiting the fact that edge regions often contain neutral colors;
brightness extremum sampling: the blocks in the top 5% by brightness are selected as candidates, addressing color distortion in highlight areas;
color diversity weighting: the HSV spatial distribution entropy is calculated and the top 20% most diverse blocks are selected, avoiding misjudgment in monochromatic scenes;
the three cues are weighted [0.4, 0.3, 0.3], and the final reference white point is output through a weighted median.
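Two of the building blocks above, the distribution entropy and the weighted median, can be sketched as follows (the 12-bin hue histogram and helper names are assumptions for illustration):

```python
import numpy as np

def hue_entropy(h_deg, bins=12):
    """Shannon entropy of a hue histogram: H = -sum p_i * log2(p_i).

    Higher entropy means the block's colors are spread over more hue bins,
    i.e. stronger color diversity.
    """
    hist, _ = np.histogram(h_deg, bins=bins, range=(0, 360))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return float(v[np.searchsorted(cum, 0.5 * cum[-1])])
```

With per-cue weights [0.4, 0.3, 0.3], the weighted median of the candidate white-point estimates resists a single outlying cue better than a weighted mean would.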
4. Gain calculation and constraint
Average color value based on the main or standby strategy output:
Gain formula:
g_c = K̄ / C̄_c, c ∈ {R, G, B};
wherein C̄_R, C̄_G and C̄_B are the channel averages and K̄ = (C̄_R + C̄_G + C̄_B)/3;
Dynamic constraint: the gains are limited to the interval [0.7, 1.3], preventing overcorrection at extreme color temperatures;
Correction is realized through channel-separated calculation and matrix multiplication, ensuring natural color transitions.
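Under a gray-world style reading of the gain computation (an interpretation, since the source's equation image is not reproduced here), the gain and [0.7, 1.3] constraint can be sketched as:

```python
import numpy as np

def white_balance_gains(ref_rgb, lo=0.7, hi=1.3):
    """Gains from a reference white point: g_c = mean(ref) / ref_c,
    clipped to [lo, hi] to avoid overcorrection at extreme color temperatures."""
    ref = np.asarray(ref_rgb, dtype=float)
    gains = ref.mean() / ref
    return np.clip(gains, lo, hi)

def apply_gains(img, gains):
    """Channel-separated multiply; output is clipped back into [0, 255]."""
    return np.clip(img.astype(float) * gains, 0, 255)
```

A neutral reference yields unit gains; a strongly tinted reference hits the clip bounds, which is the intended safeguard against overcorrection.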
For the key mechanisms employed in this solution, the advantages are as follows:
1) Dynamic parameter adjustment mechanism
By iteratively adjusting the saturation/brightness threshold ranges, it solves the poor scene adaptability caused by fixed thresholds;
Based on image information entropy theory, when effective blocks are insufficient, the constraints are gradually relaxed to capture potential neutral-color regions;
Compared with other methods, the parameter adjustment amplitude is dynamically related to the image characteristics.
2) Multi-feature fusion standby strategy
Edge density detection: high-texture regions are extracted based on the Canny operator, exploiting the fact that edge regions usually contain neutral-color features (e.g., object contours);
Color diversity weighting: HSV spatial distribution entropy is introduced, with the formula:
H = −Σᵢ pᵢ · log₂ pᵢ;
wherein pᵢ is the distribution probability of each color bin; the higher the entropy value, the stronger the color diversity;
The advantage of this scheme is that physical and statistical characteristics are fused, giving stronger robustness.
3) Gain constraint and weighted median
Gain limitation, namely preventing color overflow under extreme color temperature through [0.7,1.3] interval constraint;
weighted median calculation, namely, weight distribution (edge 0.4/brightness 0.3/diversity 0.3) is adopted for the fused candidate blocks.
The scheme can be extended according to the actual application, for example:
dynamic parameter adjustment formula extension, including multiple adjustment modes such as linear contraction or exponential decay;
adding color temperature curve matching or neural network prediction as a standby strategy;
gain constraint range extension, i.e., dynamically adjusting the constraint interval according to the image contrast (e.g., relaxing to [0.5, 1.5] for low-contrast scenes);
block screening alternatives, i.e., employing non-uniform blocks (e.g., adaptive meshing based on edge density).
For the foregoing embodiments, a series of combinations of actions are described for simplicity of description, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, it should be understood by those skilled in the art that the embodiments described in the specification are preferred embodiments and that the actions involved are not necessarily required for the present application.
In the above embodiments, the basic principles, main features and advantages of the present invention are described. It will be appreciated by persons skilled in the art that the present invention is not limited by the foregoing embodiments, which merely illustrate its principles; modifications and changes can be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and changes fall within the scope of the appended claims.