
CN120730046A - An adaptive white balance method based on scene perception and multi-feature fusion - Google Patents

An adaptive white balance method based on scene perception and multi-feature fusion

Info

Publication number
CN120730046A
Authority
CN
China
Prior art keywords
image
color
brightness
feature fusion
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511098364.2A
Other languages
Chinese (zh)
Inventor
刘军
鲜燚
方凯
张强
佘培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Guoyi Electronic Technology Co ltd
Original Assignee
Chengdu Guoyi Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Guoyi Electronic Technology Co ltd filed Critical Chengdu Guoyi Electronic Technology Co ltd
Priority to CN202511098364.2A priority Critical patent/CN120730046A/en
Publication of CN120730046A publication Critical patent/CN120730046A/en
Pending legal-status Critical Current

Landscapes

  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an adaptive white balance method based on scene perception and multi-feature fusion, comprising: performing multi-feature calculation on an input RGB image, screening out valid image blocks that meet conditions of a saturation range, a brightness range, a color variance threshold, and an edge density threshold through a dynamic threshold; if the number of valid blocks is lower than a preset threshold, starting a dynamic parameter adjustment mechanism to perform secondary block screening; if the number of valid blocks is still insufficient after the secondary block screening, activating a backup strategy to perform multi-feature fusion correction and output a white balanced image; after sufficient valid blocks are obtained, switching to a main strategy for processing, and outputting a white balanced image through gain calculation and constraint; this solution solves the color cast problem when there is a lack of white/gray areas in the image, realizes automatic parameter adjustment under drastic changes in lighting conditions, and can suppress noise caused by gain amplification in low-light scenarios.

Description

Self-adaptive white balance method based on scene perception and multi-feature fusion
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to a self-adaptive white balance method based on scene perception and multi-feature fusion.
Background
At present, image white balance is handled by several commonly used schemes, each with its own drawbacks:
The gray-world method assumes that the average color of the whole image approaches gray and realizes white balance by adjusting the gain of each channel; its drawback is that it depends on rich colors and fails in monochromatic or low-saturation scenes;
The perfect-reflection method assumes that the brightest area in the image is white and estimates the color temperature of the light source from the color of the highlight region; its drawbacks are that highlights easily overflow or are disturbed by artificial light sources, and night performance is poor;
The color-card-based method performs color calibration with a preset color chart; because it relies on a physical color card, it cannot adapt to dynamic scenes.
Improved versions of these methods have higher complexity, still do not handle complex scenes such as mixed light sources, and are difficult to adapt to both monochromatic and complex scenes. The existing technical schemes therefore have the following disadvantages:
1. Poor scene adaptability: existing methods assume that a neutral-color area exists in the image and cannot handle large monochromatic objects (such as red walls or green vegetation) or extreme illumination (such as low night-time illumination);
2. Parameter solidification: traditional algorithms use fixed thresholds that cannot be adjusted dynamically to day/night illumination changes;
3. Noise sensitivity: dark-area noise is easily amplified during gain correction, degrading image quality.
Disclosure of Invention
Aiming at the technical problems, the invention provides a self-adaptive white balance method based on scene perception and multi-feature fusion, which is particularly suitable for real-time color correction in scenes such as monitoring cameras, mobile terminals and the like.
The invention is realized by adopting the following technical scheme:
A self-adaptive white balance method based on scene perception and multi-feature fusion comprises the following steps:
Step S1, inputting an RGB image for multi-feature calculation, and screening out effective image blocks that meet the conditions of a saturation range, a brightness range, a color variance threshold and an edge density threshold through dynamic thresholds;
Step S2, if the number of effective blocks is lower than a preset threshold, starting the parameter dynamic adjustment mechanism to carry out secondary block screening;
Step S3, if the number of effective blocks is still insufficient after the secondary block screening of step S2, activating the standby strategy to perform multi-feature fusion correction and output a white-balanced image; once there are sufficient effective blocks, switching to the main strategy for processing and outputting the white-balanced image through gain calculation and constraint.
Specifically, the step S1 includes the following substeps:
step S11, dividing an input image into 32×32 pixel blocks, wherein the number of image blocks is:
N_rows = ⌈M / B⌉;
N_cols = ⌈N / B⌉, N_blocks = N_rows × N_cols;
wherein M and N represent the height and width of the image, respectively, and B = 32 represents the size of an image block;
step S12, converting the RGB color model into an HSV color model, and calculating saturation;
step S13, calculating the brightness of each block, sorting the blocks, and selecting according to the set brightness range, wherein the brightness of each block is calculated as:
block(i, j) = img[i·B : (i+1)·B, j·B : (j+1)·B];
L(i, j) = (1 / P) · Σ_{p=1..P} V_p;
The luminance ordering is expressed as:
L_vec = vec(L);
(L_sorted, idx) = sort(L_vec);
where i denotes the row index of the pixel block, j denotes the column index of the pixel block, img[ ] denotes the image range, block(i, j) represents the extracted image block, L(i, j) represents the luminance value, P represents the number of pixels in a block, p represents the pixel index, V_p represents the brightness of the p-th pixel, L_vec represents the luminance vector, vec( ) represents vectorization, L_sorted represents the brightness ordering, idx represents the luminance ordering index, and sort( ) represents the ranking function, sorting from largest to smallest;
step S14, calculating color variance and measuring the change degree of pixel colors in the image;
step S15, calculating the edge density, which describes how dense the edge information in the image is, expressed as:
ρ_edge = N_edge / N_total;
wherein N_edge is the number of pixels whose gradient magnitude G(x, y) exceeds a threshold T, N_total is the total number of pixels, and G(x, y) represents the gradient magnitude at each point (x, y), calculated as:
G(x, y) = √(G_x(x, y)² + G_y(x, y)²);
wherein G_x and G_y represent the gradients in the horizontal and vertical directions at each pixel point (x, y), obtained with a gradient operator such as Sobel, expressed as:
G_x = S_x ∗ I, with S_x = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]];
G_y = S_y ∗ I, with S_y = S_xᵀ, where I is the image (or image block) and ∗ denotes convolution;
Specifically, the saturation calculation of step S12 includes the following sub-steps:
step A1, first normalizing to the [0, 1] range, each channel being expressed as: R' = R / 255, G' = G / 255, B' = B / 255;
step A2, calculating the maximum and minimum of the three channel colors, respectively expressed as:
C_max = max(R', G', B');
C_min = min(R', G', B'), Δ = C_max − C_min;
step A3, calculating the hue H' according to the channel where the maximum value is located, and expressing the angle in degrees:
If C_max = R', then: H' = ((G' − B') / Δ) mod 6;
If C_max = G', then: H' = (B' − R') / Δ + 2;
If C_max = B', then: H' = (R' − G') / Δ + 4;
H' is converted to the degree range [0, 360): H = 60° × H';
When Δ = 0, i.e. the image color is a gray level, H is not meaningful and is set to 0 or another specific value;
step A4, calculating the saturation, with the formula:
S = Δ / C_max (S = 0 when C_max = 0);
step A5, calculating the brightness value; in the HSV model, the brightness or maximum component value of the color is expressed as V = C_max.
Specifically, the calculation of the color variance in step S14 includes:
the input image block is composed of M×N pixels, and the color of each pixel is represented by an RGB triplet; the color variance of the block is then, for each channel, the average of the squared differences between each pixel value and the channel mean, expressed as:
μ_R = (1 / (M·N)) Σ_{i,j} R(i, j);
μ_G = (1 / (M·N)) Σ_{i,j} G(i, j);
μ_B = (1 / (M·N)) Σ_{i,j} B(i, j);
σ_R² = (1 / (M·N)) Σ_{i,j} (R(i, j) − μ_R)²;
σ_G² = (1 / (M·N)) Σ_{i,j} (G(i, j) − μ_G)²;
σ_B² = (1 / (M·N)) Σ_{i,j} (B(i, j) − μ_B)²;
wherein μ_R, μ_G and μ_B are respectively the means of the three color channels of the image block, and σ_R², σ_G² and σ_B² are respectively the variances of the three color channels.
Specifically, the parameter dynamic adjustment mechanism of step S2 includes:
contracting the saturation upper limit according to its adjustment formula and expanding iteratively;
contracting the brightness upper limit;
after the parameters are adjusted, performing the screening again.
Specifically, the standby-strategy multi-feature fusion correction includes:
edge density priority: texture-rich regions are extracted through Canny edge detection while a threshold is set;
brightness extremum sampling: the blocks in the top 5% of brightness are selected as candidates;
color diversity weighting: the HSV spatial distribution entropy is calculated and the top 20% diversity blocks are selected;
the weights of the three are assigned as [0.4, 0.3, 0.3], and the final reference white point is output through a weighted median.
Specifically, the gain calculation and constraint are based on the average color values output by the main strategy and the standby strategy, and a dynamic constraint is set at the same time to limit the gain within a certain interval; each channel gain is computed from this reference average color according to the gain calculation formula.
The method has the advantages that it solves the color-cast problem when white/gray areas are lacking in the image, realizes automatic parameter adjustment under severe changes in illumination conditions, and suppresses noise caused by gain amplification in low-illumination scenes; block-parallel processing saves hardware cost on low-end devices; accuracy is improved in monochromatic and mixed scenes; and the color restoration accuracy of the camera and the overall imaging quality are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a self-adaptive white balance method based on scene perception and multi-feature fusion in an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Some embodiments of the present invention are described in detail below with reference to fig. 1. The following embodiments and features of the embodiments may be combined with each other without conflict.
The invention provides a self-adaptive white balance method based on scene perception and multi-feature fusion, which is shown in fig. 1 and comprises the following steps:
Step S1, inputting an RGB image for multi-feature calculation, and screening out effective image blocks that meet the conditions of a saturation range, a brightness range, a color variance threshold and an edge density threshold through dynamic thresholds;
Step S2, if the number of effective blocks is lower than a preset threshold, starting the parameter dynamic adjustment mechanism to carry out secondary block screening;
Step S3, if the number of effective blocks is still insufficient after the secondary block screening of step S2, activating the standby strategy to perform multi-feature fusion correction and output a white-balanced image; once there are sufficient effective blocks, switching to the main strategy for processing and outputting the white-balanced image through gain calculation and constraint.
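Before detailing each step, the overall control flow can be illustrated with a short sketch. The following Python fragment is a simplified, self-contained rendition of the S1-S3 pipeline under several assumptions: the image is a float RGB array in [0, 1], block screening uses only the saturation and brightness ranges from the description, the S2 threshold relaxation is omitted, and the standby strategy is replaced by a plain gray-world fallback; it is not the reference implementation of the invention.

import numpy as np

def simple_scene_aware_awb(img, block=32, min_valid=5):
    # S1: screen 32x32 blocks by mean saturation [0.1, 0.25] and brightness [0.15, 0.9]
    H, W, _ = img.shape
    valid_means = []
    for i in range(0, H - block + 1, block):        # partial edge blocks are skipped here
        for j in range(0, W - block + 1, block):
            b = img[i:i + block, j:j + block]
            v = b.max(axis=2)                                    # HSV value channel
            s = np.where(v > 0, (v - b.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
            if 0.1 <= s.mean() <= 0.25 and 0.15 <= v.mean() <= 0.9:
                valid_means.append(b.reshape(-1, 3).mean(axis=0))
    # S3 main strategy: reference color from the valid blocks
    if len(valid_means) >= min_valid:
        ref = np.mean(valid_means, axis=0)
    # S3 backup strategy, simplified here to a gray-world estimate over the whole image
    else:
        ref = img.reshape(-1, 3).mean(axis=0)
    # gain calculation with the [0.7, 1.3] constraint from the description
    gains = np.clip(ref.mean() / np.maximum(ref, 1e-6), 0.7, 1.3)
    return np.clip(img * gains, 0.0, 1.0)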
The following describes the detailed technical scheme of each step by using specific embodiments:
1. Multi-feature calculation: effective image blocks meeting the conditions of the saturation range, brightness range, color variance threshold, edge density threshold, and so on are screened out through dynamic thresholds.
1) Image blocking
Dividing the input image into 32×32 pixel blocks, the number of image blocks is calculated as:
N_blocks = ⌈M / 32⌉ × ⌈N / 32⌉;
where M and N represent the height and width of the image, respectively.
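A minimal sketch of the blocking step, assuming a NumPy RGB array of shape (M, N, 3); the ceiling division reproduces the block-count formula above, and edge blocks are simply allowed to be smaller than 32×32:

import numpy as np

def split_into_blocks(img, block_size=32):
    M, N = img.shape[:2]
    n_rows = -(-M // block_size)          # ceil(M / block_size)
    n_cols = -(-N // block_size)          # ceil(N / block_size)
    blocks = [img[i * block_size:(i + 1) * block_size,
                  j * block_size:(j + 1) * block_size]
              for i in range(n_rows) for j in range(n_cols)]
    return blocks, n_rows * n_cols        # list of blocks and total block count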
2) Saturation calculation
The mathematical formulas for converting the RGB color model into the HSV (Hue, Saturation, Value) color model are as follows:
a) Normalize to the [0, 1] range:
R' = R / 255, G' = G / 255, B' = B / 255;
b) Calculate the maximum and minimum of the three channel colors:
C_max = max(R', G', B'), C_min = min(R', G', B'), Δ = C_max − C_min;
c) Calculate the Hue (Hue)
The hue H is calculated according to the channel where the maximum value is located, with the angle expressed in degrees;
If C_max = R', then: H' = ((G' − B') / Δ) mod 6;
If C_max = G', then: H' = (B' − R') / Δ + 2;
If C_max = B', then: H' = (R' − G') / Δ + 4;
Convert H' to the degree range [0, 360): H = 60° × H';
When Δ = 0, i.e. the image color is a gray level, H has no meaning and is typically set to 0 or another specific value.
D) Calculate Saturation (Saturation)
The saturation S can be defined as: S = Δ / C_max (S = 0 when C_max = 0);
e) Calculate brightness/Value (Value)
In the HSV model, V generally represents the brightness or maximum component value of a color:
V = C_max;
Here, the saturation range is initially set to [0.1, 0.25], excluding interference from oversaturated regions (e.g. highlights) and low-saturation regions (e.g. gray areas).
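The conversion above follows the standard RGB-to-HSV formulas; a small per-pixel sketch, together with the initial [0.1, 0.25] saturation screen applied to a block, could look as follows. The per-block test on the mean saturation is an assumption, since the text does not state how pixel saturations are aggregated.

import numpy as np

def rgb_to_hsv(r, g, b):
    # r, g, b already normalized to [0, 1]
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0                                   # gray: hue undefined, set to 0
    elif cmax == r:
        h = (60.0 * ((g - b) / delta)) % 360.0
    elif cmax == g:
        h = 60.0 * ((b - r) / delta + 2.0)
    else:
        h = 60.0 * ((r - g) / delta + 4.0)
    s = 0.0 if cmax == 0 else delta / cmax        # saturation
    v = cmax                                      # value = maximum component
    return h, s, v

def block_passes_saturation(block, lo=0.1, hi=0.25):
    sats = [rgb_to_hsv(*px)[1] for px in block.reshape(-1, 3)]
    return lo <= float(np.mean(sats)) <= hi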
3) Brightness calculation
Calculating the brightness of each block:
L(i, j) = (1 / P) · Σ_{p=1..P} V_p, where P is the number of pixels in the block and V_p is the brightness of the p-th pixel;
Sorting the luminance values:
(L_sorted, idx) = sort(vec(L)), sorted from largest to smallest;
The brightness range is selected as [0.15, 0.9], avoiding dark or overexposed areas.
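A short sketch of this brightness screen, assuming (since the text does not reproduce the exact formula) that the brightness of a block is the mean of its HSV value channel; blocks are sorted from brightest to darkest and kept if their brightness falls inside [0.15, 0.9]:

import numpy as np

def brightness_screen(blocks, lo=0.15, hi=0.9):
    # blocks: list of float RGB arrays in [0, 1]
    lum = np.array([b.max(axis=2).mean() for b in blocks])   # mean value channel per block
    order = np.argsort(-lum)                                  # descending brightness order
    kept = [int(k) for k in order if lo <= lum[k] <= hi]      # block indices that pass
    return lum, order, kept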
4) Color variance calculation
The color variance is used to measure the degree of change of pixel colors in the image. The input image block is composed of M×N pixels, and the color of each pixel can be represented by an RGB triplet; the color variance of the block is then, for each channel, the average of the squared differences between each pixel value and the channel mean:
μ_R = (1 / (M·N)) Σ_{i,j} R(i, j);
μ_G = (1 / (M·N)) Σ_{i,j} G(i, j);
μ_B = (1 / (M·N)) Σ_{i,j} B(i, j);
σ_R² = (1 / (M·N)) Σ_{i,j} (R(i, j) − μ_R)²;
σ_G² = (1 / (M·N)) Σ_{i,j} (G(i, j) − μ_G)²;
σ_B² = (1 / (M·N)) Σ_{i,j} (B(i, j) − μ_B)²;
wherein μ_R, μ_G and μ_B are respectively the means of the three color channels of the image block, and σ_R², σ_G² and σ_B² are respectively the variances of the three color channels.
The variance of each RGB channel in a block is required to exceed 0.02, which ensures color diversity and selects candidate areas that may represent neutral colors.
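A minimal sketch of the per-channel variance test, assuming a float RGB block in [0, 1] so that the 0.02 threshold applies to normalized values:

import numpy as np

def color_variance_ok(block, thresh=0.02):
    pixels = block.reshape(-1, 3).astype(np.float64)
    var = pixels.var(axis=0)              # (sigma_R^2, sigma_G^2, sigma_B^2)
    return bool((var > thresh).all()), var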
5) Edge density calculation
Edge density is typically used to describe how dense the edge information in an image is. Edges can be detected with a gradient operator (e.g. Sobel or Prewitt), and the edge density is then the number of edge pixels divided by the total number of pixels:
ρ_edge = N_edge / N_total;
The gradients in the horizontal and vertical directions at each pixel point (x, y) are calculated as:
G_x = S_x ∗ I, with S_x = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]];
G_y = S_y ∗ I, with S_y = S_xᵀ;
The gradient magnitude at each point (x, y) is:
G(x, y) = √(G_x(x, y)² + G_y(x, y)²);
Here the edge density threshold is taken to be 0.3.
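A sketch of the edge-density computation using the Sobel operator named above; the gradient-magnitude binarization threshold (0.2 on a [0, 1] grayscale image) is an illustrative assumption, while the resulting density is what gets compared against the 0.3 threshold:

import numpy as np

def edge_density(gray, grad_thresh=0.2):
    # gray: 2-D float array in [0, 1]
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # Sobel x
    ky = kx.T                                                              # Sobel y
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray, dtype=np.float64)
    gy = np.zeros_like(gray, dtype=np.float64)
    for di in range(3):                       # direct 3x3 correlation with the kernels
        for dj in range(3):
            win = pad[di:di + gray.shape[0], dj:dj + gray.shape[1]]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    mag = np.sqrt(gx ** 2 + gy ** 2)          # gradient magnitude per pixel
    return float((mag > grad_thresh).mean())  # edge pixels / total pixels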
2. Dynamic parameter adjustment
When the number of valid blocks is insufficient (e.g., <5 blocks), a parameter dynamic adjustment mechanism is started:
Contracting the saturation upper limit according to the adjustment formula and expanding the search iteratively;
Contracting the brightness upper limit;
Rescreening after adjusting the parameters solves the problem of poor scene adaptability caused by a single fixed threshold, for example in scenes where shadows and highlights coexist; a sketch of this iterative relaxation follows.
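The concrete contraction/expansion formula is not reproduced in the text, so the linear step below (and its size) is purely an assumption, as is the choice to widen the upper limits until enough blocks pass:

def relax_and_rescreen(screen_fn, sat_hi=0.25, lum_hi=0.9,
                       min_valid=5, step=0.05, max_iter=5):
    # screen_fn(sat_hi=..., lum_hi=...) returns the list of valid blocks for those bounds
    valid = screen_fn(sat_hi=sat_hi, lum_hi=lum_hi)
    for _ in range(max_iter):
        if len(valid) >= min_valid:
            break
        sat_hi = min(1.0, sat_hi + step)      # assumed linear adjustment per iteration
        lum_hi = min(1.0, lum_hi + step)
        valid = screen_fn(sat_hi=sat_hi, lum_hi=lum_hi)
    return valid, sat_hi, lum_hi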
3. Standby policy activation
When the secondary screening is still not satisfied, a multi-feature fusion strategy is directly adopted:
Edge density priority: texture-rich regions are extracted through Canny edge detection (threshold 0.1), exploiting the fact that edge regions generally contain neutral-color features;
Brightness extremum sampling: the blocks in the top 5% of brightness are selected as candidates to address color distortion in highlight areas;
Color diversity weighting: the HSV spatial distribution entropy is calculated and the top 20% diversity blocks are selected, avoiding misjudgment in monochromatic scenes;
The weights of the three are assigned as [0.4, 0.3, 0.3], and the final reference white point is output through a weighted median; a sketch of this fusion is given below.
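A sketch of the fusion, given per-block features that have already been computed; the normalization of the edge-density score, the indicator form of the brightness and diversity terms, and the per-channel weighted median are all assumptions about details the text leaves open:

import numpy as np

def weighted_median(values, weights):
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, cum[-1] / 2.0)]

def backup_reference_white(block_means, edge_density, brightness, hue_entropy,
                           weights=(0.4, 0.3, 0.3)):
    block_means = np.asarray(block_means, float)              # (n_blocks, 3) mean RGB
    edge = np.asarray(edge_density, float)
    lum = np.asarray(brightness, float)
    ent = np.asarray(hue_entropy, float)
    top5 = (lum >= np.percentile(lum, 95)).astype(float)      # top 5% brightness blocks
    top20 = (ent >= np.percentile(ent, 80)).astype(float)     # top 20% diversity blocks
    score = (weights[0] * edge / max(edge.max(), 1e-6)
             + weights[1] * top5 + weights[2] * top20)
    # per-channel weighted median of the block means -> reference white point
    return np.array([weighted_median(block_means[:, c], score) for c in range(3)])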
4. Gain calculation and constraint
The average color value is computed from the blocks selected by the main or standby strategy;
Gain formula: each channel gain is obtained from this reference average color and is then constrained;
Dynamic constraint: the gain is limited to the interval [0.7, 1.3], preventing overcorrection at extreme color temperatures;
The correction is applied through per-channel computation and matrix multiplication, ensuring natural color transitions.
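A sketch of the gain computation and constraint; because the exact gain formula is not reproduced in the text, the gray-world-style form below (neutral target equal to the mean of the reference color, divided per channel) is an assumption, while the [0.7, 1.3] clamp and the per-channel multiplication follow the description:

import numpy as np

def apply_constrained_gains(img, ref_rgb, lo=0.7, hi=1.3):
    # img: float RGB array in [0, 1]; ref_rgb: reference average color (R, G, B)
    ref = np.asarray(ref_rgb, dtype=np.float64)
    gray_target = ref.mean()                                  # neutral target level
    gains = np.clip(gray_target / np.maximum(ref, 1e-6), lo, hi)
    out = img.astype(np.float64) * gains                      # per-channel correction
    return np.clip(out, 0.0, 1.0), gains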
The advantages of the key mechanisms used in this solution are as follows:
1) Dynamic parameter adjustment mechanism
By iteratively shrinking the saturation/brightness threshold ranges, the poor scene adaptability caused by fixed thresholds is resolved;
Based on image information entropy theory, when the effective blocks are insufficient, the constraint conditions are gradually relaxed to capture potential neutral-color regions;
Compared with other methods, the parameter adjustment amplitude is dynamically related to the image characteristics.
2) Multi-feature fusion standby strategy
Edge density detection: high-texture regions are extracted with the Canny operator, exploiting the fact that edge regions (such as object contours) generally contain neutral-color features;
Color diversity weighting: an HSV spatial distribution entropy calculation is introduced, with the formula:
H_entropy = −Σ_k p_k · log p_k;
wherein p_k is the distribution probability of the k-th color bin in each channel; the higher the entropy value, the stronger the color diversity;
The advantage of this scheme is that physical features and statistical features are fused, giving higher robustness.
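A small sketch of the entropy term, treating a normalized hue histogram as the probability distribution; the 36-bin resolution is an illustrative choice:

import numpy as np

def hue_distribution_entropy(hue_degrees, bins=36):
    hist, _ = np.histogram(hue_degrees, bins=bins, range=(0.0, 360.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # higher entropy = stronger color diversity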
3) Gain constraint and weighted median
Gain limitation: color overflow at extreme color temperatures is prevented through the [0.7, 1.3] interval constraint;
Weighted median calculation: the weight distribution (edge 0.4 / brightness 0.3 / diversity 0.3) is applied to the fused candidate blocks.
The scheme can be expanded according to actual application, such as:
Dynamic parameter adjustment formula extensions, including multiple adjustment modes such as linear contraction or exponential decay;
Adding color temperature curve matching or neural network prediction as a standby strategy;
Gain constraint range expansion, namely dynamically adjusting the constraint interval according to the image contrast (e.g. relaxing it to [0.5, 1.5] for low-contrast scenes);
Block screening alternatives, namely employing non-uniform blocks (e.g., adaptive meshing based on edge density).
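As an example of the contrast-dependent constraint extension, a sketch that widens the gain interval for low-contrast scenes; the contrast measure (standard deviation of the value channel) and the 0.15 switching threshold are assumptions:

import numpy as np

def gain_bounds_for_contrast(value_channel, low_contrast_thresh=0.15):
    contrast = float(np.std(value_channel))          # simple global contrast proxy
    return (0.5, 1.5) if contrast < low_contrast_thresh else (0.7, 1.3)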
For the foregoing embodiments, a series of combinations of actions are described for simplicity of description, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, it should be understood by those skilled in the art that the embodiments described in the specification are preferred embodiments and that the actions involved are not necessarily required for the present application.
In the above embodiments, the basic principle and main features of the present invention and advantages of the present invention are described. It will be appreciated by persons skilled in the art that the present invention is not limited by the foregoing embodiments, but rather is shown and described in what is considered to be illustrative of the principles of the invention, and that modifications and changes can be made by those skilled in the art without departing from the spirit and scope of the invention, and therefore, is within the scope of the appended claims.

Claims (7)

1. An adaptive white balance method based on scene perception and multi-feature fusion, characterized by comprising the following steps:
Step S1: inputting an RGB image for multi-feature calculation, and screening out valid image blocks that meet the conditions of a saturation range, a brightness range, a color variance threshold and an edge density threshold through dynamic thresholds;
Step S2: if the number of valid blocks is lower than a preset threshold, starting the parameter dynamic adjustment mechanism to perform secondary block screening;
Step S3: if the number of valid blocks is still insufficient after the secondary block screening of step S2, activating the backup strategy to perform multi-feature fusion correction and output a white-balanced image; when there are enough valid blocks, switching to the main strategy for processing and outputting the white-balanced image through gain calculation and constraint.

2. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 1, wherein step S1 comprises the following sub-steps:
Step S11: dividing the input image into 32×32 pixel blocks, the number of image blocks being:
N_rows = ⌈M / B⌉, N_cols = ⌈N / B⌉, N_blocks = N_rows × N_cols;
wherein M and N represent the height and width of the image, respectively, and B represents the size of an image block;
Step S12: converting the RGB color model into the HSV color model and calculating the saturation;
Step S13: calculating the brightness of each block and sorting the blocks, and selecting according to the set brightness range; the brightness of each block is calculated as:
block(i, j) = img[i·B : (i+1)·B, j·B : (j+1)·B], L(i, j) = (1 / P) · Σ_{p=1..P} V_p;
the brightness ordering is expressed as:
L_vec = vec(L), (L_sorted, idx) = sort(L_vec);
wherein i represents the row index of the pixel block, j represents the column index of the pixel block, img[ ] represents the image range, block(i, j) represents the extracted image block, L(i, j) represents the brightness value, P represents the number of pixels, p represents the pixel index, V_p represents the brightness of the p-th pixel, L_vec represents the brightness vector, vec( ) represents vectorization, L_sorted represents the brightness ordering, idx represents the brightness ordering index, and sort( ) represents the sorting function, sorting from largest to smallest;
Step S14: calculating the color variance to measure the degree of change of pixel colors in the image;
Step S15: calculating the edge density, which describes the density of edge information in the image, expressed as:
ρ_edge = N_edge / N_total;
wherein G(x, y) represents the gradient magnitude at each point (x, y), calculated as:
G(x, y) = √(G_x(x, y)² + G_y(x, y)²);
wherein G_x and G_y represent the gradients in the horizontal and vertical directions at each pixel point (x, y).

3. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 2, wherein the saturation calculation in step S12 comprises the following sub-steps:
Step A1: first normalizing to the [0, 1] range, each channel being expressed as: R' = R / 255, G' = G / 255, B' = B / 255;
Step A2: calculating the maximum and minimum of the three channel colors, respectively expressed as:
C_max = max(R', G', B'), C_min = min(R', G', B'), Δ = C_max − C_min;
Step A3: calculating the hue H' according to the channel where the maximum value is located, and expressing the angle in degrees:
if C_max = R', then H' = ((G' − B') / Δ) mod 6;
if C_max = G', then H' = (B' − R') / Δ + 2;
if C_max = B', then H' = (R' − G') / Δ + 4;
converting H' to the degree range [0, 360): H = 60° × H';
when Δ = 0, i.e. the image color is a gray level, H has no meaning and is set to 0 or another specific value;
Step A4: calculating the saturation with the formula: S = Δ / C_max;
Step A5: calculating the brightness value; in the HSV model, the brightness or maximum component value of the color is expressed as V = C_max.

4. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 3, wherein the calculation of the color variance in step S14 comprises: the input image block is composed of M×N pixels, and the color of each pixel is represented by an RGB triplet; the color variance of the block is, for each channel, the average of the squared differences between each pixel value and the channel mean, expressed as:
μ_R = (1 / (M·N)) Σ_{i,j} R(i, j), μ_G = (1 / (M·N)) Σ_{i,j} G(i, j), μ_B = (1 / (M·N)) Σ_{i,j} B(i, j);
σ_R² = (1 / (M·N)) Σ_{i,j} (R(i, j) − μ_R)², σ_G² = (1 / (M·N)) Σ_{i,j} (G(i, j) − μ_G)², σ_B² = (1 / (M·N)) Σ_{i,j} (B(i, j) − μ_B)²;
wherein μ_R, μ_G and μ_B are respectively the means of the three color channels of the image block, and σ_R², σ_G² and σ_B² are respectively the variances of the three color channels.

5. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 1, wherein the parameter dynamic adjustment mechanism in step S2 specifically comprises:
contracting the saturation upper limit according to the adjustment formula and expanding iteratively;
contracting the brightness upper limit;
after adjusting the parameters, performing the screening again.

6. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 5, wherein the backup-strategy multi-feature fusion correction comprises:
edge density priority: texture-rich regions are extracted through Canny edge detection while a threshold is set;
brightness extremum sampling: the blocks in the top 5% of brightness are selected as candidates;
color diversity weighting: the HSV spatial distribution entropy is calculated and the top 20% diversity blocks are selected;
the weights of the three are assigned as [0.4, 0.3, 0.3], and the final reference white point is output through a weighted median.

7. The adaptive white balance method based on scene perception and multi-feature fusion according to claim 6, wherein the gain calculation and constraint are based on the average color value output by the primary and backup strategies, and a dynamic constraint is set to limit the gain within a certain interval; each channel gain is computed from the reference average color according to the gain calculation formula.
CN202511098364.2A 2025-08-06 2025-08-06 An adaptive white balance method based on scene perception and multi-feature fusion Pending CN120730046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511098364.2A CN120730046A (en) 2025-08-06 2025-08-06 An adaptive white balance method based on scene perception and multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202511098364.2A CN120730046A (en) 2025-08-06 2025-08-06 An adaptive white balance method based on scene perception and multi-feature fusion

Publications (1)

Publication Number Publication Date
CN120730046A true CN120730046A (en) 2025-09-30

Family

ID=97154662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511098364.2A Pending CN120730046A (en) 2025-08-06 2025-08-06 An adaptive white balance method based on scene perception and multi-feature fusion

Country Status (1)

Country Link
CN (1) CN120730046A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination