Disclosure of Invention
The invention provides a method, a system, an electronic device and a storage medium for correcting an image sensor seam, which can solve at least one of the above technical problems.
In order to achieve the above purpose, the present invention proposes the following technical solutions:
an image sensor seam correction method, comprising:
under a uniform light source, collecting a plurality of original images with different brightness, and recording pixel values of the original images;
calculating a correction coefficient of the original image based on the pixel value;
dividing the original image so that the effective region contains a stitching line and the sensor images on both sides of the stitching line;
calculating a region correction coefficient corresponding to an effective sub-region based on a pixel value of any effective sub-region image;
constructing a fusion coefficient function, and calculating a fusion correction coefficient based on the correction coefficient and the region correction coefficient;
and correcting the pixel value based on the fusion correction coefficient, and outputting a corrected image.
Further, constructing the fusion coefficient function includes: taking the stitching line position S as an axis, setting a left fusion width L_1 and a right fusion width L_2 in the sensor images on the left and right sides of S; and constructing a fusion coefficient function f(i) around the stitching line position S within the set left and right fusion widths, where i denotes the column (or row) index of an image pixel.
Further, the fusion coefficient function f(i) satisfies f(S-L_1)=0, f(S+L_2)=0 and f(S)=1, and f(i) is monotonically increasing on [S-L_1, S] and monotonically decreasing on [S, S+L_2].
Further, the sum of the fusion widths set in any one sensor image is smaller than the width of that sensor image.
Further, before correcting the pixel value based on the fusion correction coefficient, the method further includes: storing the fusion correction coefficient corresponding to each effective sub-region.
Further, correcting the pixel value based on the fusion correction coefficient includes: locating the effective sub-region based on the pixel value and pixel position of the original image, determining the fusion correction coefficient corresponding to that sub-region, correcting the pixel value, and outputting a corrected image.
The invention also provides an image sensor seam correction system, which comprises:
the image acquisition unit acquires a plurality of original images with different brightness under a uniform light source and records pixel values of the original images;
an image segmentation unit for segmenting the original image so that the effective region contains a stitching line and sensor images on two sides of the stitching line;
a parameter calculation unit that calculates correction coefficients of an original image based on the pixel values of the original image; calculates a region correction coefficient corresponding to an effective sub-region based on the pixel values of any effective sub-region image; and constructs a fusion coefficient function and calculates a fusion correction coefficient based on the correction coefficient and the region correction coefficient;
and the correction unit is used for finding out the effective subarea based on the pixel value and the pixel position of the original image, determining a fusion correction coefficient corresponding to the effective subarea, correcting the pixel value and outputting a corrected image.
Further, the system further comprises:
and the storage unit is used for storing the fusion correction coefficient corresponding to any effective subarea.
The invention also proposes an electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the image sensor seam correction method as described above.
The present invention also proposes a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the image sensor seam correction method as described above.
The beneficial effects of the invention are as follows: the invention provides a segmented fusion correction method that applies different correction coefficients in different regions of the physical space to reduce the imaging seam effect, so that the coefficients can be recalibrated according to environmental changes, giving the method adaptive capability.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
A horizontal physical stitching line forms a horizontal demarcation line on the image, and a vertical physical stitching line forms a vertical demarcation line; the corresponding correction algorithms are commonly called horizontal-stripe correction and vertical-stripe correction, respectively. The correction approach for a horizontal stitching line is identical to that for a vertical stitching line, so the embodiments of the invention are illustrated only with a vertical demarcation line (vertical-stripe correction). The correction method provided by the invention is equally applicable to seams in the horizontal direction: horizontal seam correction is obtained simply by exchanging the row and column coordinates in the scheme.
Example 1
As shown in fig. 2, the embodiment takes a method of correcting a seam in a vertical direction as an example, and specifically includes:
under the uniform light source, a plurality of original images with different brightness are collected, and pixel values of the original images are recorded, wherein the pixel values comprise row pixel values and column pixel values.
The invention is described by taking the vertical seam correction method as an example, so the pixel values recorded herein are column pixel values, indexed by the column coordinate of the image. The original image includes several sensor images, with a stitching line as the boundary between sensor images.
In addition, if a horizontal stitching line is to be corrected, the row coordinate is used as the index, and the pixel values of the original image are recorded as row pixel values; the specific correction method is the same as that for the vertical seam, and only the column coordinates in the vertical seam correction method need to be replaced by row coordinates.
In this embodiment, several images with different brightness are collected under a uniform light source, and the column pixel values x_{m,i} of each image are recorded, where the subscript m denotes the current light source brightness level and i denotes the i-th column of the image.
At the first light source brightness level, the column pixel values x_{1,i} of the original image are recorded as (x_{1,1}, x_{1,2}, ..., x_{1,i}); the light source brightness is then changed, and at the second level the column pixel values x_{2,i} are recorded as (x_{2,1}, x_{2,2}, ..., x_{2,i}).
Based on the pixel values, correction coefficients of the original image are calculated.
In this embodiment, taking the column pixel values of the original image under two light source brightness levels as an example, a traditional vertical-stripe correction algorithm is used to calculate the correction coefficients of the original image. The correction coefficients of the original image include a multiplication coefficient k_i and an addition coefficient b_i.
From the recorded column pixel values of the original images at the different brightness levels, the column pixel means y_{m,i} are calculated; the corresponding correction coefficients, including the multiplication coefficient k_i and the addition coefficient b_i, are then calculated from the column pixel means y_{m,i}.
In the traditional vertical-stripe correction algorithm, the calculated multiplication coefficient k_i and addition coefficient b_i are used to correct any pixel p_{r,i} with the correction formula p_0 = (p_{r,i} + b_i) × k_i, where the subscript r denotes image row r and i denotes image column i; p_{r,i} is the pixel value at row r, column i of any sensor image (the pixel input value), and p_0 is the pixel output value.
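The two-level calibration above can be sketched as follows. This is a hypothetical reconstruction: it assumes a standard two-point correction in which the per-column means y_{1,i}, y_{2,i} are mapped to global target values Y_1, Y_2 (e.g., the full-image means at each brightness level); the patent's exact coefficient formula is not reproduced in the text, so only the correction formula p_0 = (p_{r,i} + b_i) × k_i is taken directly from it.

```python
def column_coefficients(y1, y2, Y1, Y2):
    """Per-column two-point coefficients (hypothetical reconstruction).

    Solves (y1[i] + b) * k = Y1 and (y2[i] + b) * k = Y2 for each column i.
    """
    ks, bs = [], []
    for m1, m2 in zip(y1, y2):
        k = (Y1 - Y2) / (m1 - m2)   # multiplication coefficient k_i
        b = Y1 / k - m1             # addition coefficient b_i
        ks.append(k)
        bs.append(b)
    return ks, bs


def correct_pixel(p, k, b):
    # Correction formula from the text: p_0 = (p_{r,i} + b_i) * k_i
    return (p + b) * k
```

With these coefficients, a column whose mean was y_{1,i} at the first brightness level is mapped exactly to the target Y_1, removing per-column (vertical-stripe) nonuniformity.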
Preferably, in practical applications, a plurality of (at least two) light source brightness levels can be set and the corresponding column pixel values of the sensor images acquired. If the column pixel values are recorded under more than two light source brightness levels, other calculation methods, such as least squares fitting, can be used to obtain the correction coefficients from the column pixel means.
The original image is segmented so that the effective region contains stitching lines and sensor images on both sides of the stitching lines.
The number and direction of the dividing lines can be set freely. The division must produce several effective sub-regions: any effective sub-region contains the stitching line and the sensor images on both sides of it, and together all the effective sub-regions reconstruct the stitching line and the sensor images on both sides. In the present invention, at least one dividing line is required, dividing the entire image into 2 regions.
As shown in fig. 3, in this embodiment a dividing line is set along the direction perpendicular to the stitching line, dividing the original image into upper and lower horizontal regions. When the seam runs in the vertical direction, the number of dividing lines is less than half the number of rows of the sensor image; when the seam runs in the horizontal direction, the number of dividing lines is less than half the number of columns of the sensor image.
And calculating a region correction coefficient corresponding to the effective sub-region based on the pixel value of any effective sub-region image.
In this embodiment, referring to fig. 3, the original image is divided into upper and lower horizontal regions, and each horizontal region contains a stitching line and the sensor images on both sides of it, so both horizontal regions are effective sub-regions.
The region correction coefficients for each horizontal region j are calculated as follows: from the recorded column pixel values of the original image, the column pixel values x_{m,i,j} corresponding to horizontal region j are extracted, where the subscript m denotes the current light source brightness level, i denotes the i-th column of the image, and j denotes the region number.
At the first light source brightness level, the column pixel values corresponding to horizontal region j are recorded as (x_{1,1,j}, x_{1,2,j}, ..., x_{1,i,j}); at the second level they are recorded as (x_{2,1,j}, x_{2,2,j}, ..., x_{2,i,j}).
The region column pixel means y_{m,i,j} are calculated for each region, and from them the corresponding region multiplication coefficient k_{j,i} and region addition coefficient b_{j,i} are calculated.
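As a sketch, the per-region column means can be obtained by restricting the mean to the rows of each horizontal region; the function and variable names here are illustrative assumptions, not taken from the patent.

```python
def region_column_means(image, row_ranges):
    """Column means restricted to each horizontal region.

    image: list of rows (one brightness level);
    row_ranges: half-open (r0, r1) row spans, one per region j.
    """
    means = []
    for r0, r1 in row_ranges:
        rows = image[r0:r1]
        ncols = len(rows[0])
        # mean over the region's rows, separately for each column
        means.append([sum(row[c] for row in rows) / len(rows)
                      for c in range(ncols)])
    return means
```

The region coefficients k_{j,i}, b_{j,i} then follow from these means in the same way as the global coefficients follow from the full-column means.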
and constructing a fusion coefficient function, and calculating a fusion correction coefficient based on the correction coefficient and the regional correction coefficient.
Constructing the fusion coefficient function specifically includes: taking the stitching line position S as an axis, setting a left fusion width L_1 and a right fusion width L_2 in the sensor images on the left and right sides of S, so that the total fusion width is 2L; within the total fusion width, a fusion coefficient function f(i) is constructed around the stitching line position S, where i denotes the column (or row) index of an image pixel. The total fusion width set in the present invention is 2L = L_1 + L_2.
The fusion coefficient function satisfies f(S-L_1)=0, f(S+L_2)=0 and f(S)=1, and f(i) is monotonically increasing on [S-L_1, S] and monotonically decreasing on [S, S+L_2].
The range of the total fusion width is related to the number of rows and columns of the sensor and to the positions of the seams; the sum of the fusion widths set in any sensor image is smaller than the width of that sensor image.
As shown in fig. 4, the original image is stitched from 4 sensor images and contains 3 physical stitching lines. The width of sensor image 1 is d_1, i.e., the distance between physical stitching line 1 and the left boundary; the width of sensor image 2 is d_2, which is also the distance between physical stitching line 1 and physical stitching line 2; the width of sensor image 3 is d_3, i.e., the distance between physical stitching line 2 and physical stitching line 3; the width of sensor image 4 is d_4, i.e., the distance between physical stitching line 3 and the right boundary.
The total fusion width range is set for physical stitching line 1 as follows: a left fusion width L_1 is set in sensor image 1, and a right fusion width L_2 is set in sensor image 2. For physical stitching line 2: a left fusion width L_3 is set in sensor image 2, and a right fusion width L_4 is set in sensor image 3. For physical stitching line 3: a left fusion width L_5 is set in sensor image 3, and a right fusion width L_6 is set in sensor image 4.
The fusion widths must satisfy: in sensor image 1, L_1 < d_1; in sensor image 2, L_2 + L_3 < d_2; in sensor image 3, L_4 + L_5 < d_3; in sensor image 4, L_6 < d_4.
As shown in fig. 5, this embodiment proposes a function f(i) that satisfies the above requirements.
the function f (i) proposed in the present embodiment is a linear function, and is only used as an example of a fusion coefficient function, and the specific definition of the fusion coefficient function may be set according to the actual situation and the requirement.
The fusion correction coefficients for each horizontal region j are then calculated based on the determined fusion coefficient function, the correction coefficients of the original image, and the region correction coefficients.
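The patent's fusion formula is not reproduced in this text. One plausible reconstruction, consistent with the constraints that f(i)=1 at the seam and f(i)=0 outside the fusion width, is a per-column convex blend in which the region coefficient dominates near the seam and the global coefficient dominates away from it. This is an assumption for illustration only, not the claimed formula:

```python
def fuse_coefficients(f_i, k_global, b_global, k_region, b_region):
    """Hypothetical blend of global and region coefficients weighted by f(i)."""
    k = f_i * k_region + (1.0 - f_i) * k_global
    b = f_i * b_region + (1.0 - f_i) * b_global
    return k, b
```

At the seam (f(i)=1) the fused coefficient equals the region coefficient; outside the fusion width (f(i)=0) it falls back to the global coefficient, so the correction transitions smoothly across the seam.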
in this embodiment, a nonvolatile memory is further provided, and is configured to store the calculated fusion correction coefficient, so that the fusion correction coefficient is conveniently found in a subsequent correction process, and correction efficiency is improved.
The pixel values are then corrected based on the fusion correction coefficients and a corrected image is output. Specifically: the effective sub-region is located according to the pixel position (r, i), the fusion correction coefficient corresponding to that sub-region is determined, the pixel value is corrected, and the corrected image is output.
As shown in fig. 6, the region j to which the current pixel belongs is determined from the pixel position (r, i), the corresponding fusion multiplication and addition coefficients are read from the nonvolatile memory, and the pixel value p_{r,i} is corrected with the linear correction formula; the correction result p_out is output to replace the pixel value p_{r,i}, completing the linear correction process. Here the subscript r denotes image row r and i denotes image column i; p_{r,i} is the pixel value at row r, column i of the sensor (the pixel input value), and p_out is the pixel output value.
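The per-pixel correction loop can be sketched as follows. The coefficient table layout (region j, column i → (k, b)) and the helper names are illustrative assumptions, while the linear form p_out = (p + b) × k follows the formula given in the text.

```python
def correct_image(image, row_ranges, coeff_table):
    """Apply fused linear correction per pixel.

    row_ranges: half-open (r0, r1) row spans defining each region j;
    coeff_table[j][i]: fused (k, b) coefficients for region j, column i
    (assumed storage layout).
    """
    out = [row[:] for row in image]
    for r, row in enumerate(image):
        # locate the region j containing row r
        j = next(idx for idx, (r0, r1) in enumerate(row_ranges)
                 if r0 <= r < r1)
        for i, p in enumerate(row):
            k, b = coeff_table[j][i]
            out[r][i] = (p + b) * k   # p_out = (p_{r,i} + b) * k
    return out
```

Looking the coefficients up by (j, i) is what makes the stored-coefficient variant fast: no per-pixel recomputation is needed at correction time.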
The image before processing in this embodiment is shown in fig. 7; the image obtained after applying the above correction method is shown in fig. 8.
Alternatively, no memory is provided and the fusion correction coefficients corresponding to the effective sub-regions are not stored. In that case, the specific correction process is: the region j to which the current pixel belongs is determined from the pixel position (r, i), the above parameter calculation process is repeated to compute the corresponding fusion correction coefficients, and the pixel value p_{r,i} is corrected with the linear correction formula; the correction result p_out is output to replace the pixel value p_{r,i}, completing the linear correction process.
Example 2
On the basis of embodiment 1, a new fusion coefficient function is proposed in this embodiment. The concrete construction process is as follows:
setting the symmetrical ranges of the left side and the right side as fusion width L by taking the stitching line position S as an axis, and setting the total fusion width as 2L; as a fusion coefficient function f (i), an axisymmetric function around the stitching line position S is used within the fusion width range, where i represents the number of columns or rows of image pixels.
The specific setting of the fusion width in this embodiment is shown in fig. 9.
The fusion coefficient function satisfies f(S-L)=0, f(S+L)=0 and f(S)=1, and f(i) monotonically increases on [S-L, S] (and, by symmetry, monotonically decreases on [S, S+L]).
This embodiment proposes a family of functions f(i), parameterized by a variable n, that satisfies the above requirements.
Here the variable n controls how quickly the function increases monotonically over the range [S-L, S], and L denotes half the total fusion width. The function family proposed in this embodiment is only an example of a fusion coefficient function; its specific definition can be chosen according to the actual situation and requirements.
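The family's expression appears only in the patent figures. One candidate consistent with every stated property (f(S±L)=0, f(S)=1; linear for n=1; increasing faster and faster for 0<n<1; increasing more and more slowly for n>1 on [S-L, S]) is f(i) = (1 - |i-S|/L)^(1/n). This reconstruction is an assumption, not the patented expression:

```python
def fusion_family(i, S, L, n):
    """Hypothetical symmetric fusion family f(i) = (1 - |i-S|/L)**(1/n)."""
    u = abs(i - S) / L          # normalized distance from the seam
    if u >= 1.0:
        return 0.0              # outside the fusion width
    return (1.0 - u) ** (1.0 / n)
```

With n=1 this reduces to the triangular (linear) function of embodiment 2; n=1/2 gives a convex profile and n=2 a concave one, matching embodiments 3 and 4.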
As shown in fig. 9, the distance between the stitching line and the left boundary of the left sensor 1 is d_1, and the distance between the stitching line and the right boundary of the right sensor 2 is d_2; the fusion width must satisfy L < min(d_1, d_2). To better exhibit the effect of region-divided fusion, this embodiment sets the fusion width L < min(d_1/2, d_2/2).
In practical applications, the fusion coefficient function f(i) can be computed for several values of n, and the final value of n can be determined according to the fusion effect of f(i), thereby determining the fusion correction coefficients.
In this embodiment, taking n = 1, the fusion coefficient function f(i) shown in fig. 10 is constructed.
example 3
In this embodiment, different fusion coefficient functions f(i) are set on the basis of embodiment 2. When 0 < n < 1, the function increases faster and faster over the range [S-L, S]. When n = 1/2, the graph of the function f(i) is shown in fig. 11.
example 4
In this embodiment, different fusion coefficient functions f (i) are set on the basis of embodiment 2. When n >1, the function monotonically increases more and more slowly over the [ S-L, S ] range.
When n = 2, the graph of the function f(i) is shown in fig. 12.
the invention provides a segmentation fusion correction method, which adopts different correction coefficients in different areas of a physical space to reduce an imaging seam effect, so that the method can recalibrate the coefficients according to environmental changes and has self-adaption capability.
The invention also provides an image sensor seam correction system, which comprises:
the image acquisition unit acquires a plurality of original images with different brightness under a uniform light source and records pixel values of the original images;
an image segmentation unit for segmenting the original image so that the effective region contains a stitching line and sensor images on two sides of the stitching line;
a parameter calculation unit that calculates correction coefficients of an original image based on the pixel values of the original image; calculates a region correction coefficient corresponding to an effective sub-region based on the pixel values of any effective sub-region image; and constructs a fusion coefficient function and calculates a fusion correction coefficient based on the correction coefficient and the region correction coefficient;
and the correction unit is used for finding out the effective subarea based on the pixel value and the pixel position of the original image, determining a fusion correction coefficient corresponding to the effective subarea, correcting the pixel value and outputting a corrected image.
The system further comprises: a storage unit for storing the fusion correction coefficient corresponding to any effective sub-region.
The invention also proposes an electronic device comprising a memory and a processor, the memory storing a computer program, the processor being arranged to run the computer program to perform the above-mentioned image sensor seam correction method.
The present invention also proposes a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the image sensor seam correction method as described above.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combined technical features, the combinations should be considered within the scope of this description.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.