Detailed Description
The terms used in the following description have their conventional meanings in the field; terms that are defined or explained in this specification are to be interpreted according to such descriptions or definitions.
The invention discloses a bad pixel compensation method and a bad pixel compensation device for compensating bad pixels of a sensor, wherein the sensor comprises four types of pixels, namely red pixels, green pixels, blue pixels, and infrared pixels. For ease of understanding, the following description takes the sensor to be a red, green, blue, infrared (RGBIR) sensor; however, provided that the implementation is practicable, the present invention can also be used to compensate defective pixels of other types of sensors.
The RGBIR sensor comprises a plurality of basic photosensitive cells, and the pixel arrangement of a basic photosensitive cell is, for example, 2 × 2 (i.e., each basic photosensitive cell comprises four pixels) as shown in FIG. 1, or 4 × 4 (i.e., each basic photosensitive cell comprises sixteen pixels) as shown in FIG. 2, wherein red pixels are denoted by R, green pixels by G, blue pixels by B, and infrared pixels by Ir; the same notation is adopted in the other figures. It should be noted that FIG. 1 and FIG. 2 show only some of the pixels, and the pixel arrangements of FIG. 1 and FIG. 2 are exemplary rather than limitations on the implementation of the present invention. In order to determine whether a target pixel of the RGBIR sensor is a defective pixel, the present invention takes a plurality of pixels adjacent to the target pixel as reference pixels according to a sampling range, and determines whether the target pixel is a defective pixel according to the reference pixels, wherein the type of the reference pixels is the same as that of the target pixel (for example, the target pixel and all the reference pixels are red pixels, green pixels, blue pixels, or infrared pixels); however, provided that the implementation result is acceptable, the type of one or more of the reference pixels may differ from the type of the target pixel.
To help understanding, the following description mainly takes an RGBIR sensor whose basic photosensitive cell is of the 2 × 2 type as an example, and takes a 5 × 5 sampling range (i.e., 25 pixels in the sampling range) as an example; however, the above conditions are not limitations of the present invention, and those skilled in the art can understand, from the disclosure of the present invention, how to detect other types of RGBIR sensors (e.g., RGBIR sensors having 4 × 4 basic photosensitive cells) and how to use other sampling ranges (e.g., 3 × 3, 3 × 5, 5 × 3, 5 × 7, 7 × 5, 7 × 7, 7 × 9, 9 × 7, or 9 × 9 sampling ranges). It is noted that once the RGBIR sensor configuration (e.g., a known RGBIR sensor with 2 × 2 cells (hereinafter referred to as a 2 × 2 sensor) or a known RGBIR sensor with 4 × 4 cells (hereinafter referred to as a 4 × 4 sensor)) and the position of a target pixel (e.g., the coordinates of the target pixel with respect to all pixels of the RGBIR sensor, wherein the coordinates are generated according to conventional techniques in the art) are determined, the pixels within the sampling range that serve as reference pixels are determined as well. For example, under the setting that a target pixel is located at the center of a 5 × 5 sampling range and the type of the target pixel is the same as that of the reference pixels: when the sensor is a 2 × 2 sensor, the reference pixels are as shown in FIG. 3 (the reference pixels are denoted by Ref in FIG. 3 and the target pixel by T), wherein the target pixel and the reference pixels are all red pixels, green pixels, blue pixels, or infrared pixels; when the sensor is a 4 × 4 sensor and the target pixel is a blue or red pixel, the reference pixels are as shown in FIG. 4 (the reference pixels are denoted by B/R_REF in FIG. 4 and the target pixel by T); when the sensor is a 4 × 4 sensor and the target pixel is a green pixel, the reference pixels are as shown in FIG. 5 (the reference pixels are denoted by G_REF in FIG. 5 and the target pixel by T); and when the sensor is a 4 × 4 sensor and the target pixel is an infrared pixel, the reference pixels are as shown in FIG. 6 (the reference pixels are denoted by Ir_REF in FIG. 6 and the target pixel by T). As another example, only a portion of the pixels of the same type as the target pixel within a sampling range are selected as reference pixels. As a further example, under the setting that a target pixel is not located at the center of a sampling range (e.g., a 6 × 6 sampling range), the target pixel is one of the pixels closest to the center, and the positions of the reference pixels within the sampling range can likewise be determined according to the sensor configuration and the position of the target pixel.
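Purely as an illustration (not a limitation of the invention), the following sketch shows one way the reference-pixel positions of FIG. 3 could be enumerated for a 2 × 2 sensor and a 5 × 5 sampling range. The function name and the window/period parameters are hypothetical; it relies only on the fact that, in a 2 × 2 cell arrangement, pixels of the same type repeat every two rows and columns.

```python
def sampling_positions_2x2(target_row, target_col, window=5, period=2):
    """Return the (row, col) positions of same-type reference pixels around
    a target pixel, assuming a 2x2-cell RGBIR sensor (cf. FIG. 3)."""
    half = window // 2
    positions = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            if dy == 0 and dx == 0:
                continue  # skip the target pixel itself
            if dy % period == 0 and dx % period == 0:
                positions.append((target_row + dy, target_col + dx))
    return positions

# Example: a target at (10, 10) yields the 8 reference positions of FIG. 3.
print(sampling_positions_2x2(10, 10))
```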
FIG. 7 shows an embodiment of the bad pixel compensation method of the present invention, which can be performed by the bad pixel compensation device of the present invention or its equivalent. The embodiment of FIG. 7 comprises the following steps:
Step S710: determining a configuration of the sensor according to at least one sensor configuration signal. The sensor configuration signal may be provided by the sensor or by another external device, or may be obtained from a setting made by a user on the device that performs the bad pixel compensation method; of course, other known or self-developed ways of providing the sensor configuration signal may also be used in this step.
Step S720: determining a plurality of sampling positions according to the configuration of the sensor and the position of a target pixel. For example, the plurality of sampling positions are the positions of the reference pixels shown in one of FIG. 3 to FIG. 6.
Step S730: obtaining the values of a plurality of reference pixels within a sampling range according to the sampling positions. For example, the values of the reference pixels are the values of the reference pixels shown in one of FIG. 3 to FIG. 6.
Step S740: determining an interval and at least one compensation value according to the values of the reference pixels. An example of this step is described below.
Step S750: determining whether a target pixel input value of the target pixel (e.g., an original pixel value of the target pixel, or a value of the target pixel output by an image processing stage performed before this method) is within the interval; outputting the target pixel input value as the value of the target pixel when the target pixel input value is within the interval; and outputting one of the at least one compensation value as the value of the target pixel when the target pixel input value is outside the interval. An example of this step is described below.
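As an illustrative sketch only of the decision in step S750, assuming (as in the example of FIG. 8 described next) that the interval is bounded by an upper and a lower brightness limit and that the compensation values are a brightness reference level and a darkness reference level; all function and variable names are hypothetical:

```python
def step_s750(target_input, lower_limit, upper_limit, darkness_ref, brightness_ref):
    """Output the target pixel input value when it lies inside the interval;
    otherwise output one of the compensation values (cf. step S830)."""
    if target_input > upper_limit:
        return brightness_ref   # brighter than the interval: treated as a bright defect
    if target_input < lower_limit:
        return darkness_ref     # darker than the interval: treated as a dark defect
    return target_input         # inside the interval: keep the input value
```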
As shown in fig. 8, in an embodiment, the step S740 includes the following steps:
Step S810: determining an upper brightness limit according to the values of the reference pixels. For example, the upper brightness limit comprises a brightness reference level (e.g., the upper brightness limit is the brightness reference level). For example, the values of the reference pixels include a maximum value, a median value, and a minimum value, and the brightness reference level is between the maximum value and the median value (e.g., the brightness reference level is equal to the maximum value or equal to the second largest value of the reference pixels), wherein at least the maximum value and the minimum value are obtained by sorting the reference pixels in ascending or descending order. As another example, the upper brightness limit may comprise the brightness reference level and at least one of the following levels: an edge feature reference level, which is used for adjusting the upper brightness limit and the lower brightness limit when the target pixel is located at an edge, and a bright-area shift level, which is used for adjusting the upper brightness limit according to the brightness of the reference pixels (e.g., the upper brightness limit is equal to the sum of the brightness reference level, the edge feature reference level, and the bright-area shift level). Examples of the edge feature reference level and the bright-area shift level are described below.
Step S820: determining a lower brightness limit according to the values of the reference pixels. For example, the lower brightness limit comprises a darkness reference level (e.g., the lower brightness limit is the darkness reference level). For example, the darkness reference level is between the median value and the minimum value (e.g., the darkness reference level is equal to the minimum value or equal to the second smallest value of the reference pixels). As another example, the lower brightness limit may comprise the darkness reference level and at least one of the following levels: the edge feature reference level and a dark-area shift level (e.g., the lower brightness limit is equal to the darkness reference level minus the edge feature reference level and the dark-area shift level), wherein the dark-area shift level is used for adjusting the lower brightness limit according to the brightness of the reference pixels. An example of the dark-area shift level is described below.
Step S830: determining the interval according to the upper brightness limit and the lower brightness limit. For example, the interval is the interval between the upper brightness limit and the lower brightness limit, and the at least one compensation value includes the brightness reference level and the darkness reference level, so that, in step S750, the brightness reference level is output as the value of the target pixel when the target pixel input value is greater than the upper brightness limit, and the darkness reference level is output as the value of the target pixel when the target pixel input value is less than the lower brightness limit.
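The following sketch is offered only as one possible reading of steps S810 to S830: it assumes the brightness reference level is the second largest reference value, the darkness reference level is the second smallest, and the edge feature reference level and the shift levels are supplied externally (defaulting to zero). These particular choices and the names used are illustrative assumptions, not claimed values.

```python
def interval_from_references(refs, edge_level=0.0, bright_shift=0.0, dark_shift=0.0):
    ordered = sorted(refs)                                              # ascending order
    brightness_ref = ordered[-2] if len(ordered) > 1 else ordered[-1]   # step S810 example
    darkness_ref = ordered[1] if len(ordered) > 1 else ordered[0]       # step S820 example
    upper_limit = brightness_ref + edge_level + bright_shift            # step S810
    lower_limit = darkness_ref - edge_level - dark_shift                # step S820
    # step S830: the interval is [lower_limit, upper_limit]; the compensation
    # values are the darkness and brightness reference levels
    return lower_limit, upper_limit, darkness_ref, brightness_ref

# For refs = [12, 14, 13, 15, 12, 14, 13, 16] and zero edge/shift levels, the
# interval is [12, 15], so a target input of 90 would be replaced by 15 in step S750.
```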
In one embodiment, the edge feature reference level is generated by calculating the reference pixels according to a predetermined edge level algorithm; for example, the edge feature reference level is equal to a coefficient of variation of the reference pixels multiplied by an edge ratio (e.g., a value between 0 and 1), wherein the edge ratio may be a predetermined ratio or a ratio determined by the implementer. In another embodiment, the edge feature reference level is generated by detecting the target pixel according to an edge detection algorithm; for example, the edge detection algorithm uses a Sobel operator to calculate a horizontal gradient approximation and a vertical gradient approximation of the target pixel respectively, for example:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$

wherein $G_x$ is the horizontal (transverse) gradient, $G_y$ is the vertical (longitudinal) gradient, $*$ denotes a two-dimensional convolution operation, and $A$ is a pixel matrix of 3 × 3 size centered on the target pixel; the gradient of the target pixel is then calculated, for example:

$$G = \sqrt{G_x^{\,2} + G_y^{\,2}}$$

and the gradient $G$ is multiplied by the edge ratio to obtain the edge feature reference level.
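As a sketch of the Sobel-based example above, with an illustrative edge ratio of 0.5 and A given as a 3 × 3 list of pixel values centered on the target pixel (both are assumptions for illustration, not claimed values):

```python
import math

# Standard Sobel kernels for the horizontal and vertical gradient approximations
SOBEL_X = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]]

def edge_feature_reference_level(A, edge_ratio=0.5):
    """Gradient magnitude of the 3x3 patch A, scaled by the edge ratio."""
    gx = sum(SOBEL_X[i][j] * A[i][j] for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * A[i][j] for i in range(3) for j in range(3))
    return math.sqrt(gx * gx + gy * gy) * edge_ratio
```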
As shown in fig. 9, in one embodiment, the bright area shift level and the dark area shift level are generated according to at least the following steps:
Step S910: an average of the plurality of reference pixels is calculated.
Step S920: it is determined whether the average is less than a low brightness threshold (e.g., 60, or a value between 15 and 112 if each reference pixel value is between 0 and 255) and/or whether the average is greater than a high brightness threshold (e.g., 180, or a value between 142 and 240 if each reference pixel value is between 0 and 255). In this step, if the determination performed first is true (e.g., the determination of whether the average is less than the low brightness threshold is performed first and its result indicates that the average is indeed less than the low brightness threshold), the other determination (e.g., the determination of whether the average is greater than the high brightness threshold) may be selectively omitted.
Step S930: if the average is less than the low brightness threshold, the bright-area shift level is set equal to a minimum bright-area shift level (e.g., 4, or another value, determined by the implementer, that helps distinguish whether the target pixel is abnormally bright within a dark area), and the dark-area shift level is set equal to a minimum dark-area shift level (e.g., 8, or another value, determined by the implementer, that helps distinguish whether the target pixel is abnormally dark within a dark area).
Step S940: if the average is greater than the high brightness threshold, the bright-area shift level is set equal to a maximum bright-area shift level (e.g., 8, or another value, determined by the implementer, that helps distinguish whether the target pixel is abnormally bright within a bright area), and the dark-area shift level is set equal to a maximum dark-area shift level (e.g., 4, or another value, determined by the implementer, that helps distinguish whether the target pixel is abnormally dark within a bright area).
Step S950: if the average is between the low brightness threshold and the high brightness threshold, the bright-area shift level and the dark-area shift level are calculated according to a predetermined shift level algorithm. For example, the predetermined shift level algorithm sets the bright-area shift level to "the minimum bright-area shift level + (the maximum bright-area shift level - the minimum bright-area shift level) × (the average - the low brightness threshold) ÷ 128", and sets the dark-area shift level to "the minimum dark-area shift level + (the maximum dark-area shift level - the minimum dark-area shift level) × (the average - the low brightness threshold) ÷ 128".
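The following sketch strings steps S910 to S950 together using the example thresholds and shift levels quoted above (60/180 and 4/8); the parameter names and the fixed divisor of 128 simply mirror the example formula in step S950 and are not the only possible choices.

```python
def area_shift_levels(refs, low_thr=60, high_thr=180,
                      min_bright=4, max_bright=8, min_dark=8, max_dark=4):
    avg = sum(refs) / len(refs)            # step S910: average of the references
    if avg < low_thr:                      # steps S920/S930: dark area
        return min_bright, min_dark
    if avg > high_thr:                     # steps S920/S940: bright area
        return max_bright, max_dark
    # step S950: interpolate between the minimum and maximum shift levels
    bright = min_bright + (max_bright - min_bright) * (avg - low_thr) / 128
    dark = min_dark + (max_dark - min_dark) * (avg - low_thr) / 128
    return bright, dark
```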
FIG. 10 shows an embodiment of the bad pixel compensation device of the present invention. The bad pixel compensation device 1000 of FIG. 10 is an image processor (e.g., an image processing IC) capable of performing the bad pixel compensation method of the present invention. The bad pixel compensation device 1000 includes a reference pixel sampling circuit 1010, a calculating circuit 1020, and a determining and compensating circuit 1030. The reference pixel sampling circuit 1010 is used for determining a plurality of sampling positions according to the position of a target pixel, so as to obtain the values of a plurality of reference pixels within a sampling range according to the plurality of sampling positions; for example, the reference pixel sampling circuit 1010 receives the values of all pixels within the sampling range and determines the sampling positions according to the position of the target pixel, thereby obtaining the values of the reference pixels. The calculating circuit 1020 is used for determining an interval and at least one compensation value according to the values of the plurality of reference pixels. The determining and compensating circuit 1030 is used for determining whether the input value of the target pixel is within the interval, and for compensating the input value of the target pixel according to the at least one compensation value when the input value of the target pixel is outside the interval. Those skilled in the art can implement the bad pixel compensation device 1000 with known circuits and techniques according to the present disclosure.
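Only as a behavioural illustration of how the three circuits of FIG. 10 cooperate (the actual device is an image processing circuit; the constructor arguments are the hypothetical helpers sketched earlier and are not claimed components):

```python
class BadPixelCompensator:
    """Software model of device 1000: sampling (1010), calculating (1020),
    determining and compensating (1030)."""
    def __init__(self, sampling_fn, calculating_fn):
        self.sampling_fn = sampling_fn        # plays the role of circuit 1010
        self.calculating_fn = calculating_fn  # plays the role of circuit 1020

    def process(self, image, target_row, target_col):
        refs = [image[r][c] for r, c in self.sampling_fn(target_row, target_col)]
        lower, upper, dark_ref, bright_ref = self.calculating_fn(refs)
        value = image[target_row][target_col]
        # circuit 1030: compensate only when the input value falls outside the interval
        if value > upper:
            return bright_ref
        if value < lower:
            return dark_ref
        return value

# Usage with the earlier sketches:
# BadPixelCompensator(sampling_positions_2x2, interval_from_references).process(img, 10, 10)
```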
Since those skilled in the art can understand the details and variations of the above device embodiment by referring to the above method embodiments (that is, the technical features of the method embodiments can reasonably be applied to the device embodiment), repeated and redundant description is omitted here without affecting the disclosure requirement and the enablement of the device embodiment.
It should be noted that, where the implementation is practicable, a person skilled in the art may selectively implement some or all of the technical features of any one of the foregoing embodiments, or selectively implement a combination of some or all of the technical features of the foregoing embodiments, thereby increasing the flexibility of implementing the present invention. It should also be noted that the values and algorithms mentioned in the above embodiments are exemplary, and those skilled in the art can set suitable values and select or develop suitable algorithms according to the present disclosure and their requirements.
In summary, the present invention can generate main information (e.g., the brightness reference level and the darkness reference level) to determine whether a target pixel of an RGBIR sensor is a defective pixel according to the main information, and compensate the defective pixel accordingly; the present invention can also selectively generate auxiliary information (e.g., the edge feature reference level, the bright-area shift level, and the dark-area shift level) to assist in determining whether the target pixel is a defective pixel.
Although the embodiments of the present invention have been described above, the embodiments are not intended to limit the present invention, and those skilled in the art can make variations on the technical features of the present invention according to the explicit or implicit contents of the present invention, and all such variations may fall within the scope of the patent protection sought by the present invention.