HK1170880B - Interpolation for four-channel color filter array - Google Patents
Interpolation for four-channel color filter array

- Publication number: HK1170880B (application HK12111537.3A)
- Authority: HK (Hong Kong)
Description
Technical Field
The present invention relates to producing full color images with improved spatial resolution from color filter array images having color channels and panchromatic channels.
Background
Single sensor digital cameras use a Color Filter Array (CFA) to capture full color information from a single two-dimensional array of photosensitive pixels. The CFA consists of an array of color filters that filter the light being detected by each pixel. Thus, each pixel receives light from only one color, or in the case of a full color or "clear" filter, all colors. To reproduce a full-color image from a CFA image, three color values must be generated at each pixel location. This is achieved by interpolating missing color values from neighboring pixel values.
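As a concrete illustration (not taken from the patent itself), the simplest form of such interpolation averages the nearest samples of the missing color. The sketch below estimates a missing green value at a non-green site of a Bayer-style mosaic; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def bilinear_green_at(mosaic: np.ndarray, row: int, col: int) -> float:
    """Average the four horizontally/vertically adjacent samples.

    Assumes a Bayer-style layout in which the pixels directly above,
    below, left and right of (row, col) all carry green samples.
    """
    return (mosaic[row - 1, col] + mosaic[row + 1, col] +
            mosaic[row, col - 1] + mosaic[row, col + 1]) / 4.0

# Toy mosaic of raw sensor values.
mosaic = np.arange(25, dtype=float).reshape(5, 5)
print(bilinear_green_at(mosaic, 2, 2))  # (7 + 17 + 11 + 13) / 4 = 12.0
```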
The best known CFA pattern uses three color channels as described by Bayer (U.S. Patent No. 3,971,065) and shown in fig. 2. A Bayer CFA has three color channels, which enables full color reproduction. However, the exact spectral responsivities ("colors") of the three channels represent a compromise. To improve color fidelity and widen the range of colors that can be captured by a CFA (i.e., the gamut), it is desirable to make the spectral responsivities more selective ("narrower"). This has the side effect of reducing the total amount of light reaching the pixel and thus reducing its sensitivity to light. As a result, the pixel values become more susceptible to noise (e.g., thermal noise) from non-imaging sources. One solution to the noise problem is to make the CFA spectral responsivities less selective ("wider") to increase the total amount of light reaching the pixel. However, this comes at the cost of reduced color fidelity.
A solution to this three-channel CFA limitation is to use a four-channel CFA consisting of three colors with "narrow" spectral sensitivity and one channel with "wide" spectral sensitivity. This "wide" channel is panchromatic or "clear", sensitive to the full visible spectrum. The three "narrow-band" color channels produce an image with higher color fidelity but lower spatial resolution, while the fourth "wide-band" panchromatic channel produces an image with lower noise and higher spatial resolution. The high-color-fidelity, low-spatial-resolution image and the low-noise, high-spatial-resolution image are then merged into a final image with high color fidelity, low noise, and high spatial resolution.
In order to produce a high spatial resolution panchromatic image while maintaining high color fidelity from the color pixels, the number and arrangement of panchromatic pixels within the CFA, and the corresponding interpolation algorithm, must be properly selected. There are a variety of examples in the prior art that have one or more disadvantages in this regard. Frame (U.S. Patent No. 7,012,643) teaches a CFA as shown in fig. 19 having only a single red (R), green (G), and blue (B) pixel within a 9 x 9 square of panchromatic (P) pixels. A problem with Frame is that the resulting color spatial resolution is too low to reproduce anything but the lowest-frequency color detail in the image.
Yamagami et al. (U.S. Patent No. 5,323,233) describe two CFA patterns, shown in figs. 20A and 20B, with equal numbers of panchromatic and color pixels, thereby avoiding the disadvantage of Frame. Yamagami et al. go on to teach simple bilinear interpolation as the means for interpolating missing panchromatic values. The use of purely linear interpolation methods, such as bilinear interpolation, strongly limits the spatial resolution of the interpolated image. Nonlinear methods, such as those described by Adams et al. (U.S. Patent No. 5,506,619), produce interpolated images of higher spatial resolution, provided the CFA pattern permits their use. Fig. 21A illustrates the neighborhood used by Adams et al. for green (G) pixels, in which green pixels alternate with color (C) pixels in both the horizontal and vertical directions about the central color pixel; this arrangement provides high spatial-frequency resolution in the three-channel system shown in fig. 2. It is important to note that these color pixels all have the same color, e.g., all red pixels. Fig. 21B shows a similar pattern using panchromatic (P) pixels instead of green pixels. It should be noted that for a four-channel system, it is not possible to arrange all four channels (R, G, B, and P) so that the pattern of fig. 21B occurs at every colored (R, G, B) pixel location on the sensor. Thus, any possible arrangement involves some compromise in this regard. With regard to Yamagami et al., fig. 20A has green and panchromatic pixels arranged as in fig. 21B, but the red and blue pixels are not so arranged. After fig. 21B, an arrangement such as that of fig. 21C is preferable, but fig. 20A has no such arrangement for either the red or the blue pixels. Fig. 20B has neither the pattern of fig. 21B nor that of fig. 21C for any color pixel.
Tanaka et al. (U.S. Patent No. 4,437,112) describe several CFA patterns, the one most relevant to this discussion being that of fig. 22, in which cyan (C), yellow (Y), green (G), and panchromatic (P) pixels are arranged such that the green pixels are surrounded by the neighborhood shown in fig. 21C. However, the yellow and cyan pixels do not conform to the pattern of fig. 21B or 21C. The same difficulty exists with the other patterns taught by Tanaka et al.
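To make the contrast with bilinear interpolation concrete, the following sketch shows the general flavor of gradient-adaptive (nonlinear) interpolation: averaging along the direction of least gradient so as not to blur across an edge. It is a simplified illustration of the idea, not the specific algorithm of Adams et al.; the function name and array layout are assumptions.

```python
import numpy as np

def adaptive_pan_at(cfa: np.ndarray, r: int, c: int) -> float:
    """Estimate a missing panchromatic value at (r, c), whose horizontal
    and vertical neighbours are panchromatic, by averaging along the
    direction of least gradient (illustrative sketch only)."""
    h_grad = abs(cfa[r, c - 1] - cfa[r, c + 1])
    v_grad = abs(cfa[r - 1, c] - cfa[r + 1, c])
    if h_grad < v_grad:
        # Detail varies least horizontally: average the horizontal pair.
        return (cfa[r, c - 1] + cfa[r, c + 1]) / 2.0
    if v_grad < h_grad:
        # Detail varies least vertically: average the vertical pair.
        return (cfa[r - 1, c] + cfa[r + 1, c]) / 2.0
    # No preferred direction: fall back to the plain 4-neighbour average.
    return (cfa[r, c - 1] + cfa[r, c + 1] +
            cfa[r - 1, c] + cfa[r + 1, c]) / 4.0

# A vertical edge: bilinear averaging would mix 10s and 90s, while the
# adaptive estimate averages along the edge instead.
cfa = np.array([[10.0, 10.0, 90.0],
                [10.0,  0.0, 90.0],
                [10.0, 10.0, 90.0]])
print(adaptive_pan_at(cfa, 1, 1))  # 10.0
```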
Hamilton et al. (U.S. Patent Application No. 2007/0024879) teach a large number of CFA patterns, two of which are shown in FIGS. 23A and 23B. A disadvantage of these patterns, as well as of all the other patterns disclosed by Hamilton et al., is the lack of the pixel arrangements of figs. 21B and 21C.
Kijima et al. (U.S. Patent Application No. 2007/0177236) describe a large number of CFA patterns, the most relevant of which is shown in fig. 24. While its two rows of panchromatic pixels provide the arrangement of fig. 21C in the vertical direction, fig. 24 has no corresponding horizontal arrangement of side-by-side panchromatic values.
Therefore, there is a need for a four-channel CFA pattern having three narrow-band color channels and one wide-band panchromatic channel with sufficient color pixels to provide sufficient color spatial resolution and arranged in a manner that allows for efficient nonlinear interpolation of missing panchromatic values.
Disclosure of Invention
In accordance with the present invention, there is provided a method of forming a full-color output image from a color filter array image having a plurality of color pixels (with at least two different color responses) and panchromatic pixels, the method using one or more processors to perform the steps of:
a) capturing a color filter array image using an image sensor comprising a two-dimensional array of photosensitive pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit having at least eight pixels and having at least two rows and two columns, wherein for a first color response, color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses, there is a repeating pattern of color pixels having only a given color response and at least one row, column or diagonal of panchromatic pixels;
b) computing an interpolated panchromatic image from the color filter array image;
c) computing an interpolated color image from the color filter array image; and
d) forming a full-color output image from the interpolated panchromatic image and the interpolated color image.
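A highly simplified, runnable sketch of steps (b) through (d) might look as follows. The one-pass neighbour-averaging interpolator and all function names here are illustrative placeholders, not the adaptive methods the invention actually teaches.

```python
import numpy as np

def interp_missing(values: np.ndarray, known: np.ndarray) -> np.ndarray:
    """Fill unknown sites with the mean of their known 4-neighbours.
    A single relaxation pass; real implementations iterate or use
    direction-adaptive kernels."""
    v = np.pad(np.where(known, values, 0.0), 1)
    m = np.pad(known.astype(float), 1)
    s = v[:-2, 1:-1] + v[2:, 1:-1] + v[1:-1, :-2] + v[1:-1, 2:]
    n = m[:-2, 1:-1] + m[2:, 1:-1] + m[1:-1, :-2] + m[1:-1, 2:]
    out = values.astype(float).copy()
    out[~known] = s[~known] / np.maximum(n[~known], 1.0)
    return out

def form_full_color(cfa, pan_mask, color_masks):
    pan = interp_missing(cfa, pan_mask)                # step (b): pan image
    channels = {}
    for name, mask in color_masks.items():             # step (c): color image
        diff = interp_missing(cfa - pan, mask)         # via colour differences
        channels[name] = pan + diff                    # step (d): fuse
    return pan, channels

# Toy check on a flat scene with panchromatic pixels on a checkerboard.
cfa = np.full((4, 4), 5.0)
rr, cc = np.indices(cfa.shape)
pan_mask = (rr + cc) % 2 == 1
green_mask = ((rr + cc) % 2 == 0) & (rr % 2 == 0)  # illustrative green sites
pan, channels = form_full_color(cfa, pan_mask, {"G": green_mask})
print(pan[0, 0], channels["G"][1, 1])  # 5.0 5.0
```

On a constant input every interpolated value reproduces the constant, which is a quick sanity check that the crude interpolator is at least consistent.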
An advantage of the present invention is that the color spatial resolution of an image is improved without increasing the percentage of color pixels relative to panchromatic pixels within the sensor.
Another advantage of the present invention is that color noise in an image is reduced without increasing the spectral bandwidth of the color pixels, which would bring a corresponding reduction in the color fidelity of the image.
These and other aspects, objects, features and advantages of the present invention will be more clearly understood and appreciated from a review of the following detailed description of the preferred embodiments and appended claims, and by reference to the accompanying drawings.
Drawings
FIG. 1 is a block diagram of a digital camera for implementing the present invention;
FIG. 2 is a minimal repeating unit from the prior art;
FIG. 3 is a minimal repeating unit for use in a preferred embodiment of the present invention;
FIGS. 4A and 4B are minimal repeating units for use in alternative embodiments of the present invention;
FIGS. 5A and 5B are minimal repeating units for use in alternative embodiments of the present invention;
FIG. 6 is an overview of an image processing chain for a preferred embodiment of the present invention;
FIG. 7 is a neighborhood of pixels for interpolating panchromatic image values;
FIG. 8 is a neighborhood of pixels for interpolating panchromatic image values;
FIGS. 9A and 9B are pixel neighborhoods for interpolating panchromatic image values;
FIG. 10 is a neighborhood of pixels for interpolating panchromatic image values;
FIGS. 11A and 11B are neighborhoods of pixels for interpolating panchromatic image values;
FIGS. 12A and 12B are neighborhoods of pixels for interpolating panchromatic image values;
FIG. 13 is a neighborhood of pixels for interpolating color difference values;
FIG. 14 is a neighborhood of pixels for interpolating color difference values;
FIGS. 15A and 15B are pixel neighborhoods for interpolating color difference values;
FIGS. 16A and 16B are pixel neighborhoods for interpolating color difference values;
FIG. 17 is a block diagram showing a detailed view of a fused image box for a preferred embodiment of the present invention;
FIG. 18 is a block diagram showing a detailed view of a fused image box for an alternate embodiment of the present invention;
FIG. 19 is a minimal repeating unit from the prior art;
FIGS. 20A and 20B are minimal repeating units from the prior art;
FIGS. 21A, 21B and 21C are pixel neighborhoods for interpolating green and panchromatic image values from the prior art;
FIG. 22 is a minimal repeating unit from the prior art;
FIGS. 23A and 23B are minimal repeating units from the prior art;
FIG. 24 is a minimal repeating unit from the prior art; and
FIG. 25 is a minimal repeating unit for an alternate embodiment of the present invention.
Detailed Description
In the following description, preferred embodiments of the present invention will be described in terms that would be readily implemented as software programs. Those skilled in the art will readily recognize that the equivalent of such software may also be implemented in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the system and method in accordance with the present invention. Such algorithms and systems, as well as other aspects of hardware or software for generating and otherwise processing image signals related thereto, not specifically shown or described herein, may be selected from such systems, algorithms, components, and elements known in the art. In view of the system in accordance with the present invention as described in the following material, software not specifically shown, suggested, or described herein that is useful for the practice of the present invention is conventional and within the ordinary skill of such art.
Still further, as used herein, a computer program for performing the methods of the present disclosure may be stored in a computer readable storage medium, which may include, for example: magnetic storage media such as a magnetic disk (e.g., a hard drive or floppy disk) or magnetic tape; optical storage media such as optical disks, optical tapes, or machine-readable bar codes; solid state electronic storage devices, such as Random Access Memory (RAM) or Read Only Memory (ROM); or any other physical device or medium employed to store a computer program.
Because digital cameras using imaging devices and related circuitry for signal capture and correction and for exposure control are well known, the present description will be directed in particular to elements forming part of, or cooperating more directly with, the method and apparatus in accordance with the present invention. Elements not specifically shown or described herein are selected from those known in the art. Certain aspects of the embodiments to be described are provided in software. In view of the system shown and described in accordance with the invention in the following material, software not specifically shown, described, or suggested herein that is useful for implementation of the invention is conventional and within the ordinary skill of such technologies.
Turning now to FIG. 1, a block diagram of an image capture device embodying the present invention is shown. In this example, the image capture device is shown as a digital camera. However, although a digital camera will now be explained, the present invention is certainly also applicable to other types of image capturing devices. In the disclosed camera, light from a subject scene 10 is input to an imaging stage 11, where the light is focused by a lens 12 to form an image on a solid state color filter array image sensor 20. The color filter array image sensor 20 converts incident light into an electrical signal for each picture element (pixel). The color filter array image sensor 20 of the preferred embodiment is of the Charge Coupled Device (CCD) type or the Active Pixel Sensor (APS) type. (APS devices are often referred to as CMOS sensors because they can be fabricated in a complementary metal oxide semiconductor process). Other types of image sensors having two-dimensional pixel arrays may also be used provided they use the patterns of the present invention. The color filter array image sensor 20 used in the present invention includes a two-dimensional array of color and panchromatic pixels, as will become clear later in this specification after the description of fig. 1.
The amount of light reaching the color filter array image sensor 20 is regulated by a variable aperture iris block 14 and a Neutral Density (ND) filter block 13, the ND filter block 13 including one or more ND filters interposed in the optical path. The time that the shutter 18 is open also adjusts the overall light level. The exposure controller 40 responds to the amount of light available in the scene as metered by the brightness sensor block 16 and controls all three of these regulating functions.
This description of a particular camera configuration will be familiar to those skilled in the art, and many variations and additional features will be apparent. For example, an auto-focus system may be added, or the lens may be removable or interchangeable. It will be understood that the present invention is applicable to any type of digital camera where similar functionality is provided by alternative components. For example, the digital camera may be a relatively simple point-and-shoot digital camera, in which the shutter 18 is a relatively simple movable blade shutter or the like, rather than a more complex focal plane arrangement. The invention may also be practiced using imaging components that are included in non-camera devices such as mobile phones and automobiles.
The analog signals from the color filter array image sensor 20 are processed by an analog signal processor 22 and applied to an analog-to-digital (A/D) converter 24. Timing generator 26 generates various timing signals to select rows and pixels and synchronizes the operation of analog signal processor 22 and a/D converter 24. Image sensor stage 28 includes color filter array image sensor 20, analog signal processor 22, A/D converter 24, and timing generator 26. The components of the image sensor stage 28 may be separately fabricated integrated circuits, or they may be fabricated as a single integrated circuit, as is commonly done with CMOS image sensors. The resulting stream of digital pixel values from the a/D converter 24 is stored in a Digital Signal Processor (DSP) memory 32 associated with a Digital Signal Processor (DSP) 36.
DSP36 is one of three processors or controllers in this embodiment, the others being system controller 50 and exposure controller 40. While this division of camera function control among multiple controllers and processors is typical, these controllers or processors can be combined in various ways without affecting the functional operation of the camera and the application of the present invention. These controllers or processors may include one or more digital signal processor devices, microcontrollers, programmable logic devices, or other digital logic circuitry. While a combination of such controllers or processors has been described, it is to be understood that one controller or processor may be designated to perform all of the required functions. All such variations may perform the same function and are within the scope of the invention, and the term "processing stage" will be used as needed to encompass all such functionality within one phrase, such as in processing stage 38 in FIG. 1.
In the illustrated embodiment, DSP36 manipulates the digital image data in DSP memory 32 according to a software program permanently stored in program memory 54 and copied to DSP memory 32 for execution during image capture. DSP36 executes the software necessary to practice the image processing shown in fig. 18. The DSP memory 32 may be any type of random access memory, such as SDRAM. A bus 30, which includes paths for address and data signals, connects the DSP36 to its associated DSP memory 32, a/D converter 24, and other associated devices.
The system controller 50 controls the overall operation of the camera based on a software program stored in a program memory 54, which program memory 54 may comprise flash EEPROM or other non-volatile memory. This memory may also be used to store image sensor calibration data, user setting selections, and other data that must be saved when the camera is turned off. The system controller 50 controls the image capture sequence by directing the exposure controller 40 to operate the lens 12, ND filter block 13, iris block 14, and shutter 18 as previously described, directing the timing generator 26 to operate the color filter array image sensor 20 and associated elements, and directing the DSP36 to process the captured image data. After the image is captured and processed, the final image file stored in the DSP memory 32 is transferred to a host computer via the host interface 57, stored on the removable memory card 64 or other storage device, or displayed for the user on the image display 88.
The system controller bus 52 includes paths for address, data, and control signals, and connects the system controller 50 to the DSP36, program memory 54, system memory 56, host interface 57, memory card interface 60, and other related devices. The host interface 57 provides a high speed connection to a Personal Computer (PC) or other host computer for transferring image data for display, storage, manipulation, or printing. This interface may be an IEEE1394 or USB2.0 serial interface, or any other suitable digital interface. The memory card 64 is typically a Compact Flash (CF) card that is inserted into the memory card socket 62 and connected to the system controller 50 via the memory card interface 60. Other types of storage devices that may be utilized include, but are not limited to, PC cards, multimedia cards (MMC), or Secure Digital (SD) cards.
The processed images are copied to a display buffer in system memory 56 and read out continuously via video encoder 80 to generate a video signal. This signal is output directly from the camera for display on an external monitor or processed by the display controller 82 and presented on the image display 88. This display is typically an active matrix color Liquid Crystal Display (LCD), but other types of displays may be used.
The user interface 68 (including all or any combination of the viewfinder display 70, the exposure display 72, the status display 76, the image display 88, and the user input 74) is controlled by a combination of software programs executing on the exposure controller 40 and the system controller 50. User input 74 typically comprises some combination of buttons, rocker switches, joysticks, rotary dials, or touch screens. The exposure controller 40 operates light metering, exposure mode, auto focus, and other exposure functions. The system controller 50 manages a Graphical User Interface (GUI) presented on one or more of the displays, such as on the image display 88. The GUI typically contains menus for making various option selections and for review mode for review of captured images.
The exposure controller 40 accepts user input selecting an exposure mode, lens aperture, exposure time (shutter speed), and exposure index or ISO speed rating, and directs the lens 12 and shutter 18 accordingly for subsequent capture. The brightness sensor block 16 is used to measure the brightness of the scene and provide an exposure metering function for reference by the user in manually setting the ISO speed level, aperture and shutter speed. In this case, when the user changes one or more settings, a light gauge indicator presented on the viewfinder display 70 tells the user how much the image will be overexposed or underexposed. In the automatic exposure mode, the user changes one setting and the exposure controller 40 automatically changes another setting to maintain the correct exposure, e.g., for a given ISO speed level at which the user decreases the lens aperture, the exposure controller 40 automatically increases the exposure time to maintain the same total exposure.
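The reciprocity in the last example can be made concrete: total exposure is proportional to t / N^2 for exposure time t and f-number N, so stopping the aperture down by one stop (multiplying N by the square root of 2) doubles the required time. The helper below is an illustrative calculation, not part of the patent.

```python
from math import sqrt

def compensated_time(t1: float, n1: float, n2: float) -> float:
    """Exposure time t2 that keeps total exposure t / N**2 constant
    when the f-number changes from n1 to n2 (reciprocity sketch)."""
    return t1 * (n2 / n1) ** 2

# Stopping down one stop (f/4 -> f/5.6) doubles the exposure time.
print(round(compensated_time(1 / 125, 4.0, 4.0 * sqrt(2)), 6))  # 0.016
```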
The ISO speed class is an important attribute of a digital still camera. The exposure time, lens aperture, lens transmittance, level and spectral distribution of scene illumination, and scene reflectance determine the exposure level of a digital still camera. When an image from a digital still camera is obtained with underexposure, proper tone reproduction can typically be maintained by increasing electronic or digital gain, but the resulting image will often contain an unacceptable amount of noise. As the exposure increases, the gain decreases, and the image noise can normally be reduced to an acceptable level. If the exposure is increased excessively, the resulting signal in bright areas of the image may exceed the maximum signal capability of the image sensor or camera signal processing. This can cause image highlights to be clipped into uniformly bright areas, or to "bloom" into surrounding areas of the image. Therefore, it is important to guide the user in setting an appropriate exposure, and the ISO speed class is intended to serve as this guide. To be readily understood by photographers, the ISO speed class of a digital still camera should be directly related to the ISO speed class of a photographic film camera. For example, if a digital still camera has an ISO speed class of ISO 200, then the same exposure time and aperture should be appropriate for an ISO 200 rated film/process system.
The ISO speed rating is intended to be consistent with the film ISO speed rating. However, there are differences between electronic imaging systems and film-based imaging systems that prevent exact equivalence. Digital still cameras may include variable gain and may provide digital processing after the image data has been captured, enabling tone reproduction over a range of camera exposures. It is therefore possible for a digital still camera to have a range of speed classes. This range is defined as the ISO speed latitude. To prevent confusion, a single value is designated as the inherent ISO speed class, with the upper and lower limits of the ISO speed latitude indicating the speed range, i.e., a range including effective speed classes that differ from the inherent ISO speed class. In view of this, the inherent ISO speed is a value calculated from the exposure provided at the focal plane of the digital still camera to produce specified camera output signal characteristics. The inherent speed is typically the exposure index value that produces peak image quality for a given camera system for normal scenes, where the exposure index is a value inversely proportional to the exposure provided to the image sensor.
The foregoing description of a digital camera will be familiar to those skilled in the art. It will be apparent that there are many variations of this embodiment that are possible and selected to reduce cost, add features, or improve the performance of the camera. The following description will disclose in detail the operation of this camera for capturing images in accordance with the present invention. Although this description refers to a digital camera, it will be understood that the present invention is applicable for use with any type of image capture device having an image sensor with color and panchromatic pixels.
The color filter array image sensor 20 shown in fig. 1 typically comprises a two-dimensional array of light-sensitive pixels fabricated on a silicon substrate that provides a way to convert incoming light at each pixel into a measured electrical signal. When the color filter array image sensor 20 is exposed to light, free electrons are generated and captured within the electronic structure at each pixel. Capturing these free electrons for a period of time and then measuring the number of electrons captured, or measuring the rate at which the free electrons are generated, can measure the light level at each pixel. In the former case, the accumulated charge is shifted out of the pixel array to a charge-to-voltage measurement circuit (as in a charge-coupled device (CCD)), or the area near each pixel may contain elements of the charge-to-voltage measurement circuit (as in an active pixel sensor (APS or CMOS sensor)).
Whenever reference is generally made to an image sensor in the following description, it is understood to represent the color filter array image sensor 20 from fig. 1. It is further understood that all examples of the image sensor architectures and pixel patterns of the present invention disclosed in this specification, and their equivalents, are used for the color filter array image sensor 20.
In the context of an image sensor, a pixel (a contraction of "picture element") refers to a discrete light-sensing area and the charge-shifting or charge-measurement circuitry associated with that area. In the context of a digital color image, the term pixel generally refers to a particular location in the image having an associated color value.
Fig. 2 is an example of the minimal repeating unit of the well-known color filter array pattern described by Bayer in U.S. Patent No. 3,971,065. The minimal repeating unit repeats over the surface of the color filter array sensor 20 (fig. 1), producing a red, green, or blue pixel at each pixel location. The data generated by the color filter array sensor 20 (fig. 1) having the color filter array pattern of fig. 2 can be used to generate a full color image in many ways known to those skilled in the art. One example is described in U.S. Patent No. 5,506,619 to Adams et al.
Fig. 3 is a minimal repeating unit for use in a preferred embodiment of the present invention. The minimal repeating unit is a 4 x 4 square array of pixels in which green pixels alternate horizontally and vertically with panchromatic pixels, and red and blue pixels each have three adjacent panchromatic pixels in each of four directions (left, right, above and below). This minimal repeating unit of fig. 3 repeats over the surface of the color filter array sensor 20 (fig. 1), producing a red pixel, a green pixel, a blue pixel, or a panchromatic pixel at each pixel location. Thus, the panchromatic pixels are arranged in a checkerboard pattern on the surface of the sensor. The color pixels are also arranged in a checkerboard pattern on the surface of the sensor.
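One way to write down a 4 x 4 minimal repeating unit with these properties is sketched below (the exact placement of the red and blue pixels within the unit is an illustrative assumption). Tiling it and checking the parity of each site confirms the checkerboard claim.

```python
import numpy as np

# A possible 4 x 4 minimal repeating unit: green on the main diagonal,
# panchromatic pixels at every odd-parity site (illustrative layout).
MRU = np.array([["G", "P", "R", "P"],
                ["P", "G", "P", "R"],
                ["B", "P", "G", "P"],
                ["P", "B", "P", "G"]])

def tile_mru(mru: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Repeat the minimal repeating unit to cover a rows x cols sensor."""
    reps = (-(-rows // mru.shape[0]), -(-cols // mru.shape[1]))
    return np.tile(mru, reps)[:rows, :cols]

sensor = tile_mru(MRU, 8, 8)
rr, cc = np.indices(sensor.shape)
# Panchromatic pixels fall exactly on the odd-parity checkerboard sites,
# so the colour pixels occupy the complementary checkerboard.
print(bool(np.all((sensor == "P") == ((rr + cc) % 2 == 1))))  # True
```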
FIG. 4A shows a minimal repeating unit for an alternative embodiment of the present invention. The minimal repeating unit is a 2 x 4 rectangular array of pixels with green pixels alternating horizontally and vertically with panchromatic pixels and red and blue pixels alternating vertically with panchromatic pixels. This arrangement may be transposed to obtain the pattern of fig. 4B, which shows a 4 x 2 rectangular pixel array with green pixels alternating horizontally and vertically with panchromatic pixels, and red and blue pixels alternating horizontally with panchromatic pixels. The minimal repeating unit of fig. 4A or 4B is tiled over the surface of the color filter array sensor 20 (fig. 1), producing a red pixel, a green pixel, a blue pixel, or a panchromatic pixel at each pixel location. Thus, the panchromatic pixels are arranged in a checkerboard pattern on the surface of the sensor. The color pixels are also arranged in a checkerboard pattern on the surface of the sensor.
Fig. 5A shows a 4 x 4 minimal repeating unit for another alternative embodiment of the present invention. This arrangement is similar to that shown in fig. 3, except that the color pixels alternate with the panchromatic pixels in the diagonal direction rather than the horizontal and vertical directions. In particular, it can be seen that the red, green and blue pixels alternate diagonally with the panchromatic pixels in two diagonal directions. (Note that the full pattern along the diagonal can be best visualized by tiling the minimal repeating unit to see how the lines of pixels wrap from one side of the minimal repeating unit to the other). In this arrangement, the columns of color pixels can be seen to alternate with the columns of panchromatic pixels. This arrangement can be transposed to obtain the pattern of fig. 5B, which is a 4 × 4 square pixel array in which the red, green, and blue pixels are diagonally alternated with the panchromatic pixels in two diagonal directions. In this case, the rows of color pixels alternate with the rows of panchromatic pixels.
The minimal repeating unit of fig. 5A or 5B is tiled over the surface of the color filter array sensor 20 (fig. 1), producing a red pixel, a green pixel, a blue pixel, or a panchromatic pixel at each pixel location. Thus, the panchromatic pixels are arranged in alternating rows or columns on the surface of the sensor. The color pixels are also arranged in alternating rows or columns on the surface of the sensor. This may be advantageous for sensor designs where there may be a small difference in gain for even and odd rows (or columns) of sensors. Having all pixels of a given type on even or odd rows can reduce artifacts that sometimes occur during the CFA interpolation process due to alternating gain values.
The present invention may be generalized to CFA patterns of other sizes and arrangements than those shown in figs. 3, 4A, 4B, 5A, and 5B. In each case, the pixels are arranged in a rectangular minimal repeating unit having at least eight pixels and having at least two rows and two columns, wherein for a first color response, color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses, there is a repeating pattern of color pixels having only the given color response and at least one row, column, or diagonal of panchromatic pixels.
Another way of describing a CFA pattern according to the present invention is to arrange the pixels in a repeating pattern having a rectangular minimal repeating unit with at least eight pixels and at least two rows and two columns, wherein color pixels for at least one color response alternate with panchromatic pixels in at least two directions, and wherein color pixels for the other color responses either alternate with panchromatic pixels in at least one direction or have at least two adjacent panchromatic pixels on both sides in at least two directions.
Fig. 25 shows another example of a minimal repeating unit, with a 2 × 8 rectangular pixel array, that meets these criteria. In this case, the green pixels alternate horizontally and vertically with the panchromatic pixels, while the red and blue pixels alternate vertically with the panchromatic pixels and are flanked horizontally by three panchromatic pixels on each side.
A desirable characteristic of CFA patterns that meet the above criteria is that each color pixel is surrounded by four panchromatic pixels (horizontally/vertically or diagonally). Thus, highly accurate interpolated panchromatic values can readily be determined at the location of each color pixel by interpolating between the surrounding panchromatic pixels. Further, color pixels for at least one color response alternate with panchromatic pixels in at least two directions. Thus, the color pixels for at least one color response (e.g., green) will be arranged on a regular grid, enabling easy interpolation of the corresponding color differences. Color pixels for the other color responses (e.g., red and blue) will also appear on a regular grid, but the repetition period may be greater in one or both directions than for the first color response. Larger periods will be associated with correspondingly larger interpolation errors for the interpolated color differences. However, these color responses are less visually important, so that any artifacts will be less objectionable.
The color pixels in the example CFA patterns shown in fig. 3, 4A, 4B, 5A, 5B, and 25 are red, green, and blue. Those skilled in the art will appreciate that other types of color pixels may also be used in accordance with the present invention. For example, in an alternative embodiment of the present invention, the color pixels may be cyan, magenta, and yellow. In another embodiment of the present invention, the color pixels may be cyan, yellow, and green. In yet another embodiment of the present invention, the color pixels may be cyan, magenta, yellow, and green. Many other types and combinations of color pixels may also be used.
FIG. 6 is a high level diagram of an algorithm for producing a full color output image from data produced from minimal repeating units, such as those shown in FIG. 3, FIG. 4A, FIG. 4B, FIG. 5A, FIG. 5B, or FIG. 25, in accordance with a preferred embodiment of the present invention. The image sensor 20 (fig. 1) produces a color filter array image 100, in which each pixel location is a red, green, blue, or panchromatic pixel, as determined by a minimal repeating unit such as those shown in figs. 3, 4A, 4B, 5A, 5B, or 25. An interpolate panchromatic image block 102 produces an interpolated panchromatic image 104 from the color filter array image 100. A generate color differences block 108 generates color difference values 110 from the color filter array image 100 and the interpolated panchromatic image 104. An interpolate color difference image block 112 generates an interpolated color difference image 114 from the color difference values 110. A generate interpolated color image block 106 generates an interpolated color image 120 from the interpolated panchromatic image 104 and the interpolated color difference image 114. Finally, a fuse images block 118 generates the full color output image 116 from the interpolated panchromatic image 104 and the interpolated color image 120.
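The dataflow just described can be sketched as follows. The function bodies are deliberately crude placeholders (global-mean fills and a crude high-pass) standing in for the adaptive interpolation described below; only the structure of the pipeline, with block numbers noted in comments, is taken from the text.

```python
import numpy as np

def interpolate_panchromatic(cfa, pan_mask):
    # Block 102: fill panchromatic values at color-pixel locations.
    # Placeholder: use the mean of the measured panchromatic pixels.
    return np.where(pan_mask, cfa, cfa[pan_mask].mean())

def color_differences(cfa, pan, color_mask):
    # Block 108: D = color value - panchromatic value at color pixels.
    return np.where(color_mask, cfa - pan, np.nan)

def interpolate_differences(diff):
    # Block 112: fill missing color differences (placeholder mean fill).
    return np.where(np.isnan(diff), np.nanmean(diff), diff)

def interpolated_color(pan, diff_full):
    # Block 106: color = panchromatic + interpolated color difference.
    return pan + diff_full

def fuse(pan, color):
    # Block 118: add high-frequency panchromatic detail to the color.
    return color + (pan - pan.mean())   # crude high-pass placeholder

rng = np.random.default_rng(1)
cfa = rng.uniform(size=(4, 4))                            # toy CFA image 100
pan_mask = (np.indices(cfa.shape).sum(axis=0) % 2) == 0   # checkerboard layout
pan = interpolate_panchromatic(cfa, pan_mask)             # image 104
diff = color_differences(cfa, pan, ~pan_mask)             # values 110
diff_full = interpolate_differences(diff)                 # image 114
color = interpolated_color(pan, diff_full)                # image 120
output = fuse(pan, color)                                 # image 116
```

A real implementation would carry red, green, and blue difference planes separately; a single color plane is used here to keep the skeleton short.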
Each of the steps of the method shown in fig. 6 will now be described in more detail. Fig. 7 is a detailed diagram of the neighborhood of pixels used by the interpolate panchromatic image block 102 (fig. 6) to determine interpolated panchromatic pixel values at the location of a green pixel in the CFA pattern shown in fig. 3. In fig. 7, C2, C5, C7, C9, and CC refer to green pixel values from the color filter array image 100 (fig. 6), and P1, P3, P4, P6, P8, PA, PB, and PD refer to panchromatic pixel values from the same image. To generate the interpolated panchromatic value P'7, the following calculation is performed.
h=2|P6-P8|+α|C5-2C7+C9|
v=2|P3-PB|+α|C2-2C7+CC|
where α is a constant, and h and v are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'7. These calculations are repeated by the interpolate panchromatic image block 102 (fig. 6) at each green pixel location in the color filter array image 100 (fig. 6) to produce the corresponding interpolated panchromatic values P'7.
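For concreteness, the h and v gradients above can be turned into an interpolated value as in the following sketch. The gradient expressions follow the equations above; the final inverse-gradient blend is an assumption (the exact combining formula is not reproduced in this excerpt), chosen so that the direction with the smaller gradient dominates.

```python
# Sketch of the fig. 7 calculation at one green-pixel location.
# h and v follow the equations above; the final blend is an ASSUMED
# classifier-style combination, not the document's exact formula.

def interp_pan_at_green(P3, P6, P8, PB, C2, C5, C7, C9, CC, alpha=0.0):
    h = 2 * abs(P6 - P8) + alpha * abs(C5 - 2 * C7 + C9)  # horizontal gradient
    v = 2 * abs(P3 - PB) + alpha * abs(C2 - 2 * C7 + CC)  # vertical gradient
    H = (P6 + P8) / 2.0   # horizontal panchromatic predictor
    V = (P3 + PB) / 2.0   # vertical panchromatic predictor
    if h + v == 0:        # flat neighborhood: either direction works
        return H
    return (v * H + h * V) / (h + v)  # low-gradient direction dominates
```

With α = 0 (the preferred embodiment above), only the panchromatic gradients steer the blend; α = 1 mixes in the green second differences.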
Fig. 8 is a detailed diagram of the neighborhood of pixels used by the interpolate panchromatic image block 102 (fig. 6) to interpolate panchromatic pixel values at the locations of the red and blue pixels in the CFA pattern shown in fig. 3. In fig. 8, C1, C5, C9, CD, and CH refer to color pixels of the same color (red or blue), and P2, P3, P4, P6, P7, P8, PA, PB, PC, PE, PF, and PG refer to panchromatic pixels from the color filter array image 100 (fig. 6). In fig. 8, C9 has three adjacent panchromatic pixel values above, below, to the left, and to the right. To generate the interpolated panchromatic value P'9, the following calculation is performed.
h=|P7-P8|+|P8-PA|+|PA-PB|+α|C5-2C9+CD|
v=|P3-P4|+|P4-PE|+|PE-PF|+α|C1-2C9+CH|
where α is a constant, and h and v are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'9. These calculations are repeated by the interpolate panchromatic image block 102 (fig. 6) at each red and blue pixel location in the color filter array image 100 (fig. 6) to produce the corresponding interpolated panchromatic values P'9. The interpolated panchromatic values (at the red, green, and blue pixel locations), combined with the original panchromatic values, make up the interpolated panchromatic image 104 (fig. 6).
Similar calculations can be used by the interpolate panchromatic image block 102 (fig. 6) for the alternative CFA patterns described earlier. For the CFA patterns shown in figs. 4A and 4B, green pixels alternate with panchromatic pixels in the same manner as in fig. 3. Thus, the interpolation method previously described with respect to fig. 7 may also be applied to the CFA patterns of figs. 4A and 4B. Figs. 9A and 9B are detailed diagrams of the neighborhoods of pixels that the interpolate panchromatic image block 102 (fig. 6) may use for the red and blue pixels in the CFA patterns shown in figs. 4A and 4B. Fig. 9A corresponds to the CFA pattern of fig. 4A, and fig. 9B corresponds to the CFA pattern of fig. 4B. In fig. 9A, C2, CB, and CK are red pixels and D9 and DD are blue pixels, or C2, CB, and CK are blue pixels and D9 and DD are red pixels. G4, G6, GG, and GI are green pixels, and P1, P3, P5, P7, P8, PA, PC, PE, PF, PH, PJ, and PL are panchromatic pixels. To generate the interpolated panchromatic value P'B, the following calculation is performed.
h=4|PA-PC|+|P3-2P5+P7|+|PF-2PH+PJ|
v=2|P5-PH|+α|C2-2CB+CK|
where α is a constant, and h and v are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'B. In fig. 9B, C9, CB, and CD are red pixels and D3 and DJ are blue pixels, or C9, CB, and CD are blue pixels and D3 and DJ are red pixels. G5, G7, GF, and GH are green pixels, and P1, P3, P5, P7, P8, PA, PC, PE, PF, PH, PJ, and PL are panchromatic pixels. To generate the interpolated panchromatic value P'B, the following calculation is performed.
h=2|PA-PC|+α|C9-2CB+CD|
v=4|P6-PG|+|P2-2PA+PI|+|P4-2PC+PK|
where α is a constant, and h and v are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'B.
Fig. 10 is a detailed diagram of the neighborhood of pixels used by the interpolate panchromatic image block 102 (fig. 6) for the CFA patterns shown in figs. 5A and 5B. In fig. 10, C1, C2, C5, C8, and C9 refer to color pixels of the same color from the color filter array image 100 (fig. 6), and P3, P4, P6, and P7 refer to panchromatic pixels. In fig. 10, the panchromatic pixels alternate with the color pixels in two diagonal directions. To generate the interpolated panchromatic value P'5, the following calculation is performed.
s=2|P4-P6|+α|C2-2C5+C8|
b=2|P3-P7|+α|C1-2C5+C9|
where α is a constant, and s and b are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'5. These calculations are repeated by the interpolate panchromatic image block 102 (fig. 6) at each color pixel location in the color filter array image 100 (fig. 6) to produce the corresponding interpolated panchromatic values P'5. The interpolated panchromatic values, combined with the original panchromatic values (P3, P4, P6, and P7 in fig. 10), constitute the interpolated panchromatic image 104 (fig. 6).
Figs. 11A and 11B are detailed diagrams of pixel neighborhoods that may be used for the CFA pattern shown in fig. 5A by an alternative embodiment of the interpolate panchromatic image block 102 (fig. 6). In fig. 11A, C3 is a red or blue pixel from the color filter array image 100 (fig. 6), and DB is a blue or red pixel from the same image. G1, G5, G7, G9, and GD are green pixels, and P2, P4, P6, P8, PA, and PC are panchromatic pixels. In fig. 11A, the panchromatic pixels alternate with the green pixels in the horizontal direction. To generate the interpolated panchromatic value P'7, the following calculation is performed.
h=2|P6-P8|+α|G5-2G7+G9|
v=|P2-P6|+|P6-PA|+|P4-P8|+|P8-PC|
V = βC3 + γG7 + δDB
where α, β, γ, and δ are constants, and h, v, and V are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'7. In a preferred embodiment of the invention, the values of β, γ, and δ are each one third. Other values of β, γ, and δ may also be used, as will be apparent to those skilled in the art. Changing the values of β, γ, and δ controls how closely the color response of the interpolated panchromatic value V matches that of the measured panchromatic values (e.g., P6 and P8).
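With β, γ, and δ each set to one third, the color-based predictor amounts to averaging one sample from each color channel, reflecting the assumption that the panchromatic response is roughly the mean of the red, green, and blue responses. A minimal sketch:

```python
# V = beta*C3 + gamma*G7 + delta*DB from the equation above; with the
# preferred weights (each one third), this is simply the mean of one
# red, one green, and one blue sample. The sample values are made up.

def vertical_pan_predictor(c3, g7, db, beta=1/3, gamma=1/3, delta=1/3):
    return beta * c3 + gamma * g7 + delta * db

V = vertical_pan_predictor(0.9, 0.6, 0.3)   # mean of the three samples
```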
In fig. 11B, CA is a red or blue pixel from the color filter array image 100 (fig. 6), and D1, D8, DC, and DJ are blue or red pixels from the same image. G2, G4, G6, GE, GG, and GI are green pixels, and P3, P5, P7, P9, PB, PF, and PH are panchromatic pixels. To generate the interpolated panchromatic value P'A, the following calculation is performed.
h=|P3-P5|+2|P9-PB|+|PF-PH|
v=|P3-P9|+|P9-PF|+|P5-PB|+|PB-PH|
where β, γ, and δ are constants, and h and v are intermediate variables. In a preferred embodiment of the invention, the values of β, γ, and δ are each one third. Other values of β, γ, and δ may also be used, as will be apparent to those skilled in the art. Changing the values of β, γ, and δ controls how closely the color response of the interpolated panchromatic value V matches that of the measured panchromatic values (e.g., P9 and PB).
Figs. 12A and 12B are detailed diagrams of pixel neighborhoods that may be used for the CFA pattern shown in fig. 5B by an alternative embodiment of the interpolate panchromatic image block 102 (fig. 6). In fig. 12A, C6 is a red or blue pixel from the color filter array image 100 (fig. 6), and D8 is a blue or red pixel from the same image. G1, G5, G7, G9, and GD are green pixels, and P2, P3, P4, PA, PB, and PC are panchromatic pixels. In fig. 12A, the panchromatic pixels alternate with the green pixels in the vertical direction. To generate the interpolated panchromatic value P'7, the following calculation is performed.
h=|P2-P3|+|P3-P4|+|PA-PB|+|PB-PC|
v=2|P3-PB|+α|G1-2G7+GD|
H = βC6 + γG7 + δD8
where α, β, γ, and δ are constants, and h, v, and H are intermediate variables. In a preferred embodiment of the invention, the value of α is zero. In an alternative embodiment of the invention, α has a value of one. Those skilled in the art will appreciate that other values of α may also be used. Changing the value of α controls the degree to which the color pixel values are weighted in determining the interpolated panchromatic value P'7. In a preferred embodiment of the invention, the values of β, γ, and δ are each one third. Other values of β, γ, and δ may also be used, as will be apparent to those skilled in the art. Changing the values of β, γ, and δ controls how closely the color response of the interpolated panchromatic value H matches that of the measured panchromatic values (e.g., P3 and PB).
In fig. 12B, CA is a red or blue pixel from the color filter array image 100 (fig. 6), and D3, D8, DC, and DH are blue or red pixels from the same image. G2, G4, G9, GB, GG, and GI are green pixels, and P1, P5, P6, P7, PD, PE, PF, and PJ are panchromatic pixels. To generate the interpolated panchromatic value P'A, the following calculation is performed.
h=|P5-P6|+|P6-P7|+|PD-PE|+|PE-PF|
v=|P5-PD|+2|P6-PE|+|P7-PF|
where β, γ, and δ are constants, and h and v are intermediate variables. In a preferred embodiment of the invention, the values of β, γ, and δ are each one third. Other values of β, γ, and δ may also be used, as will be apparent to those skilled in the art. Changing the values of β, γ, and δ controls how closely the color response of the interpolated panchromatic value H matches that of the measured panchromatic values (e.g., P6 and PE).
Fig. 13 is a detailed diagram of the neighborhood of pixels used by the interpolate color difference image block 112 (fig. 6) to determine green color difference values for the preferred embodiment of the CFA pattern shown in fig. 3. In fig. 13, D1, D3, D7, and D9 are the color difference values 110 (fig. 6) generated for the green pixels by the generate color differences block 108 (fig. 6). The values of D1, D3, D7, and D9 are calculated as given below.
In these calculations, G refers to the original green pixel values from color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from interpolated panchromatic image 104 (fig. 6). The subscripts correspond to the pixel locations shown in fig. 13.
The interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations in fig. 13 that do not have existing color difference values D. Standard bilinear interpolation of the color difference values D1, D3, D7, and D9 yields the interpolated color difference values D'. The following equations show an explicit calculation that can be used to determine the interpolated color difference values D':
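Bilinear interpolation over the fig. 13 neighborhood can be sketched as below. The corner indexing assumes a 3 × 3 phone-pad layout (known values D1, D3, D7, D9 at the corners), which is an assumption about the figure; edge positions average two corners, and the center averages all four.

```python
# Bilinear interpolation of the four known color differences in a
# 3 x 3 neighborhood with known values at the corners. The phone-pad
# position numbering (1..9) is an ASSUMPTION about fig. 13.

def bilinear_differences(D1, D3, D7, D9):
    return {
        2: (D1 + D3) / 2.0,              # top edge
        4: (D1 + D7) / 2.0,              # left edge
        5: (D1 + D3 + D7 + D9) / 4.0,    # center
        6: (D3 + D9) / 2.0,              # right edge
        8: (D7 + D9) / 2.0,              # bottom edge
    }

dprime = bilinear_differences(0.0, 2.0, 4.0, 6.0)
```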
The interpolated color difference values, combined with the color difference values (D1, D3, D7, and D9), constitute the interpolated color difference image 114 (fig. 6).
Fig. 14 is a detailed diagram of the neighborhood of pixels used by the interpolate color difference image block 112 (fig. 6) to determine the red and blue color difference values for the preferred embodiment of the CFA pattern shown in fig. 3. In fig. 14, D0, D4, DK, and DP are the color difference values 110 (fig. 6) generated for the red and blue pixels by the generate color differences block 108 (fig. 6). In the following discussion, a red pixel will be assumed, but it will be understood that the same method can be applied to a blue pixel. The values of D0, D4, DK, and DP are calculated as given below.
In these calculations, R refers to the original red pixel values from the color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from the interpolated panchromatic image 104 (fig. 6). The subscripts correspond to the pixel locations shown in fig. 14. The interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations in fig. 14 that do not have existing color difference values D. Standard bilinear interpolation of the color difference values D0, D4, DK, and DP yields the interpolated color difference values D'. The following equations show an explicit calculation that can be used to determine the interpolated color difference values D':
The interpolated color difference values, combined with the color difference values (D0, D4, DK, and DP), constitute the interpolated color difference image 114 (fig. 6).
The generate interpolated color image block 106 (fig. 6) generates interpolated color values R', G', and B' from the interpolated color difference values D' and the corresponding panchromatic values (original or interpolated). Referring again to fig. 13, the following calculations are performed.
The original color values G, together with the interpolated color values G', produce the green values of the interpolated color image 120 (fig. 6). The foregoing set of operations may be generalized with respect to fig. 14 for the red and blue pixel values to complete the generation of the interpolated color image 120 (fig. 6).
Similar strategies may be used to determine the interpolated color image 120 for the other CFA pattern variations according to the method of the present invention, such as those shown in figs. 4A, 4B, 5A, 5B, or 25. For a given color, the first step is to calculate color difference values at the pixel locations where that color appears in the CFA pattern. Next, color difference values for the remaining pixel locations are determined by interpolating between the color difference values found in the first step. This is repeated for each of the colors (i.e., red, green, and blue). Finally, the interpolated color values are found by adding the color difference values to the interpolated panchromatic image.
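The three-step per-color strategy just described can be illustrated in one dimension for a single channel. The sample values below are made up for the demonstration; linear interpolation stands in for the bilinear interpolation used in two dimensions.

```python
import numpy as np

# 1-D illustration of the per-color strategy above:
#   step 1: form D = color - panchromatic where the color is sampled
#   step 2: interpolate D at the remaining locations
#   step 3: add D back to the interpolated panchromatic values
# The pan and color sample values are made up for the demonstration.

pan = np.array([10.0, 12.0, 14.0, 16.0, 18.0])   # interpolated panchromatic
color_samples = {0: 13.0, 2: 15.0, 4: 17.0}      # sparse red samples

xs = np.array(sorted(color_samples))
ds = np.array([color_samples[i] - pan[i] for i in xs])   # step 1
d_full = np.interp(np.arange(len(pan)), xs, ds)          # step 2
red_full = pan + d_full                                  # step 3
```

Because the color differences vary more slowly than the colors themselves in most scenes, interpolating D rather than the raw color samples preserves the high-frequency detail carried by the panchromatic channel.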
Figs. 15A and 15B are detailed diagrams of the neighborhoods of pixels used by the interpolate color difference image block 112 (fig. 6) for the alternative CFA pattern embodiments of figs. 4A and 4B. Fig. 15A corresponds to fig. 4A, and fig. 15B corresponds to fig. 4B. In fig. 15A, D0, D4, DA, and DE are the color difference values 110 (fig. 6) generated by the generate color differences block 108 (fig. 6). D0, D4, DA, and DE correspond to all red pixels, all green pixels, or all blue pixels in the color filter array image 100 (fig. 6). The following example will assume red pixels; however, the process can be applied similarly to the green and blue pixels. The values of D0, D4, DA, and DE are calculated as follows:
in these calculations, R refers to the original red pixel values from color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from interpolated panchromatic image 104 (fig. 6).
Returning to fig. 15A, the interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations that do not have existing color difference values D. The interpolated color difference values D' are generated from the color difference values D0, D4, DA, and DE according to the following explicit calculation:
The interpolated color difference values, combined with the color difference values (D0, D4, DA, and DE), constitute the interpolated color difference image 114 (fig. 6).
The generate interpolated color image block 106 (fig. 6) generates interpolated color values R' from the interpolated color difference values D' and the corresponding panchromatic values (original or interpolated). Referring again to fig. 15A, the following calculations are performed:
the original color value R and the interpolated color value R' constitute the red value of the interpolated color image 120 (fig. 6). The foregoing process is repeated for the green and blue pixels.
Similarly, in fig. 15B, D0, D2, DC, and DE are the color difference values 110 (fig. 6) generated by the generate color differences block 108 (fig. 6). D0, D2, DC, and DE correspond to all red pixels, all green pixels, or all blue pixels in the color filter array image 100 (fig. 6). The following example will assume red pixels; however, the process can be applied similarly to the green and blue pixels. The values of D0, D2, DC, and DE are calculated as follows:
in these calculations, R refers to the original red pixel values from color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from interpolated panchromatic image 104 (fig. 6).
Returning to fig. 15B, the interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations that do not have existing color difference values D. The interpolated color difference values D' are generated from the color difference values D0, D2, DC, and DE according to the following explicit calculation:
The interpolated color difference values, combined with the color difference values (D0, D2, DC, and DE), constitute the interpolated color difference image 114 (fig. 6).
The generate interpolated color image block 106 (fig. 6) generates interpolated color values R' from the interpolated color difference values D' and the corresponding panchromatic values (original or interpolated). Referring again to fig. 15B, the following calculations are performed:
the original color value R and the interpolated color value R' constitute the red value of the interpolated color image 120 (fig. 6). The foregoing process is repeated for the green and blue pixels.
Figs. 16A and 16B are detailed diagrams of the neighborhoods of pixels that the interpolate color difference image block 112 (fig. 6) can use to interpolate color difference values for the alternative CFA pattern embodiments of figs. 5A and 5B. Fig. 16A corresponds to fig. 5A, and fig. 16B corresponds to fig. 5B. In fig. 16A, D0, D4, D8, and DC are the color difference values 110 (fig. 6) generated by the generate color differences block 108 (fig. 6). D0, D4, D8, and DC correspond to all red pixels, all green pixels, or all blue pixels in the color filter array image 100 (fig. 6). The following example will assume red pixels; however, the process can be applied similarly to the green and blue pixels. The values of D0, D4, D8, and DC are calculated as follows:
in these calculations, R refers to the original red pixel values from color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from interpolated panchromatic image 104 (fig. 6).
Returning to fig. 16A, the interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations that do not have existing color difference values D. Standard bilinear interpolation of the color difference values D0, D4, D8, and DC yields the interpolated color difference values D'. The explicit calculation is as follows:
The interpolated color difference values, together with the color difference values (D0, D4, D8, and DC), make up the interpolated color difference image 114 (fig. 6).
The generate interpolated color image block 106 (fig. 6) generates interpolated color values R' from the interpolated color difference values D' and the corresponding panchromatic values (original or interpolated). Referring again to fig. 16A, the following calculations are performed:
the original color value R along with the interpolated color value R' constitute the red value of the interpolated color image 120 (fig. 6). The foregoing set of operations are repeated for the green and blue pixel values to complete the generation of the interpolated color image 120 (fig. 6).
In fig. 16B, D0, D4, D8, and DC are the color difference values 110 (fig. 6) generated by the generate color differences block 108 (fig. 6). D0, D4, D8, and DC correspond to all red pixels, all green pixels, or all blue pixels in the color filter array image 100 (fig. 6). The following example will assume red pixels; however, the process can be applied similarly to the green and blue pixels. The values of D0, D4, D8, and DC are calculated as follows:
in these calculations, R refers to the original red pixel values from color filter array image 100 (fig. 6), and P' refers to the corresponding interpolated panchromatic values from interpolated panchromatic image 104 (fig. 6).
Returning to fig. 16B, the interpolate color difference image block 112 (fig. 6) generates interpolated color difference values D' at the pixel locations that do not have existing color difference values D. Standard bilinear interpolation of the color difference values D0, D4, D8, and DC yields the interpolated color difference values D'. The explicit calculation is as follows:
The interpolated color difference values, together with the color difference values (D0, D4, D8, and DC), make up the interpolated color difference image 114 (fig. 6).
The generate interpolated color image block 106 (fig. 6) generates interpolated color values R' from the interpolated color difference values D' and the corresponding panchromatic values (original or interpolated). Referring again to fig. 16B, the following calculations are performed:
the original color value R along with the interpolated color value R' constitute the red value of the interpolated color image 120 (fig. 6). The foregoing set of operations are repeated for the green and blue pixel values to complete the generation of the interpolated color image 120 (fig. 6).
FIG. 17 is a block diagram of a preferred embodiment of the fuse images block 118 (fig. 6). The low-pass filter block 204 generates a low frequency color image 206 from the interpolated color image 120 (fig. 6). The high-pass filter block 200 generates a high frequency panchromatic image 202 from the interpolated panchromatic image 104 (fig. 6). Finally, a merge images block 208 generates the full color output image 116 (fig. 6) by combining the low frequency color image 206 with the high frequency panchromatic image 202.
The low-pass filter block 204 convolves the interpolated color image 120 (fig. 6) with a low-pass filter. In a preferred embodiment of the invention, the following convolution kernel is used:
in the arithmetic, CLC g, where C is the interpolated color image 120 (fig. 6), CLIs a low frequency color image 206 and "x" represents a convolution operator. It will be clear to those skilled in the art that other convolution kernels may be used in accordance with the present invention.
The high-pass filter block 200 convolves the interpolated panchromatic image 104 (fig. 6) with a high-pass filter. In a preferred embodiment of the invention, the following convolution kernel is used:
Mathematically, PH = P * h, where P is the interpolated panchromatic image 104 (fig. 6), and PH is the high frequency panchromatic image 202. It will be clear to those skilled in the art that other convolution kernels may be used in accordance with the present invention.
The merge images block 208 combines the high frequency panchromatic image 202 with the low frequency color image 206 to produce the full color output image 116 (fig. 6). In a preferred embodiment of the present invention, this is accomplished by simply adding the high frequency panchromatic image 202 to the low frequency color image 206. Mathematically, C' = CL + PH, where C' is the full color output image 116 (fig. 6) and the other terms are as previously defined.
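The C' = CL + PH fusion can be sketched as follows. The specific g and h kernels are not reproduced in this excerpt, so a 3 × 3 box low-pass and its complementary high-pass (delta minus g) are assumed here.

```python
import numpy as np

# Sketch of the C' = CL + PH fusion above. The kernels are ASSUMED:
# a 3 x 3 box low-pass g and the complementary high-pass h = delta - g.

g = np.full((3, 3), 1.0 / 9.0)       # assumed low-pass kernel g
h = -g.copy()
h[1, 1] += 1.0                       # assumed high-pass kernel h = delta - g

def convolve2d_same(img, k):
    """Minimal same-size 2-D convolution with zero padding (symmetric k)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

color = np.outer(np.arange(4.0), np.ones(4))   # toy interpolated color image
pan = np.full((4, 4), 2.0)                     # toy panchromatic image (flat)
CL = convolve2d_same(color, g)                 # low frequency color image 206
PH = convolve2d_same(pan, h)                   # high frequency pan image 202
output = CL + PH                               # full color output C' = CL + PH
```

Because the high-pass kernel sums to zero, a flat panchromatic image contributes nothing in the interior, and the output there is just the smoothed color.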
FIG. 18 is a block diagram of an alternative embodiment of the fuse images block 118 (fig. 6). The pyramid decomposition block 300 generates a panchromatic image pyramid 302 from the interpolated panchromatic image 104 (fig. 6). The pyramid decomposition block 304 generates a color image pyramid 306 from the interpolated color image 120 (fig. 6). The merged pyramid reconstruction block 308 generates the full color output image 116 (fig. 6) by combining the panchromatic image pyramid 302 with the color image pyramid 306. The pyramid decomposition block 300 produces a standard Gaussian-Laplacian image pyramid by methods that will be known to those skilled in the art. Briefly, the following calculations are performed.
P0 is the interpolated panchromatic image 104 (fig. 6). P0 is convolved with the low-pass convolution kernel g described above. The result of the convolution is subsampled by a factor of 2 in the horizontal and vertical directions (↓2). The result of the subsampling is P1, the first-level component of the Gaussian pyramid corresponding to the panchromatic image pyramid 302. This process continues to produce P2 through PN, where N is the desired number of pyramid levels. In one embodiment of the present invention, N = 4.
Q1 is the first-level component of the corresponding Laplacian pyramid of the panchromatic image pyramid 302. It is calculated by upsampling the first-level Gaussian pyramid component P1 by a factor of 2 in the horizontal and vertical directions (↑2) and then subtracting the result from the interpolated panchromatic image 104 (fig. 6). The upsampling operation may be performed in any manner known to those skilled in the art. In one embodiment of the present invention, upsampling is performed using well known bilinear interpolation. This process continues to produce Q2 through QN. The pyramid components {P1, ..., PN, Q1, ..., QN} together constitute the panchromatic image pyramid 302.
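One Gaussian/Laplacian level as described above can be sketched as follows: blur, subsample by 2, then upsample the result and subtract it from the input. A box blur and nearest-neighbor upsampling stand in for the document's kernel g and its bilinear upsampling.

```python
import numpy as np

# One pyramid level: P1 = down2(blur(P0)); Q1 = P0 - up2(P1).
# The box blur and nearest-neighbor up2 are stand-in ASSUMPTIONS for
# the kernel g and the bilinear upsampling described above.

def blur(img):
    """3x3 box blur with edge replication (assumed low-pass g)."""
    pad = np.pad(img, 1, mode='edge')
    out = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            out += pad[i:i + img.shape[0], j:j + img.shape[1]] / 9.0
    return out

def down2(img):
    return img[::2, ::2]                      # subsample by 2 (down-arrow 2)

def up2(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]           # upsample by 2 (up-arrow 2)

P0 = np.arange(64, dtype=float).reshape(8, 8)   # toy panchromatic image
P1 = down2(blur(P0))                            # Gaussian level 1
Q1 = P0 - up2(P1, P0.shape)                     # Laplacian level 1
```

By construction, upsampling P1 and adding Q1 reconstructs P0 exactly, which is what makes the Laplacian pyramid invertible.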
The pyramid decomposition block 304 operates in the same manner as the pyramid decomposition block 300, except that each color of the interpolated color image 120 (fig. 6) is processed separately to produce red, green, and blue pyramids, which together make up the color image pyramid 306. Using analogous notation, the calculations performed by the pyramid decomposition block 304 are as follows:
The pyramid components {C1, ..., CN} together constitute a color Gaussian pyramid, and the pyramid components {H1, ..., HN} together constitute a color Laplacian pyramid.
The merged pyramid reconstruction block 308 performs the following calculations, which are a modification of the standard gaussian-laplacian pyramid reconstruction that will be known to those skilled in the art:
In each set of three calculations, the Gaussian color pyramid component C or the merged Gaussian color pyramid component C″ is upsampled by a factor of 2 and added to the Laplacian color pyramid component H. The Gaussian panchromatic pyramid component P is likewise upsampled by a factor of 2 and added to the Laplacian panchromatic pyramid component Q. The resulting Gaussian color pyramid component C′ is convolved with the previously described low-pass convolution kernel g, the resulting Gaussian panchromatic pyramid component P′ is convolved with the previously described high-pass convolution kernel h, and the results are added together to produce the merged Gaussian color pyramid component C″. These calculations are repeated until the full color output image 116 (fig. 6), C″0, is produced.
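One level of the merged reconstruction just described can be sketched in one dimension. This is a heavily simplified illustration under stated assumptions: crude stand-in filters replace the kernels g and h, and the signals are short 1-D arrays rather than images.

```python
import numpy as np

# One merged-reconstruction level, per the description above:
#   C_fine = up2(C_coarse) + H        (color detail restored)
#   P_fine = up2(P_coarse) + Q        (panchromatic detail restored)
#   C''    = lowpass(C_fine) + highpass(P_fine)   (fused level)
# The filters below are ASSUMED stand-ins, not the document's kernels.

def up2(x, n):
    return np.repeat(x, 2)[:n]

def lowpass(x):
    return np.convolve(x, [0.25, 0.5, 0.25], mode='same')

def highpass(x):
    return x - lowpass(x)

def merge_level(C_coarse, H, P_coarse, Q):
    C_fine = up2(C_coarse, len(H)) + H     # reconstructed color level
    P_fine = up2(P_coarse, len(Q)) + Q     # reconstructed panchromatic level
    return lowpass(C_fine) + highpass(P_fine)   # merged level C''

merged = merge_level(np.array([1.0, 1.0]), np.zeros(4),
                     np.array([2.0, 2.0]), np.zeros(4))
```

In flat regions the high-pass panchromatic term vanishes in the interior, so the merged level reduces to the smoothed color, as intended.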
The algorithm for computing a full-color output image as disclosed in the preferred embodiments of the present invention can be used in a variety of user scenarios and environments. Exemplary scenarios and environments include, without limitation, in-camera processing (reading the sensor image, digital processing, saving the processed image on digital media), large-scale digital photofinishing (which involves exemplary process steps such as submitting digital images for wholesale fulfillment, digital processing, and digital printing), retail digital photofinishing (submitting digital images for retail fulfillment, digital processing, and digital printing), home printing (home digital image input, digital processing, and printing on a home printer), desktop software (software that applies algorithms to digital images to improve them, or even just to change them), digital fulfillment (digital image input from media or via the web, digital processing, and digital image output on media or in digital form via the internet), kiosks (digital image input, digital processing, and digital printing or output to digital media), mobile devices (e.g., a PDA or a cell phone that can be used as a processing unit, a display unit, or a unit for giving processing instructions), and services offered via the World Wide Web.
In each case, the algorithm for calculating the full-color output image may stand alone or may be a component of a larger system solution. Further, the interfaces with the algorithm (e.g., input, digital processing, display to a user (if needed), input of user requests or processing instructions (if needed), and output) can each be on the same or different devices and physical locations, and communication between the devices and locations can be via public or private network connections or media-based communication. Consistent with the foregoing disclosure of the invention, the algorithm itself can be fully automatic, can have user input (i.e., be fully or partially manual), can have user or operator review to accept or reject the result, or can be assisted by metadata (which can be user-supplied, supplied by a measuring device (e.g., in a camera), or determined by an algorithm). Further, the algorithm can interface with a variety of workflow user-interface schemes.
The full-color output image calculation algorithm disclosed herein according to the present invention can have interior components that utilize various data detection and reduction techniques (e.g., face detection, eye detection, skin detection, flash detection).
Component list
10 light from a subject scene
11 imaging stage
12 lens
13 Neutral Density (ND) filter block
14 iris block
16 luminance sensor block
18 shutter
20 color filter array image sensor
22 analog signal processor
24 analog-to-digital (A/D) converter
26 timing generator
28 image sensor stage
30 bus
32 Digital Signal Processor (DSP) memory
36 Digital Signal Processor (DSP)
38 processing stage
40 exposure controller
50 system controller
52 system controller bus
54 program memory
56 system memory
57 host interface
60 memory card interface
62 memory card socket
64 memory card
68 user interface
70 viewfinder display
72 exposure display
74 user input
76 status display
80 video encoder
82 display controller
88 image display
100 color filter array image
102 interpolate panchromatic image block
104 interpolated panchromatic image
106 interpolate color image block
108 generate color differences block
110 color difference values
112 interpolate color differences block
114 interpolated color difference image
116 full-color output image
118 fuse images block
120 interpolated color image
200 high-pass filter block
202 high-frequency panchromatic image
204 low-pass filter block
206 low-frequency color image
208 merge images block
300 pyramid decomposition block
302 panchromatic image pyramid
304 pyramid decomposition block
306 color image pyramid
308 merged pyramid reconstruction block
Claims (9)
1. An apparatus for forming a full-color output image from a color filter array image having a plurality of color pixels with at least two different color responses and panchromatic pixels, comprising:
a) means for capturing a color filter array image using an image sensor comprising a two-dimensional array of photosensitive pixels, the pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit having at least eight pixels and at least two rows and two columns, wherein for a first color response the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses at least one row, column, or diagonal has a repeating pattern of panchromatic pixels and color pixels of only the given color response;
b) means for calculating an interpolated panchromatic image from the color filter array image;
c) means for calculating an interpolated color image from the color filter array image; and
d) means for forming the full color output image from the interpolated panchromatic image and the interpolated color image, including
i) means for applying a high pass filter to the interpolated panchromatic image to form a high frequency panchromatic image;
ii) means for applying a low pass filter to the interpolated color image to form a low frequency color image; and
iii) means for combining the high frequency panchromatic image and the low frequency color image to form the full color output image.
2. The apparatus of claim 1, wherein means c) further comprises means for using the interpolated panchromatic image in calculating the interpolated color image.
3. The apparatus of claim 2, wherein the means for calculating the interpolated color image includes means for forming an interpolated color difference image using color difference values formed from the interpolated panchromatic image and the color filter array image.
4. The apparatus of claim 1, wherein the color pixels are red, green, and blue pixels.
5. The apparatus of claim 1, wherein the color pixels are cyan, magenta, and yellow pixels.
6. The apparatus of claim 1, wherein the minimal repeating unit has four rows or four columns, and wherein the color pixels for each color response alternate with the panchromatic pixels in at least one row, column, or diagonal of the repeating pattern.
7. An apparatus for forming a full-color output image from a color filter array image having a plurality of color pixels with at least two different color responses and panchromatic pixels, comprising:
a) means for capturing a color filter array image using an image sensor comprising a two-dimensional array of photosensitive pixels, the pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit having at least eight pixels and at least two rows and two columns, wherein for a first color response the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses at least one row, column, or diagonal has a repeating pattern of panchromatic pixels and color pixels of only the given color response;
b) means for calculating an interpolated panchromatic image from the color filter array image;
c) means for calculating an interpolated color image from the color filter array image; and
d) means for forming the full color output image from the interpolated panchromatic image and the interpolated color image, including
i) means for performing a pyramid decomposition on the interpolated panchromatic image to form a panchromatic image pyramid;
ii) means for performing a pyramid decomposition on the interpolated color image to form a color image pyramid;
iii) means for merging the panchromatic image pyramid with the color image pyramid to form an output image pyramid; and
iv) means for forming the full color output image from the output image pyramid.
8. The apparatus of claim 7, wherein means iii) comprises: means for applying a high pass filter to each level of the panchromatic image pyramid to form a high frequency panchromatic image pyramid; means for applying a low pass filter to each level of the color image pyramid to form a low frequency color image pyramid; and means for merging each level of the high frequency panchromatic image pyramid with the corresponding level of the low frequency color image pyramid to form the output image pyramid.
9. An apparatus for forming a full-color output image from a color filter array image having a plurality of color pixels with at least two different color responses and panchromatic pixels, comprising:
a) means for capturing a color filter array image using an image sensor comprising a two-dimensional array of photosensitive pixels, the pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit having at least eight pixels and at least two rows and two columns, wherein for a first color response the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses at least one row, column, or diagonal has a repeating pattern of panchromatic pixels and color pixels of only the given color response;
b) means for calculating an interpolated panchromatic image from the color filter array image;
c) means for calculating an interpolated color image from the color filter array image; and
d) means for forming the full color output image from the interpolated panchromatic image and the interpolated color image, wherein the minimal repeating unit has four rows and four columns, and wherein the first and third rows of the minimal repeating unit have the pixel sequence green, panchromatic, green, panchromatic, the second row of the minimal repeating unit has the pixel sequence panchromatic, red, panchromatic, red, and the fourth row of the minimal repeating unit has the pixel sequence panchromatic, blue, panchromatic, blue.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/480,820 US8253832B2 (en) | 2009-06-09 | 2009-06-09 | Interpolation for four-channel color filter array |
| US12/480,820 | 2009-06-09 | ||
| PCT/US2010/001640 WO2010144124A1 (en) | 2009-06-09 | 2010-06-07 | Interpolation for four-channel color filter array |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1170880A1 HK1170880A1 (en) | 2013-03-08 |
| HK1170880B true HK1170880B (en) | 2015-12-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8253832B2 (en) | Interpolation for four-channel color filter array | |
| US8125546B2 (en) | Color filter array pattern having four-channels | |
| EP2436187B1 (en) | Four-channel color filter array pattern | |
| US8237831B2 (en) | Four-channel color filter array interpolation | |
| EP2087725B1 (en) | Improved light sensitivity in image sensors | |
| US8452082B2 (en) | Pattern conversion for interpolation | |
| EP2529555B1 (en) | Denoising cfa images using weighted pixel differences | |
| US8295631B2 (en) | Iteratively denoising color filter array images | |
| WO2009025825A1 (en) | Image sensor having a color filter array with panchromatic checkerboard pattern | |
| EP2502422A1 (en) | Sparse color pixel array with pixel substitutes | |
| HK1170880B (en) | Interpolation for four-channel color filter array | |
| HK1170878B (en) | Color filter array pattern having four-channels | |
| HK1171309B (en) | Four-channel color filter array pattern | |
| HK1170877B (en) | Four-channel color filter array interpolation |