
HK1204412B - Method and imaging system for producing digital images - Google Patents


Info

Publication number
HK1204412B
HK1204412B
Authority
HK
Hong Kong
Prior art keywords
pixels
pixel
color
panchromatic
image
Prior art date
Application number
HK15104838.1A
Other languages
Chinese (zh)
Other versions
HK1204412A1 (en)
Inventor
A. T. Deever
B. H. Pillman
J. T. Compton
A. D. Enge
Original Assignee
OmniVision Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/416,172 (US 8,218,068 B2)
Application filed by OmniVision Technologies, Inc.
Publication of HK1204412A1
Publication of HK1204412B


Description

Method for generating digital image and imaging system
This application is a divisional of the patent application with international application number PCT/US2010/000942, international filing date March 30, 2010, and Chinese national-phase application number 201080015712.0, entitled "Exposing Pixel Groups in Producing Digital Images".
Technical Field
The present invention relates to an image capture device that produces digital images using multiple exposures and readouts of a two-dimensional image sensor array.
Background
In digital imaging, it is desirable to capture a sequence of images with high image quality, high spatial resolution, and high temporal resolution (also referred to as frame rate). However, many current image sequence capture devices cannot deliver such high quality image sequences. In many cases, one of the desired image sequence properties is obtained at the expense of the others. For example, in known image sequence capture devices, the exposure duration of a given image is limited by the frame rate: the higher the frame rate, the shorter each image exposure must be. In a low light environment, individual image captures within a sequence may receive insufficient light and produce noisy images. The noise in a given image can be reduced by using a longer exposure duration for each image, but at the expense of a lower frame rate. Alternatively, noise can be reduced by combining several pixels through a binning technique, but this improvement comes at the expense of lower spatial resolution. In many cases, the spatial and temporal resolution of the image sequence is limited by the readout capability of the sensor: the sensor can read only a certain number of pixels per second. This readout capability imposes a trade-off between the spatial and temporal resolution of the readout; increasing one resolution must come at the expense of the other to keep the total number of pixels read within the sensor's capability.
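The trade-off described above can be made concrete with a small sketch. This is illustrative only, not from the patent; the readout budget and resolutions are assumed figures.

```python
# Illustrative sketch (not from the patent): a fixed readout budget forces a
# trade-off between spatial resolution and frame rate. All figures assumed.

def max_frame_rate(readout_pixels_per_sec: float, width: int, height: int) -> float:
    """Frames per second achievable when every pixel is read out each frame."""
    return readout_pixels_per_sec / (width * height)

BUDGET = 124_416_000  # assumed sensor readout capability, pixels per second

full_res_fps = max_frame_rate(BUDGET, 1920, 1080)  # full spatial resolution
binned_fps = max_frame_rate(BUDGET, 960, 540)      # 2x2 binned: 4x the frame rate
```

With the assumed budget, full resolution permits about 60 frames per second, while 2x2 binning quadruples the achievable frame rate at the cost of spatial resolution.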
Many solutions have been proposed to allow digital image sequence capture devices to capture image sequences with improved quality and resolution. One method of reducing noise in a digital image sequence is temporal noise reduction; an example of such a technique is given in U.S. Pat. No. 7,330,218. Temporal noise reduction techniques exploit the high temporal correlation between neighboring images to achieve noise reduction. In static scenes, multiple readouts of the same image scene content are available in successive images, allowing effective noise reduction. Disadvantages of temporal noise reduction include the memory required to buffer multiple images and the computation required to filter them, especially if motion estimation and compensation are used to align regions of local or global motion. In addition, temporal noise reduction does not appreciably improve the spatial or temporal resolution of the image sequence.
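The benefit of temporal averaging for a static scene can be sketched numerically. This is an illustrative simulation with assumed noise levels, not a technique claimed by any of the cited patents: averaging N readouts of the same scene content lowers the noise standard deviation by roughly the square root of N.

```python
import numpy as np

# Assumed static scene with additive Gaussian readout noise (sigma = 8).
rng = np.random.default_rng(1)
scene = np.full((64, 64), 100.0)               # noiseless static scene content
frames = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(16)]

single_frame_noise = float(np.std(frames[0] - scene))   # roughly 8
averaged = np.mean(frames, axis=0)                      # temporal average of 16 frames
averaged_noise = float(np.std(averaged - scene))        # roughly 8 / sqrt(16) = 2
```

Note that this simple average assumes a perfectly static scene; as the text observes, real sequences require motion estimation and compensation, plus the memory to buffer all the frames being combined.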
One method of improving temporal resolution is temporal frame interpolation. However, those skilled in the art will appreciate that such techniques are computationally complex, memory intensive, and often produce artifacts in the interpolated frame.
One way to improve the spatial resolution of an image sequence is through super-resolution techniques. Examples of super-resolution algorithms are provided in U.S. Pat. Nos. 7,215,831 and 7,379,612. Video super-resolution techniques use neighboring frames to estimate each high-resolution video frame. Disadvantages of spatial video super-resolution include computational complexity and memory requirements; spatial super-resolution algorithms also have difficulty handling dynamic scenes.
Another way to improve the quality of a sequence of digital images is by using a dual-sensor camera. Such a system is set forth in U.S. Patent Application No. 2008/0211941, entitled "Digital Camera Using Multiple Image Sensors to Provide Improved Temporal Sampling". Improved temporal resolution can be achieved by interleaving the exposures of the dual sensors, and by exposing both sensors equally and then combining the resulting images, improved image quality and noise reduction are possible. Disadvantages of this solution include the cost associated with a dual-sensor camera. In addition, in a dual-lens arrangement, the images captured from the different lens systems must be spatially aligned.
Another method of improving spatial resolution is to capture intermittent high-resolution images along with a sequence of low-resolution images, then process the aggregate data to generate an entire sequence of high-resolution images. Examples of such solutions are U.S. Pat. Nos. 7,110,025 and 7,372,504. Disadvantages of this solution include the need, in some cases, for additional sensors and other hardware to capture the high-resolution images without interrupting the image sequence capture process. Other disadvantages include the need to buffer multiple images, depending on the frequency and use of the high-resolution images in generating the final high-resolution sequence.
Another approach for improving the quality of an image sequence is to use an image sensor with improved light sensitivity. Many image sensors use a combination of red, green, and blue filters arranged in the familiar Bayer pattern, as described in U.S. Pat. No. 3,971,065. As a solution for improving image capture under varying light conditions and for improving the overall sensitivity of the imaging sensor, several modifications of this familiar Bayer pattern have been disclosed. For example, commonly assigned U.S. Patent Application Publication No. 2007/0046807 to Hamilton et al., entitled "Capturing Images Under Varying Lighting Conditions", and U.S. Patent Application Publication No. 2007/0024931 to Compton et al., entitled "Image Sensor with Improved Light Sensitivity", both describe alternative sensor arrangements that interleave color filter elements with panchromatic filter elements. With this type of solution, some portion of the image sensor detects color, while the other, panchromatic portion is optimized to detect light across the visible band for improved dynamic range and sensitivity. This solution thus provides a pattern of pixels, some with color filters (providing a narrow-band spectral response) and some without (unfiltered, or filtered to provide a broad-band spectral response).
Using a combination of narrow- and broad-spectral-band pixel responses, the image sensor can operate at lower light levels or with shorter exposure durations; see Sato et al. in U.S. Pat. No. 4,390,895, Yamagami et al. in U.S. Pat. No. 5,323,233, and Gindele et al. in U.S. Pat. No. 6,476,865. Such sensors can provide improved image quality at low light levels, but additional techniques are needed to generate image sequences with improved spatial and temporal resolution.
In digital imaging, it is also desirable to capture a sequence of images with a high dynamic range. In photography and imaging, where luminance is expressed in candelas per square meter (cd/m²), the dynamic range represents the ratio of two luminance values. The range of luminance that human vision can handle is quite large. The luminance of starlight is about 0.001 cd/m², while the luminance of a sunlit scene is about 100,000 cd/m², one hundred million times higher; the luminance of the sun itself is about 1,000,000,000 cd/m². The human eye can accommodate a dynamic range of approximately 10,000:1 in a single view. The dynamic range of a camera is defined as the ratio of the intensity that just saturates the camera to the intensity that just raises the camera response by one standard deviation above the camera noise. In most commercial sensors today, the maximum signal-to-noise ratio of a pixel is about 100:1, which in turn represents the maximum dynamic range of the pixel.
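The luminance figures above can be restated as a ratio, and equivalently in photographic stops (factors of two). This is a worked restatement of the numbers quoted in the text, not part of the patent's method.

```python
import math

# Luminance figures quoted above, in cd/m^2
starlight = 0.001
sunlit_scene = 100_000

ratio = sunlit_scene / starlight      # 100,000,000:1, as the text states

def dynamic_range_stops(max_level: float, min_level: float) -> float:
    """Dynamic range expressed in photographic stops (factors of two)."""
    return math.log2(max_level / min_level)

stops = dynamic_range_stops(sunlit_scene, starlight)   # about 26.6 stops
```

By the same measure, the eye's 10,000:1 single-view range is about 13.3 stops, and a 100:1 pixel is only about 6.6 stops, which is why a single exposure cannot cover such scenes.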
Because most digital cameras are only capable of capturing a limited dynamic range (the exposure setting determines which part of the overall dynamic range will be captured), high dynamic range images are typically built from several captures of the same scene taken at different exposure levels. For most daytime outdoor scenes excluding the sun, three exposures separated by two exposure values are often sufficient to properly cover the dynamic range. However, this approach requires a scene that does not change between captures in the series.
Jones (U.S. Pat. No. 6,924,841 B2) discloses a method for extending the dynamic range of a sensor by having two sets of pixels with different sensitivities. However, Jones requires that the sensitivity of the first set of pixels overlap with that of the second set so that they share some common dynamic range. This approach is undesirable because it does not provide substantially extended dynamic range for real-world scenes, and it requires a dedicated sensor with pixels of differing sensitivities.
Kindt et al. in U.S. Pat. No. 6,348,681 disclose a method and circuit for setting breakpoints so that a sensor achieves a user-selected piecewise-linear transfer function.
Ando et al. in U.S. Pat. No. 7,446,812 disclose a method of using dual integration periods and readouts within the same frame to increase the dynamic range of the capture. This approach does not utilize every photon that reaches the sensor, because pixels with the shorter integration time capture no photons between their own readout and the readout of the pixels with the longer integration time.
Thus, there is a need for: digital image sequences are generated with improved image quality, spatial resolution and temporal resolution without generating spatial or temporal artifacts and without requiring significant memory costs, computational costs or hardware costs.
There is also a need for: a high dynamic range image is generated from an image sensor without substantially increasing the complexity or composition of individual pixels in the sensor.
Disclosure of Invention
It is an object of the present invention to produce a digital image from pixel signals captured by an image sensor array. This object is achieved by a method for generating a digital image from pixel signals captured by an image sensor array. The method comprises the following steps:
a) providing an image sensor array having at least two groups of pixels, wherein the number of pixels in any one group is no less than one quarter of the number of pixels in the portion of the entire sensor that produces the digital image, and wherein each group of pixels is evenly distributed across the sensor;
b) exposing the image sensor array to scene light and reading pixel charge from only the first set of pixels to produce a first set of pixel signals;
c) after generating the first set of pixel signals, exposing the image sensor array, then reading pixel charges from the second set of pixels and again reading pixels from the first set to generate a second set of pixel signals; and
d) a digital image is generated using the first set of pixel signals and the second set of pixel signals.
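Steps a) through d) can be sketched as a toy simulation. This is an illustrative sketch only, with an assumed 4x4 sensor, an assumed even-row/odd-row split into the two pixel groups, and a simple averaging rule for combining the two readouts of the first group; the patent does not specify these particulars.

```python
import numpy as np

# Assumed layout: group A = even rows, group B = odd rows. Each group holds
# half the pixels, evenly distributed, satisfying the "no less than one
# quarter" bound of step a).
rng = np.random.default_rng(0)
sensor = rng.integers(0, 256, size=(4, 4)).astype(float)  # stand-in for scene charge

# b) first exposure: read pixel charge from group A only
first_a = sensor[0::2, :].copy()

# c) second exposure: read group B, and read group A again
second_b = sensor[1::2, :].copy()
second_a = sensor[0::2, :].copy()

# d) combine both sets of pixel signals into one digital image
image = np.empty_like(sensor)
image[0::2, :] = (first_a + second_a) / 2.0   # group A: average of two readouts
image[1::2, :] = second_b                     # group B: single readout
```

Because no noise is modeled here, the assembled image simply reproduces the scene; in practice the double readout of the first group is what enables the noise and resolution benefits described below.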
One advantage of the present invention is that a sequence of color images can be produced with increased spatial resolution, temporal resolution, and image quality without the need for additional lenses and image sensor arrays.
Another advantage of the present invention is that a sequence of color images with increased spatial resolution, temporal resolution and image quality can be generated without the need for computationally complex and memory intensive algorithms.
Another advantage of the present invention is that various combinations of low-spatial-resolution, high-temporal-resolution color image sequences and high-spatial-resolution, low-temporal-resolution color image sequences can be generated without additional lenses and image sensor arrays.
Another advantage of the present invention is that an extended dynamic range image can be generated without the need for additional lenses and image sensor arrays.
Another advantage of the present invention is that an extended dynamic range image can be generated with reduced buffering and without the need for computationally complex and memory-intensive algorithms.
These and other aspects, objects, features and advantages of the present invention will be more clearly understood and appreciated from a review of the following detailed description of the preferred embodiments and appended claims, and by reference to the accompanying drawings.
Brief description of the drawings
FIG. 1 is a block diagram of a conventional still digital camera system that may use conventional sensors and processing methods or the sensors and processing methods of the present invention;
FIG. 2 (Prior Art) is an array pattern of conventional Bayer color filters showing minimal repeating units and non-minimal repeating units;
FIGS. 3A and 3B (Prior Art) show timing diagrams of rolling shutter operation under various light conditions;
FIG. 4 (prior art) provides representative spectral quantum efficiency curves for red, green, and blue pixels and a broader-spectrum panchromatic quantum efficiency curve, all multiplied by the transmission characteristics of an infrared cut filter;
FIG. 5 is a timing diagram illustrating an embodiment of the present invention;
FIG. 6 is a flow chart illustrating an embodiment of the present invention;
FIG. 7 (Prior Art) is a diagram showing an example color filter array pattern containing panchromatic and color pixels;
FIG. 8 (Prior Art) is a schematic diagram showing how pixels in adjacent rows can be binned together, sharing the same floating diffusion component;
FIG. 9 is a timing diagram illustrating rolling shutter operations for full color and color pixels in one embodiment of the invention;
FIG. 10 is a diagram showing the readout from panchromatic pixels and the readout from binned panchromatic pixels and binned color pixels producing a digital image at approximately the spatial resolution of the sensor;
FIG. 11 is a diagram showing the readout from panchromatic pixels and the readout from binned panchromatic pixels and binned color pixels producing a digital image at approximately one-half the horizontal and one-half the vertical spatial resolution of the sensor;
FIG. 12 is a diagram showing the readout from panchromatic pixels and the readout from binned panchromatic pixels and binned color pixels producing both a digital image at approximately one-half the horizontal and one-half the vertical spatial resolution of the sensor and a digital image at approximately the spatial resolution of the sensor;
FIG. 13 is a flowchart showing the generation of a residual image;
FIG. 14 is a diagram showing the readout from the binned panchromatic pixels and the readout from the binned panchromatic and color pixels producing a digital image at approximately half the horizontal spatial resolution and half the vertical spatial resolution of the sensor;
FIG. 15 is a diagram showing an example color filter array pattern containing panchromatic pixels;
FIG. 16 is a diagram showing the generation of a digital image from the readout of panchromatic pixels;
FIG. 17 is a diagram illustrating an example filter array pattern containing panchromatic pixels;
FIG. 18 is a diagram illustrating the generation of a digital image of panchromatic pixels in an extended dynamic range embodiment of the present invention;
FIG. 19 is a diagram showing an example filter array pattern containing panchromatic pixels and color pixels; and
FIG. 20 is a timing diagram illustrating a rolling shutter operation for panchromatic pixels in an embodiment of the present invention.
Detailed description of the preferred embodiments
Since digital cameras using imaging devices and related circuitry for signal capture and correction and for exposure control are well known, the present description will be directed in particular to elements forming part of, or cooperating more directly with, the method and apparatus in accordance with the present invention. Elements not specifically shown or described herein are selected from elements known in the art. Certain aspects of the embodiments to be described are provided in software. Given the system as shown and described in accordance with the present invention in the following materials, software not specifically shown, described, or suggested herein as being useful for embodiments of the present invention is conventional and within the ordinary skill of the art.
Turning now to FIG. 1, there is shown a block diagram of an image capture device shown as a digital camera embodying the invention. Although a digital camera will now be explained, the invention is clearly applicable to other types of image capture devices, such as imaging subsystems included in non-camera devices, such as mobile phones and automobiles. Light 10 from a subject scene is input to an imaging stage 11 where it is focused by a lens 12 to form an image on a solid-state image sensor 20. The image sensor 20 converts incident light into an electric signal by integrating charges for each picture element (pixel). The image sensor 20 of the preferred embodiment is of the Charge Coupled Device (CCD) type or the Active Pixel Sensor (APS) type. (APS devices are often referred to as CMOS sensors because of their ability to be fabricated in a Complementary Metal Oxide Semiconductor (CMOS) process). The sensor includes a color filter arrangement as described in more detail later.
The amount of light reaching the sensor 20 is adjusted by the iris block 14, which changes the aperture, and the Neutral Density (ND) filter block 13, which includes one or more ND filters inserted in the optical path. The time that the shutter block 18 is open also adjusts the overall light level. The exposure controller block 40 responds to the amount of light available in the scene as metered by the brightness sensor block 16 and controls all three of these adjustment functions.
Analog signals from the image sensor 20 are processed by an analog signal processor 22 and applied to an analog-to-digital (A/D) converter 24 to digitize the sensor signals. The timing generator 26 generates various clock signals to select rows and pixels and to synchronize the operation of the analog signal processor 22 and the A/D converter 24. The image sensor stage 28 includes the image sensor 20, the analog signal processor 22, the A/D converter 24, and the timing generator 26. The functional elements of the image sensor stage 28 may be separately fabricated integrated circuits, or they may be fabricated as a single integrated circuit, as is commonly done with CMOS image sensors. The stream of digital pixel values from the A/D converter 24 is stored in a memory 32 associated with a digital signal processor (DSP) 36.
The digital signal processor 36 is one of three processors or controllers in this embodiment, the others being the system controller 50 and the exposure controller 40. Although this distribution of camera-function control among multiple controllers and processors is typical, these controllers or processors can be combined in various ways without affecting the functional operation of the camera or the application of the present invention. They can include one or more digital signal processor devices, microcontrollers, programmable logic devices, or other digital logic circuits. Although combinations of such controllers or processors have been described, a single controller or processor can be designated to perform all of the required functions. All of these variations can perform the same function and fall within the scope of the invention, and the term "processing stage" will be used as needed to encompass all of this functionality within one phrase, as in processing stage 38 in FIG. 1.
In the depicted embodiment, the DSP36 manipulates the digital image data in its memory 32 in accordance with a software program that is permanently stored in the program memory 54 and that is copied to memory 32 for execution during image capture. The DSP36 executes the software necessary to practice the image processing shown in fig. 18. The memory 32 comprises any type of random access memory, such as SDRAM. The bus 30, which includes a path for address and data signals, connects the DSP36 to its associated memory 32, a/D converter 24, and other associated devices.
The system controller 50 controls the overall operation of the camera based on a software program stored in a program memory 54, which program memory 54 may comprise a flash EEPROM or other non-volatile memory. This memory may also be used to store image sensor calibration data, user setting selections, and other data that must be saved when the camera is turned off. The system controller 50 controls the image capture sequence by directing the exposure controller 40 to operate the lens 12, ND filter 13, iris 14, and shutter 18 as previously described, directing the timing generator 26 to operate the image sensor 20 and associated elements, and directing the DSP36 to process the captured image data. After the image is captured and processed, the final image file stored in memory 32 is transferred to the host computer via interface 57, stored on removable memory card 64 or other storage device, and displayed to the user on image display 88.
The bus 52 contains paths for address, data, and control signals and connects the system controller 50 to the DSP36, program memory 54, system memory 56, host interface 57, memory card interface 60, and other related devices. The host interface 57 provides a high speed connection to a Personal Computer (PC) or other host computer to transfer image data for display, storage, manipulation, or printing. This interface is an IEEE1394 or USB2.0 serial interface or any other suitable digital interface. The memory card 64 is typically a Compact Flash (CF) memory card that is inserted into the socket 62 and connected to the system controller 50 via the memory card interface 60. Other types of storage utilized include, but are not limited to, PC cards, multimedia cards (MMC) or Secure Digital (SD) cards.
The processed images are copied to a display buffer in system memory 56 and read out continuously via video encoder 80 to generate a video signal. This signal is output directly from the camera for display on an external monitor or processed by the display controller 82 and presented on the image display 88. The display is typically an active matrix color Liquid Crystal Display (LCD), although other types of displays may be used.
The user interface 68, which contains all or any combination of the viewfinder display 70, exposure display 72, status display 76, and image display 88, as well as user inputs 74, is controlled by a combination of software programs executing on the exposure controller 40 and the system controller 50. User input 74 typically comprises some combination of buttons, rocker switches, joysticks, rotary dials, or touch screens. The exposure controller 40 operates light metering, exposure mode, autofocus, and other exposure functions. The system controller 50 manages a graphical user interface (GUI) presented on one or more of these displays (e.g., image display 88). The GUI typically includes menus for making various camera selections and review modes for examining captured images.
The exposure controller 40 accepts user input selecting an exposure mode, lens aperture, exposure time (shutter speed), and exposure index or ISO photospeed, and directs the lens and shutter for subsequent capture accordingly. The brightness sensor 16 is used to measure the brightness of the scene and provide an exposure metering function for the user to refer to when manually setting the ISO photospeed, aperture, and shutter speed. In this case, as the user changes one or more settings, a light metering indicator presented on the viewfinder display 70 tells the user to what extent the image will be over-or under-exposed. In the automatic exposure mode, the user changes one setting and the exposure controller 40 automatically changes another setting to maintain the correct exposure, e.g., for a given ISO photospeed, as the user decreases the lens aperture, the exposure controller 40 automatically increases the exposure time to maintain the same overall exposure.
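The reciprocity the exposure controller maintains in automatic mode can be sketched as follows. This is an illustrative calculation, not the controller's actual implementation: exposure scales with the square of the f-number, so stopping the lens down requires a proportionally longer exposure time to hold the same overall exposure.

```python
# Illustrative sketch of exposure reciprocity (values assumed).

def exposure_time_for(new_f: float, metered_f: float, metered_time: float) -> float:
    """Exposure time that keeps overall exposure equal to the metered setting."""
    return metered_time * (new_f / metered_f) ** 2

# Metered at 1/60 s, f/4. Stopping down two stops to f/8 quadruples the time.
t = exposure_time_for(8.0, 4.0, 1 / 60)   # 1/15 s
```

This mirrors the example in the text: when the user decreases the lens aperture at a fixed ISO photospeed, the controller automatically increases the exposure time to maintain the same overall exposure.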
The ISO photospeed is an important attribute of a digital still camera. The exposure time, the lens aperture, the lens transmittance, the level and spectral distribution of the scene illumination, and the scene reflectance determine the exposure level of a digital still camera. When an image from a digital still camera is obtained using an insufficient exposure, proper tone reproduction can generally be maintained by increasing the electronic or digital gain, but the image will contain an unacceptable amount of noise. As the exposure is increased, the gain is decreased, and therefore the image noise can normally be reduced to an acceptable level. If the exposure is increased excessively, the resulting signal in bright areas of the image can exceed the maximum signal level capacity of the image sensor or camera signal processing. This can cause the brightest portions of the image to be clipped to a uniformly bright value, or to bloom into surrounding areas of the image. It is important to guide the user in setting a proper exposure, and the ISO photospeed is intended to serve as such a guide. To be easily understood by photographers, the ISO photospeed of a digital still camera should relate directly to the ISO photospeed of a film camera. For example, if a digital still camera has an ISO photospeed of ISO 200, then the same exposure time and aperture should be appropriate for an ISO 200 rated film/process system.
It is desirable for the digital camera's ISO photospeed to harmonize with film ISO photospeed. However, there are differences between electronic and film-based imaging systems that preclude exact equivalence. Digital still cameras can include variable gain and can provide digital processing after the image data has been captured, enabling tone reproduction over a range of camera exposures. Because of this flexibility, digital still cameras can have a range of photospeeds. This range is defined as the ISO speed latitude. To prevent confusion, a single value is designated as the inherent ISO photospeed, with the upper and lower limits of the ISO speed latitude indicating the speed range, that is, a range including effective photospeeds that differ from the inherent ISO photospeed. With this in mind, the inherent ISO speed is a numerical value calculated from the exposure provided at the focal plane of a digital still camera to produce specified camera output signal characteristics. The inherent speed is usually the exposure index value that produces peak image quality for a given camera system for normal scenes, where the exposure index is a numerical value inversely proportional to the exposure provided to the image sensor.
Those skilled in the art will be familiar with the above description of digital cameras. It is apparent that there are many variations of this embodiment that can be chosen to reduce cost, add features, or improve camera performance. For example, an auto-focus system is added, or the lens is removable and interchangeable. It should be understood that the present invention is applicable to any type of digital camera or more general digital image capture device in which the replacement module provides similar functionality.
Given the illustrative example of FIG. 1, the following description will describe in detail the operation of this camera for capturing image sequences according to the present invention. Whenever an image sensor is referred to generally in the following description, it should be understood to represent the image sensor 20 of FIG. 1. The image sensor 20 shown in FIG. 1 typically includes a two-dimensional array of light-sensitive pixels fabricated on a silicon substrate, which convert the incoming light at each pixel into a measured electrical signal. In the context of an image sensor, a pixel (a contraction of "picture element") refers to a discrete light-sensing area and the charge-shifting or charge-measurement circuitry associated with that area. In the context of a digital color image, the term pixel commonly refers to a particular location in the image having an associated color value. The term color pixel will refer to a pixel having a color photoresponse over a relatively narrow spectral band. The terms exposure duration and exposure time are used interchangeably.
When the sensor 20 is exposed to light, free electrons are generated and captured within the electronic structure at each pixel. Capturing these free electrons for some period of time and then measuring the number of electrons captured, or measuring the rate at which free electrons are generated, can measure the light level at each pixel. In the former case, the accumulated charge is shifted out of the pixel array to a charge-to-voltage measurement circuit, as in a charge-coupled device (CCD), or the area near each pixel can contain elements of a charge-to-voltage measurement circuit, as in an active pixel sensor (APS or CMOS sensor).
To produce color images, the pixel array in an image sensor typically has a color filter pattern disposed over it. FIG. 2 shows a pattern of the commonly used red (R), green (G), and blue (B) filters. This particular pattern is commonly known as the Bayer color filter array (CFA), after its inventor Bryce Bayer, as disclosed in U.S. Pat. No. 3,971,065. This pattern is used effectively in image sensors having a two-dimensional array of color pixels. As a result, each pixel has a particular color photoresponse, in this case a predominant sensitivity to red, green, or blue light. Another useful variety of color photoresponses is a predominant sensitivity to magenta, yellow, or cyan light. In each case, the particular color photoresponse has high sensitivity to certain portions of the visible spectrum and low sensitivity to other portions.
The minimum repeating unit is a repeating unit such that no other repeating unit has fewer pixels. For example, the CFA in fig. 2 includes a minimum repeating unit of two pixels by two pixels as shown by pixel block 100 in fig. 2, which can be expressed as:
G R
B G
Multiple copies of this minimal repeating unit are tiled to cover the entire pixel array in the image sensor. The minimal repeating unit is shown with a green pixel in the upper left corner, but three alternative minimal repeating units can easily be discerned by shifting the outlined region one pixel to the right, one pixel down, or one pixel diagonally down and to the right. Although the pixel block 102 is a repeating unit, it is not a minimal repeating unit because the pixel block 100 is a repeating unit and block 100 has fewer pixels than block 102.
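The tiling of a minimal repeating unit can be sketched numerically. The following is an illustrative aid only, not part of the claimed method; the array dimensions and the function name `tile_cfa` are assumptions made for the example.

```python
import numpy as np

# The 2x2 Bayer minimal repeating unit (pixel block 100)
BAYER_UNIT = np.array([["G", "R"],
                       ["B", "G"]])

def tile_cfa(unit, rows, cols):
    """Tile a minimal repeating unit to cover a rows x cols pixel array."""
    reps_r = -(-rows // unit.shape[0])   # ceiling division
    reps_c = -(-cols // unit.shape[1])
    return np.tile(unit, (reps_r, reps_c))[:rows, :cols]

cfa = tile_cfa(BAYER_UNIT, 4, 6)         # a small 4 x 6 sensor for illustration
```

Shifting the window one pixel right, down, or diagonally over `cfa` recovers the three alternative minimal repeating units mentioned above.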
An image captured using an image sensor having a two-dimensional array including the CFA of fig. 2 has only one color value at each pixel. To produce a full color image, there are many techniques for inferring or interpolating missing colors at each pixel. Such CFA interpolation techniques are well known in the art and reference is made to the following patents: U.S. Pat. No.5,506,619, U.S. Pat. No.5,629,734, and U.S. Pat. No.5,652,621.
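As an illustrative aid only (not any of the patented interpolation methods cited above), a minimal bilinear sketch of CFA interpolation for the green channel might look like the following; the function name and the simple edge handling are assumptions.

```python
import numpy as np

def interpolate_green(mosaic, cfa):
    """Bilinear estimate of the green channel from a Bayer mosaic: at each
    non-green site, average the in-bounds green neighbors."""
    rows, cols = mosaic.shape
    out = np.where(cfa == "G", mosaic, 0.0).astype(float)
    for r in range(rows):
        for c in range(cols):
            if cfa[r, c] != "G":
                vals = [mosaic[rr, cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < rows and 0 <= cc < cols and cfa[rr, cc] == "G"]
                out[r, c] = sum(vals) / len(vals)
    return out

cfa = np.tile(np.array([["G", "R"], ["B", "G"]]), (2, 2))
mosaic = np.where(cfa == "G", 100.0, 0.0)    # flat scene: green sites read 100
green_full = interpolate_green(mosaic, cfa)  # green estimated at every pixel
```

For a flat scene the interpolated green plane is constant; real demosaicing methods such as those in the patents cited above use edge-adaptive weighting rather than a plain average.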
Each pixel of the image sensor 20 has both a photodetector and an active transistor circuit for reading out a pixel signal. The photo-detectors of each pixel in the image sensor array convert photons impinging on the pixel into an electrical charge by the photoelectric effect. The charge is integrated during a time period that is long enough to collect a detectable amount of charge, but short enough to avoid saturating the storage element. This integration time period is similar to the film exposure time (i.e., shutter speed).
The timing of image capture can follow one of two basic patterns. In a global capture sequence, all image pixels are simply read at the same time. However, this type of sequence requires considerable device complexity and can be disadvantageous because it constrains the amount of space on the sensor chip available for photon reception. Instead, a line-by-line reading method has been adopted and is often the preferred mode for reading out CMOS APS pixels.
In the image sensor array of a CMOS APS device, the integration time is the time between a reset of a given row and the subsequent read of that row. Because only one row can be selected at a time, the reset/read routine is sequential (i.e., row by row). This reading technique is referred to as a "rolling electronic shutter" or, more simply, "rolling shutter" mode and is well known in the imaging art. A few examples of variations of rolling shutter time sequences are given in U.S. Pat. No.6,115,065 entitled "Image Sensor Producing at Least Two Integration Times from Each Sampling Period" to Yadid-Pecht et al. and U.S. Pat. No.6,809,766 entitled "Look-Ahead Rolling Shutter System in CMOS Sensors" to Krymski et al. The shutter width of the read sequence is the time between integration enable and readout. It can be of variable size depending on the number of adjacent rows having the same integration time. The shutter width can also be adjusted, by a fixed value of one or more rows at a time, to control the gain of an exposed area of the sensor array. In the rolling shutter sequence, the reset pointer is indexed ahead of the read pointer by an amount equal to the shutter width. The time difference between the two pointers corresponds to the pixel integration time. As described above, the shutter width is quite similar to the width of the physical opening between the two curtains of a mechanical focal plane shutter. In the following, the term exposure duration will be used for the corresponding integration time.
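The pointer arithmetic described above can be sketched numerically. The function and parameter names below are assumptions made for illustration, not part of any claimed apparatus.

```python
def row_schedule(n_rows, line_time_us, shutter_width_rows, t0_us=0.0):
    """Return (reset_time, read_time, integration_time) in microseconds
    for each row of a rolling-shutter readout."""
    schedule = []
    for row in range(n_rows):
        read = t0_us + row * line_time_us                  # read pointer advances row by row
        reset = read - shutter_width_rows * line_time_us   # reset pointer leads it
        schedule.append((reset, read, read - reset))
    return schedule

# 4 rows, 10 us per line, reset pointer 3 rows ahead of the read pointer:
sched = row_schedule(n_rows=4, line_time_us=10.0, shutter_width_rows=3)
# every row integrates for 30 us, offset 10 us from the previous row
```

This reproduces the two properties noted for fig. 3A: the integration time is the same for every row, while each row's integration window is shifted in time relative to its neighbors.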
Fig. 3A shows the timing sequence for the rolling shutter mode as conventionally used under conditions of relatively good illumination. The abscissa (x-axis) represents time. The ordinate (y-axis) represents the rows of the image sensor being read out. Each solid slanted line 302 represents a sequential readout of all rows of the image sensor, starting from the highest-numbered row and proceeding to the lowest-numbered row. (Alternatively, the readout lines could be drawn slanting upward from left to right to indicate readout from the lowest-numbered row to the highest-numbered row.) Each dashed line 300 represents a sequential reset of all rows of the image sensor, again starting from the highest-numbered row and proceeding to the lowest-numbered row, with the entire reset procedure taking exactly as long as the readout procedure. The delay between a reset procedure 300 and its immediately following readout procedure 302 is the integration time of the pixels 320, as indicated by the solid arrows. Note that the integration time is constant for every row read out, but the integration period of each row is shifted in time relative to the preceding and following rows.
As can be seen from the timing diagram of fig. 3A, this simple rolling shutter sequence allows periods during which photons go uncollected, particularly between a readout 302 and the reset 300 that immediately follows it. While this is acceptable under good illumination, the arrangement may not perform well under low-light conditions, because longer pixel integration times are needed as light intensity decreases. The timing diagram of fig. 3B shows a timing sequence for low-light conditions, in which each reset 300 is performed immediately after, or concurrently with, a read 302. The pixel integration time 320 has thus been increased to fill the time between successive reads, so that very few photons are wasted.
However, even with rolling shutter techniques, the task of reading the image sensor efficiently has its drawbacks. Shear motion artifacts are one class of problem. Relative motion between the scene (or elements of the scene) and the image sensor causes objects in the scene to appear distorted in the image captured by the image sensor. This effect, referred to as image "shear", is characteristic of rolling shutter arrangements. For example, if such a so-called rolling shutter or electronic focal plane shutter image sensor is used to capture an image of a car moving horizontally, the car moves relative to the image sensor as the rows of the captured image are exposed and read out, so each row of the captured image shows the car at a different position. This can cause the round tires of the car to appear oval and its rectangular windows to appear as parallelograms. This distortion is a direct consequence of the amount of time required to read out all the rows of the image sensor. In addition, low-light performance can still stand improvement, and the image dynamic range can still be smaller than desired.
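The magnitude of rolling shutter shear can be estimated from the row readout time and the object's image-plane velocity. The following sketch uses assumed illustrative values; the function name is not from the patent.

```python
def shear_offset_px(row_index, line_time_s, velocity_px_per_s):
    """Horizontal displacement (in pixels) accumulated by the time
    row_index is read out, relative to row 0."""
    return velocity_px_per_s * row_index * line_time_s

# e.g. an object moving at 1000 px/s, a 20 us line time, a 1000-row frame:
total_shear = shear_offset_px(999, 20e-6, 1000.0)   # about 20 px of tilt
```

A vertical edge on the moving car would thus be rendered as a line tilted by roughly 20 pixels from top to bottom of the frame, which is why tires appear oval and windows appear as parallelograms.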
One type of solution that has been proposed is to use some portion of the sensor array pixels as panchromatic pixels. For example, commonly assigned U.S. patent application No.2007/0024931, entitled "Image Sensor with Improved Light Sensitivity", to Compton et al, discloses an Image Sensor having both color and panchromatic pixels. In the context of the present invention, the term panchromatic pixel refers to a pixel having a substantially panchromatic light response, the panchromatic pixel having a spectral sensitivity that is broader than the narrower spectral sensitivities represented in the color light responses of the selected set. That is, a panchromatic pixel may have a high degree of sensitivity to light across the entire visible spectrum. While these panchromatic pixels typically have a broader spectral sensitivity than the aggregate color photoresponse, each panchromatic pixel may also have an associated filter. Such filters may be neutral density filters or color or bandwidth filters.
Referring to the graph of fig. 4, the relative spectral sensitivities of pixels having red, green, and blue filters in a typical camera application are shown. The x-axis in fig. 4 represents the wavelength of light in nanometers, spanning approximately from the near ultraviolet to the near infrared, and the y-axis represents efficiency (normalized). In fig. 4, curve 110 represents the spectral transmission characteristic of a typical bandwidth filter used to block infrared and ultraviolet light from reaching the image sensor. Such a filter is needed because the color filters used for image sensors typically do not block infrared light, so the pixels cannot distinguish infrared light from light within the passbands of their associated color filters. The infrared blocking characteristic shown by curve 110 thus prevents infrared light from corrupting the visible light signal. The spectral quantum efficiency (i.e., the proportion of incident photons that are captured and converted into a measurable electrical signal) of a typical silicon sensor employing red, green, and blue filters is multiplied by the spectral transmission characteristic of the infrared blocking filter represented by curve 110 to produce the combined system quantum efficiencies represented by curve 114 for red, curve 116 for green, and curve 118 for blue. It should be understood from this graph that each color photoresponse is sensitive to only a portion of the visible spectrum. In contrast, curve 112 shows the photoresponse of the same silicon sensor with no color filter applied (but including the infrared blocking filter characteristic), which is an example of a panchromatic photoresponse. By comparing the color photoresponse curves 114, 116, and 118 with the panchromatic photoresponse curve 112, it is apparent that the panchromatic photoresponse can be three to four times more sensitive to wide-spectrum light than any of the color photoresponses.
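The relationship between the curves of fig. 4 can be illustrated numerically. The following sketch uses toy Gaussian spectral curves (assumed shapes, not the measured data of fig. 4) to show that the system quantum efficiency is the product of the sensor quantum efficiency and the infrared blocking filter transmission, and that a broad panchromatic response integrates several times more of the spectrum than a single color response.

```python
import numpy as np

wl = np.linspace(400, 700, 301)              # wavelength grid, nm
ir_cut = np.where(wl < 680, 1.0, 0.0)        # idealized stand-in for curve 110

def gaussian_band(center_nm, width_nm):
    """Toy color quantum-efficiency curve (assumed shape)."""
    return np.exp(-0.5 * ((wl - center_nm) / width_nm) ** 2)

qe_red = gaussian_band(610, 30)              # stand-ins for curves 114/116/118
qe_green = gaussian_band(540, 30)
qe_blue = gaussian_band(460, 30)
qe_pan = np.full_like(wl, 0.9)               # broad response, stand-in for curve 112

# system QE = sensor QE x IR-cut transmission (per the text above)
sys_red = qe_red * ir_cut
sys_pan = qe_pan * ir_cut

# integrated broad-spectrum sensitivity ratio, panchromatic vs. red
ratio = float(sys_pan.sum() / sys_red.sum())
```

With these toy curves the ratio lands in the "three to four times" range quoted above; the exact value depends entirely on the assumed curve shapes.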
In the rolling shutter mode, the image sequence is typically read as shown in fig. 3A and 3B. The entire image sensor is read and this constitutes one image in the image sequence. Subsequently, the entire image sensor is read again and this constitutes the next image in the image sequence. Alternatively, as described in U.S. patent application Ser. No.11/780,523, the image sensors are divided into separate subsets and the subsets are read in some relative order.
FIG. 5 shows a timing sequence for a rolling shutter mode of a preferred embodiment of the present invention, wherein the image sensor array has at least two groups of pixels, wherein the number of pixels in any one group is no less than one quarter of the number of pixels in the portion of the entire sensor that produces the digital image, and wherein the groups of pixels are evenly distributed across the sensor. The rolling shutter read time given at 404 represents a readout of the pixel signals 422 for all pixels of the image sensor, in which the first group of pixels (X1) and the second group of pixels (X2) can be read out individually or binned. After being read, the pixels may be reset at 404. The X1 pixels are read out and reset according to the rolling shutter read time given at 402. Line 410 shows the overall exposure of each X1 pixel, representing the time between resetting and reading the X1 pixels for each row of the image sensor. The pixel signals 420 represent the readout of all of the X1 pixels. The read pattern repeats at this stage, as the entire image sensor is then read out according to the rolling shutter read time given at 404. The X1 pixels are thus read out multiple times. Some of the pixels read at 404 have a shorter exposure, given by line 412, while other pixels have a longer exposure, given by line 414. In this way, every photon that reaches the sensor can be utilized.
Those skilled in the art will appreciate that there are many alternatives within the present invention. The pixels may be binned or read out individually. More than two groups of pixels may be used, with multiple exposure times. There may be a delay between the readout and reset procedures for any group of pixels. Some of these alternatives are described in more detail in the preferred embodiments below.
FIG. 6 is a high-level flow chart of a preferred embodiment of the present invention. Initially, a digital camera such as that depicted in fig. 1 initiates an image capture sequence 502 by exposing the image sensor array 20 to scene light. The image sensor stage 28 in fig. 1 controls the process of selecting and reading pixels. Pixel signals are read from the sensor a first time 504 by the analog signal processor 22, generating a first set of pixel signals 510. The first set of pixel signals 510 can come from a first group of pixels, or from both the first group of pixels and a second group of pixels. If the first read is of the first group of pixels, then all of the first group of pixels are read from the sensor 506. If the first read is of the first group and the second group of pixels, then all of the second group of pixels are read and the first group of pixels are read from the image sensor 508. Exposure of the image sensor continues until the capture process is terminated. For subsequent readouts from the sensor, the readout process continues in the same manner as the first read. For example, pixel signals are read from the sensor a second time, generating a second set of pixel signals 510. The second set of pixel signals 510 can come from the first group of pixels or from both the first group and the second group of pixels. If the second read is from the first group of pixels, then all of the first group of pixels are read again from the sensor 506. If the second read is from both the first group and the second group of pixels, then all of the second group of pixels are read and the first group of pixels are read again from the image sensor 508.
After the pixel signals 510 are read from the sensor, an image processing step 512 operates on the available pixel signals to produce a digital image 514. The digital signal processor 36 in fig. 1 can perform this processing. The image processing step 512 can utilize pixel signals from the current readout as well as data buffered in memory 32 from previous sensor readouts. The generated digital image 514 can correspond to the current sensor readout, a previous sensor readout, or any combination of current and previous sensor readouts. The digital image 514 can have higher spatial resolution, improved image quality, improved color information, or other enhancements relative to the pixel signals obtained from a given sensor readout. The digital image sequence can also have a temporal resolution higher than what is achievable by simply reading the sensor at the target spatial resolution. The digital image 514 is passed to a digital image utilization function 516. This function can represent an encoding process, by which the digital image 514 is encoded into a video bitstream. It can represent a display function, by which the digital image 514 is produced on a display. It can represent a print function, by which the digital image 514 is printed. It can also represent a sharing function, by which the digital image 514 is shared with other devices. These are examples of how the digital image 514 can be utilized, and are not limiting.
If the image capture sequence is complete 518, the capture process 520 terminates. Otherwise, the capture process evaluates whether the next read from the sensor 522 is the first set of pixels or the first and second sets of pixels, and repeats the readout and iterative processing loop until the capture process is completed.
The proposed invention can be used with image sensors having any color filter array pattern. The proposed invention can also be used with image sensors that use only panchromatic pixels. However, in a preferred embodiment, the image sensor has both panchromatic and color pixels. Panchromatic pixels are a first set of pixels and color pixels are a second set of pixels. In the proposed invention, a sequence of images is captured by alternately reading all panchromatic pixels, reading color pixels, and reading the panchromatic pixels again.
The method of the present invention is described with respect to the color filter array pattern shown in fig. 7. FIG. 7 shows an exemplary color filter array pattern 602 according to the preferred embodiment of the invention. In this example, about half of the pixels are full color 604, and the other half are color pixels separated among red (R)606, green (G)608, and blue (B) 610. The color filter array pattern 602 has a minimal repeating unit comprising 16 pixels in the following 4 by 4 array:
P G P R
G P R P
P B P G
B P G P
those skilled in the art will appreciate that other color filter array configurations and minimal repeating units are possible within the scope of the present invention.
Several pixels can be combined, and the combined pixels read together. In a preferred embodiment, this combination is accomplished by binning the pixels. As shown in fig. 8, various pixel binning schemes can be used during readout of the image sensor. Two partial rows 701, 702 of the image sensor are shown in fig. 8. In this example, the underlying readout circuitry of the sensor array uses a floating diffusion 704 that is switchably connected to one or more surrounding pixels at a time. The implementation and use of floating diffusions is well known to those skilled in the digital image acquisition art. Fig. 8 shows a conventional arrangement in which each floating diffusion 704 serves four surrounding pixels, shown in one example as a four-pixel group 706.
The pixel signals can be switched to the floating diffusion 704 in any of a number of combinations. In readout combination 708, each pixel in the four-pixel group 706 has its charge transferred to the floating diffusion 704 separately and is thus read individually. In readout combination 710, the two panchromatic pixels (P) in the four-pixel group are binned, i.e., they share the floating diffusion 704 by simultaneously emptying their stored charge onto it; similarly, the two color (G) pixels in the group are binned by switching their signals to the floating diffusion 704 at the same time. In this binning scheme, panchromatic pixels are combined only with other panchromatic pixels, and color pixels are combined only with other color pixels. In another readout combination 712, the panchromatic pixels (P) are not binned but read individually, while the color pixels (G) are binned. In a further readout combination 714, all four pixels connected to a given floating diffusion are binned simultaneously. In this binning scheme, panchromatic pixels are combined with both color pixels and other panchromatic pixels, and color pixels are combined with both panchromatic pixels and other color pixels.
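The four readout combinations can be modeled as different ways of summing charge onto a shared floating diffusion. The charge values and function names below are assumed for illustration only.

```python
# Example charges (in electrons) for one four-pixel group 706:
group = {"P1": 100, "G1": 40, "P2": 110, "G2": 50}

def read_individual(g):
    """Combination 708: each charge transferred and read separately."""
    return [g["P1"], g["G1"], g["P2"], g["G2"]]

def read_binned_by_type(g):
    """Combination 710: like pixels binned -- their charges sum on the diffusion."""
    return [g["P1"] + g["P2"], g["G1"] + g["G2"]]

def read_pan_individual_color_binned(g):
    """Combination 712: panchromatic pixels read singly, color pixels binned."""
    return [g["P1"], g["P2"], g["G1"] + g["G2"]]

def read_all_binned(g):
    """Combination 714: all four pixels binned into one value."""
    return [sum(g.values())]
```

Note how each scheme trades spatial resolution (four values, three values, two values, one value) against signal level per read.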
Fig. 9 shows a readout pattern of an image sensor according to a preferred embodiment of the present invention. This figure is based on an image sensor with the color filter array pattern shown in fig. 7, read out with a rolling shutter. Several successive readouts of pixel signals are shown in fig. 9. This set of pixel signals represents a portion of the entire image capture process; the entire process may contain additional readouts of pixel signals extending in either direction along the time axis. Readout 804 represents a readout of all pixels of the image sensor, corresponding to a combination of color pixels and panchromatic pixels. For each four-pixel group connected to a floating diffusion, the two color pixels are binned and read as a single value, and the two panchromatic pixels are likewise binned and read as a single value. Together these give pixel signals 810 corresponding to a binned-panchromatic/binned-color readout. After the pixels are read at 804, they may be reset. Each floating diffusion is accessed twice during this sensor readout.
The color pixels are also reset according to the rolling shutter reset time given at 806. The panchromatic pixels are read out and reset according to the rolling shutter time given at 802. Line 814 shows the overall exposure of each panchromatic pixel, representing the time between reset and read for each row of panchromatic pixels of the image sensor. The panchromatic pixels are read out without any binning to produce pixel signals 812, such that each floating diffusion is accessed twice during the overall readout of the panchromatic pixels. Thus, the readout of the panchromatic pixels 802 and the readout of the combination of panchromatic and color pixels 804 have the same readout rate and the same motion shear properties. This design is advantageous not only with respect to motion shear, but also for minimizing unused light while maintaining equal exposure durations for all panchromatic pixels and for all color pixels.
The panchromatic pixels are reset again according to the rolling shutter reset time given by 808. The read pattern repeats at this stage because the entire image sensor is then read out according to the rolling shutter read time given at 804. The panchromatic pixel readouts at 804 have a shorter exposure given by 816 while the color pixels have a longer exposure given by 818. U.S. patent application No.12/258,389, filed on 25/10/2008, which is incorporated herein by reference, describes a technique for combining panchromatic pixels having a relatively short exposure with color pixels having a relatively long exposure.
In fig. 9, the panchromatic pixels are exposed for a different duration prior to the unbinned readout than prior to the binned readout. Specifically, before being read out binned, the panchromatic pixels are exposed for about half the duration for which they are exposed before being read out unbinned. This is an advantageous feature of the present invention, because the panchromatic pixel signals are exposure-balanced: the charge read from a floating diffusion corresponding to panchromatic pixels is approximately equal whether the pixels are binned with the shorter exposure duration or read unbinned with the longer exposure duration.
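The exposure-balance property is simple arithmetic: two binned pixels at half the exposure deposit about the same total charge on the floating diffusion as one unbinned pixel at the full exposure. The photon flux and exposure times below are illustrative assumptions, not values from the patent.

```python
# Assumed illustrative values
flux_e_per_s = 2000.0         # photoelectrons per second per pixel
t_full = 1.0 / 30.0           # unbinned panchromatic exposure (s)
t_half = t_full / 2.0         # exposure before the binned readout

charge_unbinned = 1 * flux_e_per_s * t_full   # one pixel, full exposure
charge_binned = 2 * flux_e_per_s * t_half     # two pixels, half exposure
# the floating diffusion sees the same signal in both cases
```

This is why the binned and unbinned panchromatic readouts in fig. 9 can share a common signal range despite their different exposure durations.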
In fig. 9, the exposure of a panchromatic pixel prior to readout of the panchromatic pixel overlaps partially with the exposure of a color pixel prior to subsequent readout of the color pixel.
Fig. 10 describes in more detail one method of image processing 512 after each readout of pixel signals is completed. The pixel signals 810, a combination of binned color pixels and binned panchromatic pixels, can be color interpolated from this initial color filter array data to produce a quarter-resolution color image 902. The panchromatic pixel signals 812 can be spatially interpolated to produce a sensor-resolution panchromatic image 904. A digital image 906 at the full spatial resolution of the image sensor is generated corresponding to each readout. Corresponding to a readout of panchromatic pixel signals 812, a sensor-resolution digital image 906 is generated using data from the sensor-resolution panchromatic image 904 and from the quarter-resolution color images 902 of the previous and subsequent readouts of the binned panchromatic and color pixel signals 810. Corresponding to a readout of the combined panchromatic and color pixel signals 810, a sensor-resolution digital image 906 is generated using data from the quarter-resolution color image 902 and from the sensor-resolution panchromatic images 904 of the previous and subsequent readouts of panchromatic pixel signals 812. This scheme requires some buffering: three readouts are used in the formation of each digital image 906.
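One plausible way to combine a quarter-resolution color image with a sensor-resolution panchromatic image is a pan-sharpening step, sketched below. This is an assumed stand-in for the processing of fig. 10, not the patent's exact algorithm; the function names, nearest-neighbor upsampling, and luminance estimate are all illustrative choices.

```python
import numpy as np

def combine(quarter_color, pan_full, eps=1e-6):
    """quarter_color: (H/2, W/2, 3) array; pan_full: (H, W) array.
    Returns an (H, W, 3) color image carrying the panchromatic detail."""
    up = np.repeat(np.repeat(quarter_color, 2, axis=0), 2, axis=1)
    luma = up.mean(axis=2)                   # rough luminance of the upsample
    gain = pan_full / (luma + eps)           # per-pixel high-frequency ratio
    return up * gain[..., None]

quarter = np.full((2, 2, 3), 10.0)           # flat quarter-resolution color
pan = np.full((4, 4), 20.0)                  # flat sensor-resolution pan image
full_color = combine(quarter, pan)           # color scaled to the pan detail
```

For flat inputs the result is simply the color image scaled to the panchromatic level; on real data the `gain` term reintroduces the high-spatial-frequency detail that the binned color readout lacks.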
The proposed invention allows to generate an image sequence of digital images 906 having a high spatial resolution, a high temporal resolution and a high image quality. For image sequence capture according to the previous method, an image sequence with high spatial resolution is generated by repeatedly reading the entire sensor. The time required to read out the entire sensor is longer than the time required to read out the pixel-merged full-color and pixel-merged color pixel signals 810 or full-color pixel signals 812. The temporal resolution (i.e. the frame rate) of such a sequence of images is therefore lower than the temporal resolution achieved using the proposed invention.
Fig. 11 describes in more detail another image processing 512 method after completing each readout of the pixel signals. The pixel signals 810, which are a combination of the binned color pixels and the binned panchromatic pixels, can be color interpolated from this initial color filter array data to produce a quarter-resolution color image 902. The panchromatic pixel signals 812 can be spatially interpolated to produce a sensor-resolution panchromatic image 904. A digital image 1002 at one-fourth of the sensor spatial resolution (half horizontal resolution and half vertical resolution) is produced corresponding to each readout. Corresponding to panchromatic pixel signals 812, an enhanced quarter-resolution digital image 1002 is generated using data from the sensor-resolution panchromatic image 904 and from the quarter-resolution color image 902 of the previous and subsequent readouts of the panchromatic and color pixel signals. Corresponding to the readout as a combination of panchromatic and color pixel signals 810, data from a quarter-resolution color image 902 and a sensor-resolution panchromatic image 904 from previous and subsequent panchromatic pixel signals 812 are used to generate an enhanced quarter-resolution digital image 1002. This scheme requires some buffering. Three readouts are used in the formation of each digital image 1002.
The proposed invention allows the generation of an image sequence of digital images 1002 with improved spatial resolution, high temporal resolution, and high image quality. For image sequence capture according to prior methods, a sequence of quarter-resolution images can be generated by repeatedly binning or sub-sampling the sensor and reading it out. To read out the image sequence at high temporal resolution, each individual readout is binned or sub-sampled to a quarter of the sensor resolution, so the spatial resolution of the image sequence is limited. In the proposed invention, the readout of the panchromatic pixel signals 812 has the full spatial resolution of the sensor, and thus improved high-spatial-frequency information can be maintained in the digital images 1002. In addition, in the proposed method, the color pixels can have an exposure duration longer than the inverse of the frame rate. In image sequence capture according to prior methods this is not possible, because each readout is a full sensor readout. The extended color pixel exposure duration allows an improved signal-to-noise ratio to be obtained for the color pixels and improves the overall image quality of the digital images.
Fig. 12 describes in more detail another method of image processing 512 after each readout of pixel signals is completed. The pixel signals 810, a combination of binned color pixels and binned panchromatic pixels, can be color interpolated from this initial color filter array data to produce a quarter-resolution color image 902. The panchromatic pixel signals 812 can be spatially interpolated to produce a sensor-resolution panchromatic image 904. Corresponding to a readout of panchromatic pixel signals 812, data from the sensor-resolution panchromatic image 904 and from the quarter-resolution color images 902 of the previous and subsequent readouts of the panchromatic and color pixel signals 810 are used to produce both an enhanced quarter-resolution (half horizontal spatial resolution, half vertical spatial resolution) digital image 1002 and an enhanced sensor-resolution digital image 906. Corresponding to a readout of the panchromatic and color pixel signals 810, an enhanced quarter-resolution digital image 1002 is generated using data from the quarter-resolution color image 902 and from the sensor-resolution panchromatic images 904 of the previous and subsequent readouts of panchromatic pixel signals 812. This scheme requires some buffering: three readouts are used in the formation of each quarter-resolution digital image 1002 or sensor-resolution digital image 906.
The proposed invention allows for the simultaneous generation of low spatial resolution, high frame rate image sequences and high spatial resolution, low frame rate image sequences. Thus, it is possible to capture both a low-resolution image sequence and a high-resolution, high-quality still image simultaneously. Previous solutions for simultaneously capturing an image sequence and a still image typically require additional hardware, or have to interrupt the image sequence capture to acquire the still image.
For the image processing 512 described in fig. 12, there are multiple choices of how to handle the digital images 514. In one method, the quarter-resolution color images 902 are processed into a first sequence and the sensor-resolution color images are processed into a separate second sequence, and the two sequences are stored separately. Fig. 13 illustrates an alternative method for handling the quarter-resolution and sensor-resolution digital images 514. The quarter-resolution color image 1202 is upsampled in upsampling block 1206 to produce an upsampled color image 1208 at the sensor resolution. A residual image 1212 is formed in residual image forming block 1210 by subtracting the sensor-resolution upsampled color image 1208 from the sensor-resolution color image 1204. The residual image 1212 and the quarter-resolution color image 1202 are stored. The two images can be stored separately or in combination. In one example, the quarter-resolution color image 1202 can be stored using a compression syntax and file format (such as the JPEG compression standard and TIFF file format), and the residual image 1212 can be stored as metadata within the file. In another example, the quarter-resolution color image 1202 can be stored using a compression syntax and file format (such as the MPEG compression standard and QuickTime .mov file format) and the residual image 1212 can be stored as metadata within the file. A basic file reader can ignore the metadata and decode only the quarter-resolution color image. A smart file reader can also extract the metadata and reconstruct the color image at sensor resolution.
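The residual-image scheme of fig. 13 can be sketched as follows. The nearest-neighbor upsampler is an assumed stand-in for block 1206 (the text does not specify the upsampling method), and the function names are illustrative; as long as the decoder uses the same upsampler as the encoder, the reconstruction is exact.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbor 2x upsample (stand-in for block 1206)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def encode(quarter_img, full_img):
    """Store the quarter-res image plus a residual against its upsample."""
    residual = full_img - upsample2x(quarter_img)      # block 1210
    return quarter_img, residual

def decode(quarter_img, residual):
    """A smart reader reconstructs the sensor-resolution image."""
    return upsample2x(quarter_img) + residual

quarter = np.arange(4, dtype=float).reshape(2, 2)
full = np.arange(16, dtype=float).reshape(4, 4)
stored_quarter, stored_residual = encode(quarter, full)
reconstructed = decode(stored_quarter, stored_residual)
```

A basic reader simply keeps `stored_quarter` and discards the residual metadata, matching the two reader behaviors described above.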
Fig. 14 shows a readout pattern and image generation of an image sensor according to another preferred embodiment of the present invention. This figure is based on an image sensor with the color filter pattern shown in fig. 7, read out with a rolling shutter. Several consecutive readouts are shown in fig. 14. This set of pixel signals represents a portion of the entire image capture process; the entire process may contain additional readouts of pixel signals extending in either direction along the time axis. Readout 1302 represents a readout of all pixels of the image sensor, corresponding to a combination of color pixels and panchromatic pixels. For each four-pixel group connected to a floating diffusion, the two color pixels and the two panchromatic pixels are binned together and read as a single value. Binned in this way, they produce pixel signals 1308 having a diluted Bayer pattern. After the pixels are read, they are also reset at 1302. Each floating diffusion is accessed once during this sensor readout.
The color pixels are also reset according to the rolling shutter reset time given at 1304. The panchromatic pixels are read and reset according to the rolling shutter time given at 1306. Line 1316 shows the overall exposure of each panchromatic pixel, representing the time between reset and readout of the panchromatic pixels in each row of the image sensor. Panchromatic pixel signals 1310 are generated by reading the panchromatic pixels binned, such that each floating diffusion is accessed once during the entire readout of the panchromatic pixels. Thus, the panchromatic pixel readout 1306 and the combined panchromatic and color pixel readout 1302 have the same readout rate and the same motion shear properties. This design is advantageous not only with respect to motion shear, but also for minimizing unused light while maintaining equal exposure durations for all panchromatic pixels and for all color pixels.
The readout pattern repeats at this stage, as the entire image sensor is then read according to the rolling shutter readout time given at 1302. The panchromatic pixels read at 1302 have an exposure given by 1318, while the color pixels have an exposure given by 1320. Fig. 14 shows color pixels with an exposure duration 1320 longer than the exposure duration 1318 of the panchromatic pixels read at 1302. This relationship is not fixed, however; in practice, the color pixel exposure duration 1320 may also be shorter than or equal to the panchromatic pixel exposure duration 1318.
Fig. 14 also describes in more detail a method of image processing 512 performed after each readout of pixel signals is completed. A readout of the binned color and panchromatic pixel signals 1308 can be combined with the adjacent readouts of the binned panchromatic pixel signals 1310 to produce an improved panchromatic and color pixel image 1312. Similarly, a readout of the binned panchromatic pixel signals 1310 can be combined with the adjacent readouts of the binned color and panchromatic pixel signals 1308 to produce an improved panchromatic and color pixel image 1312. The improved panchromatic and color pixel image 1312 can be processed to produce an enhanced quarter-resolution color image 1314. This scheme requires some buffering: three readouts are used in the formation of each digital image 1314.
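The three-readout buffering described above can be sketched as a sliding window over the alternating readout stream. This is only a structural illustration: the tuple stands in for the interpolation that actually forms image 1312, which is not specified here.

```python
from collections import deque

def process_stream(readouts):
    """Buffer three consecutive readouts and pair each center readout with
    its two neighbors, as in the fig. 14 scheme where each binned
    color+panchromatic readout (1308) is combined with adjacent binned
    panchromatic readouts (1310), and vice versa."""
    buf = deque(maxlen=3)  # holds the three most recent readouts
    images = []
    for r in readouts:
        buf.append(r)
        if len(buf) == 3:
            prev, center, nxt = buf
            # One improved image 1312 is formed per fully buffered triple.
            images.append((prev, center, nxt))
    return images

stream = ["pan+color", "pan", "pan+color", "pan", "pan+color"]
out = process_stream(stream)  # 3 output images from 5 readouts
```

Note that a stream of N readouts yields N−2 images, consistent with three readouts being used per digital image 1314.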
Another embodiment of the present invention provides an extended dynamic range image. This embodiment is described in more detail in fig. 15 and 16 with reference to fig. 5. Fig. 15 provides an exemplary pixel array pattern 1402. In this example, pixel (X1)1404 represents a first group of pixels, and pixel (X2)1406 represents a second group of pixels. X1 and X2 represent panchromatic pixels. Fig. 5 illustrates a timing sequence of a rolling shutter mode using the filter array pattern shown in fig. 15.
Fig. 16 describes in more detail one method of image processing 512 of fig. 6 after completion of each readout of pixel signals for the pixel array pattern in fig. 15. Pixel signals 420 may be suitably processed from this initial array data to produce image 430. Pixel signals 422 may be suitably processed from this initial array data to produce image 432. In this case, images 430 and 432 have half the horizontal resolution of the sensor. The pixel signals for each readout are scaled by the ratio of the longest exposure among all of the readouts to the exposure of that readout. The sum of these scaled pixel signals is normalized by the number of pixel signals read out to produce the image value for that readout. Equation 1 shows a simple method for calculating these image values
Pe = (Σg (Tf/Tr,g) × Sr,g)/g    equation 1
where Pe denotes the extended dynamic range image value, r denotes the readout, g denotes the number of groups within the readout, Tf denotes the longest exposure, Tr,g denotes the exposure of group g within readout r, and Sr,g denotes the pixel signal of group g within readout r. If a pixel signal is greater than the upper limit or less than the lower limit, that pixel signal is not used in the calculation of the image value and the number of groups read out (g) is adjusted accordingly.
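A minimal sketch of equation 1, including the clipping rule, might look like the following. The lower and upper signal limits are assumed example values; the specification only states that out-of-limit signals are excluded and g adjusted.

```python
def extended_value(signals, exposures, t_longest, lower=16, upper=4000):
    """Compute one extended-dynamic-range image value Pe per equation 1.
    signals[i] is the pixel signal of group i in this readout and
    exposures[i] its exposure; lower/upper are assumed clipping limits.
    Each usable signal is scaled by t_longest/exposure, the scaled signals
    are summed, and the sum is normalized by the number of usable groups g."""
    scaled = [(t_longest / t) * s
              for s, t in zip(signals, exposures)
              if lower <= s <= upper]  # clipped signals are dropped; g shrinks
    if not scaled:
        return None  # every group was clipped
    return sum(scaled) / len(scaled)

# Hypothetical readout: group X1 exposed for 25 units, group X2 for the
# longest exposure of 100 units.
pe = extended_value([100, 400], [25.0, 100.0], 100.0)  # -> 400.0
```

Scaling each short exposure up to the longest exposure puts all groups on a common radiometric scale before averaging, which is what lets the merged value span a wider dynamic range than any single readout.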
The following description of fig. 16 clarifies these calculations in a step-by-step process. Referring to fig. 5, the exposure shown by line 414 corresponds to the longest exposure (TF1). The image value Pa of the pixel signal 420 is given by equation 2
Pa = (TF1/TS1) × X1(420)/g(420)    equation 2
Where X1(420) is exposed with exposure (TS1) shown by line 410, line 414 shows the frame exposure (TF1), and the number of pixel signals read out is g(420). The number g of pixel signals read out is the number of groups of pixels read for that particular readout. Since the set of pixel signals 420 contains only one group of pixels (X1), the value of g(420) is 1 in equation 2. Similarly, the image value Pb of the pixel signal 422 is given by equation 3
Pb = ((TF1/TS2) × X1(422) + X2(422))/g(422)    equation 3
Where X1(422) is exposed with exposure (TS2) shown by line 412, X2(422) is exposed with the longest exposure (TF1) shown by line 414, and the number of pixel signals read out is g(422). The value of g(422) in equation 3 is 2 because two groups of pixels are used to calculate Pb. If the value of X1 or X2 is greater than the upper limit or less than the lower limit, that value is not used in the calculation of the image value and the number of groups read out (g) is adjusted accordingly. Half-resolution images 430 and 432 are merged to produce digital image 434 having extended dynamic range values Pe. In another example, Pa and Pb are added and divided by the number of half-resolution images to produce a half-resolution digital image. In another example, the value Pe is interpolated from Pa and Pb to produce a digital image. Those skilled in the art will appreciate that there are many alternative methods for computing the extended dynamic range image.
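Equations 2 and 3 and the merging of the half-resolution images can be illustrated with assumed sample numbers. The exposure and signal values below are hypothetical and chosen only to make the arithmetic easy to follow.

```python
def pa(x1, g, tf1, ts1):
    """Equation 2: image value for readout 420 (only group X1, so g = 1)."""
    return (tf1 / ts1) * x1 / g

def pb(x1, x2, g, tf1, ts2):
    """Equation 3: image value for readout 422 (groups X1 and X2, so g = 2).
    X2 already has the longest exposure TF1, so it needs no scaling."""
    return ((tf1 / ts2) * x1 + x2) / g

# Assumed values: TF1 = 100, TS1 = 50, TS2 = 25 time units.
a = pa(x1=120, g=1, tf1=100.0, ts1=50.0)         # (100/50)*120/1 = 240.0
b = pb(x1=60, x2=250, g=2, tf1=100.0, ts2=25.0)  # ((100/25)*60 + 250)/2 = 245.0
# One of the merging options mentioned in the text: add the half-resolution
# image values and divide by the number of half-resolution images.
pe_half = (a + b) / 2                            # -> 242.5
```

The division by g inside each equation, followed by the division by the number of images when merging, keeps the output on the same scale as a single full-exposure pixel signal.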
Fig. 17 provides an exemplary pixel array pattern 1412 of another preferred embodiment of the present invention. In this example, pixels (X1)1414, (X2)1416, (X3)1418, and (X4)1420 represent four different groups of panchromatic pixels. In another embodiment, one or more of these panchromatic pixels can be replaced with color pixels. The color pixels may provide color information to the final processed image. The color pixels can provide different sensitivity than panchromatic pixels as previously described in fig. 4.
Fig. 18 shows a readout pattern and image generation of an image sensor according to another preferred embodiment of the present invention. The figure is based on an image sensor having the pixel array pattern shown in fig. 17, read out with a rolling shutter. This set of pixel signals represents a portion of the entire image capture process; the entire process may contain additional readouts of pixel signals extending in either direction along the time axis. The rolling shutter read time given at 1504 represents the reading of the pixel signal 1510 for all pixels (X1, X2, X3, and X4) of the image sensor. After being read at 1504, the pixels are reset.
These X1 pixels are read out and reset according to the rolling shutter read time given by 1502. Line 1512 shows the overall exposure of each X1 pixel (TS1), which represents the time between the reset and read of these X1 pixels for each row of the image sensor. Pixel signal 1520 represents the readout of all of these X1 pixels.
These X1 and X2 pixels are again read out and reset according to the rolling shutter read time given at 1506. Line 1515 shows the overall exposure of each X1 pixel (TS2), which represents the time between the reset and read of these X1 pixels for each row of the image sensor. Line 1516 shows the overall exposure of each X2 pixel (TS3), which represents the time between the reset and read of these X2 pixels for each row of the image sensor. The pixel signal 1522 represents readout of all of these X1 and X2 pixels of the image sensor. This design is advantageous because it minimizes unused light while extending the dynamic range of the sensor.
The readout pattern repeats at this stage, as the subsequent readout is a readout of the entire image sensor according to the rolling shutter read time given at 1504. Some of the pixels read out at 1504 have a shorter exposure (TS4) given by 1517, while others have a longer exposure (TF1) given by 1518.
In fig. 18, these pixels are exposed for different durations. This is an advantageous feature of the invention, since the different exposures allow a greater dynamic range within a group of pixels. Another advantageous feature is that every photon that reaches the sensor is read and can be used. In addition, some pixel groups may be binned with other pixel groups to effectively double the sensitivity. For example, pixels from groups X3 and X4 may be binned for pixel signal 1510, and pixels from groups X1 and X2 may be binned for pixel signal 1510. Groups of pixels with equal exposure for a given readout can be binned.
The image value calculation is performed similarly to the explanation given for fig. 16. Pixel signals 1520 may be suitably processed from this initial array data to produce image 430. Pixel signals 1522 may be suitably processed from this initial array data to produce image 432. Pixel signals 1510 may be suitably processed from this initial array data to produce image 436. The image value Pa is calculated according to equation 2, and the image value Pb is calculated according to equation 4
Pb = ((TF1/TS2) × X1 + (TF1/TS3) × X2)/g    equation 4
Where X1(1522) is exposed with exposure (TS2) shown by line 1515, X2(1522) is exposed with exposure (TS3) shown by line 1516, line 1518 shows the longest exposure (TF1), and the number of pixel signals read out is g (1522). Equation 5 gives the image value Pf of the pixel signal 1510
Pf = ((TF1/TS4) × (X1 + X2) + X3 + X4)/g    equation 5
Where X1(1510) and X2(1510) are exposed with exposure (TS4) shown by line 1517, X3(1510) and X4(1510) are exposed with the longest exposure (TF1) shown by line 1518, and the number of pixel signals read out is g(1510). If pixel groups X1 and X2 are binned for readout 1510, a scale factor is applied to the sum of X1 and X2 to account for the binning. Similarly, if X3 and X4 are binned for readout 1510, a scale factor is applied to the sum of X3 and X4. Images 430, 432, and 436 are merged to produce a digital image 434 having extended dynamic range values Pe.
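Equations 4 and 5, including the binning scale factor mentioned above, can be sketched as follows. The factor of 0.5 applied to a charge-binned pair is an illustrative assumption, since the text only states that "a scale factor is applied"; all numeric values in the example calls are hypothetical.

```python
def pb_eq4(x1, x2, g, tf1, ts2, ts3):
    """Equation 4: X1 is scaled by TF1/TS2 and X2 by TF1/TS3."""
    return ((tf1 / ts2) * x1 + (tf1 / ts3) * x2) / g

def pf_eq5(x1, x2, x3, x4, g, tf1, ts4, binned_short=False, binned_long=False):
    """Equation 5: X1 and X2 share the short exposure TS4, while X3 and X4
    have the full exposure TF1. When a pair was charge-binned on the floating
    diffusion, an assumed factor of 0.5 rescales the combined value."""
    short = x1 + x2
    long_ = x3 + x4
    if binned_short:
        short *= 0.5  # assumed binning scale factor
    if binned_long:
        long_ *= 0.5
    return ((tf1 / ts4) * short + long_) / g

# Hypothetical values: TF1 = 100, TS2 = 25, TS3 = 50, TS4 = 25 time units.
b4 = pb_eq4(50, 100, 2, 100.0, 25.0, 50.0)        # (4*50 + 2*100)/2 = 200.0
f5 = pf_eq5(25, 25, 100, 100, 4, 100.0, 25.0)     # (4*50 + 200)/4 = 100.0
```

As with equations 2 and 3, every term is first referred to the longest exposure TF1 so that the four groups can be averaged on a common scale.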
Those skilled in the art will appreciate that conventional auto-exposure techniques and circuits may be adapted to accommodate the multiple sets of signals provided by several embodiments of the present invention. The charge is integrated over a period long enough to collect a detectable amount of charge for all pixels, while remaining short enough to avoid saturating the storage elements. By optimizing the respective exposure durations of the pixel groups for a given scene, both the brightest and the darkest regions of the image can typically be properly exposed without clipping. Scenes with a high dynamic range will yield a higher dynamic range image than scenes with a lower dynamic range.
FIG. 19 shows an exemplary color filter array pattern 1702 according to a preferred embodiment of the present invention. In this example, the pixels are panchromatic (P) 1704, red (R) 1706, green (G) 1708, and blue (B) 1710. The color filter array pattern 1702 has a minimal repeating unit containing 16 pixels in the following 4 by 4 array:
G P B P
P P P P
R P G P
P P P P.
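The 4 by 4 minimal repeating unit above can be tiled to cover the sensor. This sketch only illustrates the tiling and the panchromatic pixel fraction; it is not part of the claimed readout, and the sensor dimensions are assumed to be multiples of 4 for simplicity.

```python
import numpy as np

# Minimal repeating unit of color filter array pattern 1702 (fig. 19):
# P = panchromatic, R/G/B = color pixels.
UNIT = np.array([
    ["G", "P", "B", "P"],
    ["P", "P", "P", "P"],
    ["R", "P", "G", "P"],
    ["P", "P", "P", "P"],
])

def tile_cfa(rows, cols):
    """Tile the minimal repeating unit across a sensor of rows x cols pixels
    (both assumed to be multiples of 4)."""
    return np.tile(UNIT, (rows // 4, cols // 4))

cfa = tile_cfa(8, 8)
# 12 of the 16 pixels in each repeating unit are panchromatic.
pan_fraction = (cfa == "P").mean()  # -> 0.75
```

The high panchromatic fraction is what gives this pattern its sensitivity advantage, while the sparse R/G/B sites supply the color information mentioned in the text.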
Those skilled in the art will appreciate that other color filter array configurations and minimal repeating units are possible within the scope of the present invention. FIG. 20 shows a readout pattern and image generation of an image sensor according to another preferred embodiment of the present invention. The figure is based on an image sensor with the color filter array pattern shown in fig. 19, read out with a rolling shutter. This is a slight variation of the readout pattern shown in fig. 18, modified in that a group of color pixels has replaced the panchromatic pixel group X2. This is useful in high dynamic range applications where some color information is required. This set of pixel signals represents a portion of the entire image capture process; the entire process may contain additional readouts of pixel signals extending in either direction along the time axis. The rolling shutter read time given at 1804 represents the readout of the pixel signal 1810 for all pixels (X1, color, X3, and X4) of the image sensor. After being read at 1804, the pixels are reset.
These X1 pixels are read out and reset according to the rolling shutter read time given at 1802. Line 1812 shows the overall exposure of each X1 pixel, which represents the time between the reset and read of these X1 pixels for each row of the image sensor. Pixel signal 1820 represents the readout of all of these X1 pixels.
These X1 and X4 pixels are again read out and reset according to the rolling shutter read time given at 1806. Line 1815 shows the overall exposure of each X1 pixel, which represents the time between the reset and read of these X1 pixels for each row of the image sensor. Line 1816 shows the overall exposure of each X4 pixel, which represents the time between the reset and read of these X4 pixels for each row of the image sensor. The pixel signal 1822 represents the readout of all of these X1 and X4 pixels of the image sensor. This design is advantageous because it minimizes unused light while extending the dynamic range of the sensor.
The readout pattern repeats at this stage, as the next readout is a readout of the entire image sensor according to the rolling shutter read time given at 1804. Some of the pixels read out at 1804 have a shorter exposure given by 1817, while others have a longer exposure given by 1818.
In fig. 20, these pixels are exposed for different durations. This is an advantageous feature of the invention, since the different exposures allow a greater dynamic range within a group of pixels. Another advantageous feature is that every photon that reaches the sensor is read and can be used. In addition, some pixel groups may be binned with other pixel groups to effectively double the sensitivity. For example, pixels from group X3 and the color group may be binned for pixel signal 1810, and pixels from groups X1 and X4 may be binned for pixel signal 1810. Groups of pixels having the same exposure for a given readout can be binned.
Those skilled in the art will appreciate that many alternative methods are possible within the scope of the present invention.
Parts list
10 light
11 imaging stage
12 lens
13 Filter block
14 Iris
16 sensor block
18 shutter block
20 image sensor
22 analog signal processor
24A/D converter
26 time sequence generator
28 sensor stage
30 bus
32 DSP memorizer
36 digital signal processor
38 processing stage
40 exposure controller
50 system controller
52 bus
54 program memory
56 system memory
57 host interface
60 memory card interface
62 socket
64 memory card
68 user interface
70 viewfinder display
72 exposure display
74 user input
76 status display
80 video encoder
82 display controller
88 image display
100 block
102 block
110 filter transmission curve
112 full color photoresponse curve
114 color photoresponse curve
116 color photoresponse curve
118 color photoresponse curve
300 reset procedure
302 read program
320 pixel integration time
402 scrolling shutter read time
404 scrolling shutter read time
410 pixel exposure
412 pixel exposure
414 pixel exposure
420 pixel signal
422 pixel signal
430 image
432 image
434 digital image
436 image
502 image capture sequence
504 sensor
506 sensor
508 image sensor
510 pixel signal
512 image processing steps
514 digital image
516 digital image utilization
518 image Capture completion query
520 image capture program termination
522 next reading from the sensor
602 color filter array pattern
604 panchromatic pixel
606 red pixel
608 green pixel
610 blue pixel
Line 701
Line 702
704 floating diffusion
706 pixel four-pixel group
708 read combination
710 readout combination
712 sense combination
714 read-out combination
802 panchromatic pixel readout
804 pixel readout and reset
806 scrolling shutter reset time
808 rolling shutter reset time
810 panchromatic and color pixel signals
812 pixel signal
814 panchromatic pixel exposure
816 panchromatic pixel exposure
818 color pixel exposure
902 quarter-resolution color pixel
904 sensor resolution panchromatic pixels
906 digital image
1002 digital image
1202 quarter resolution color image
1204 sensor resolution color image
1206 upsampling block
1208 upsampled color image at sensor resolution
1210 residual image forming block
1212 residual image
1302 pixel readout and reset
1304 color pixel reset
1306 panchromatic pixel readout and reset
1308 pixel signal
1310 panchromatic pixel signals
1312 improved panchromatic and color pixel image
1314 digital image
1316 panchromatic pixel exposure
1318 panchromatic pixel exposure
1320 color pixel exposure
1402 pixel array pattern
1404 first group of pixels
1406 second group of pixels
1412 pixel array pattern
1414 first group of pixels
1416 second group of pixels
1418 third group of pixels
1420 fourth group of pixels
1502 scrolling shutter read time
1504 rolling shutter read time
1506 scrolling shutter read time
1510 pixel signal
1512 pixel exposure
1515 Pixel Exposure
1516 Pixel Exposure
1517 Pixel Exposure
1518 Pixel Exposure
1520 pixel signal
1522 pixel signal
1702 color filter array pattern
1704 first group of pixels
1706 second group of pixels
1708 third group of pixels
1710 fourth group of pixels
1802 pixel readout and reset
1804 pixel readout and reset
1806 Pixel readout and reset
1810 Pixel Signal
1812 Pixel Exposure
1815 Pixel Exposure
1816 pixel exposure
1817 pixel exposure
1818 Pixel Exposure
1820 Pixel signals
1822 Pixel signals

Claims (21)

1. A method for generating a digital image (434) from pixel signals captured by an image sensor array, the method comprising the steps of:
a) providing an image sensor array having at least two groups of pixels (X1, X2), wherein the number of pixels of any one group is no less than one quarter of the number of pixels of the portion of the entire sensor (20) from which a digital image is generated, and wherein each group of pixels is evenly distributed across the sensor (20);
b) exposing the image sensor array to scene light and reading pixel charge from only a first set of pixels (X1) to produce a first set of pixel signals (1520);
c1) after generating the first set of pixel signals (1520), exposing the image sensor array, then reading pixel charges from a second set of pixels (X2) and reading pixels from the first set again to generate a second set of pixel signals (1522); and
c2) after generating the second set of pixel signals (1522), the image sensor array is exposed, then pixel charges are read from each of the following sets of pixels to generate a third set of pixel signals (1510): the first group of pixels (X1), the second group of pixels (X2), a third group of pixels (X3), and a fourth group of pixels (X4);
wherein the exposure of the third set of pixels (X3) and the fourth set of pixels (X4) whose pixel charges are read in step c2) at least partially overlaps the exposure of the first set of pixels (X1) whose pixel charges are read in step b) and further at least partially overlaps the exposure of the first set of pixels (X1) and the second set of pixels (X2) whose pixel charges are read in step c 1);
d) generating the digital image (434) using the first set of pixel signals, the second set of pixel signals, and the third set of pixel signals.
2. The method of claim 1, wherein the second set of pixels comprises at least one color pixel and the first set of pixels comprises at least one panchromatic pixel.
3. The method of claim 1, wherein the second set of pixels comprises at least one color pixel, and the first, third, and fourth sets of pixels comprise at least one panchromatic pixel.
4. The method of claim 1, wherein the second set of pixels are color pixels and the first, third, and fourth sets of pixels are full-color (P) pixels.
5. The method of claim 4, wherein first, second, third and fourth pixel configurations of a pixel pattern each comprise the first set of pixels, the second set of pixels, the third set of pixels and the fourth set of pixels, wherein the color pixels of the first, second, third and fourth pixel modules are red (R), green (G) and blue (B), respectively.
6. The method of claim 5, wherein the pattern of pixels is arranged in the image sensor as:
G P B P
P P P P
R P G P
P P P P.
7. an imaging system, comprising:
a pixel array comprising at least a first set of pixels and a second set of pixels;
control logic coupled to acquire image data from the pixel array; and
a non-transitory machine-accessible storage medium that provides instructions that, when executed by the imaging system, will cause the imaging system to perform operations comprising:
a) providing an image sensor array having at least two groups of pixels (X1, X2), wherein the number of pixels of any one group is no less than one quarter of the number of pixels of the portion of the entire sensor (20) from which a digital image is generated, and wherein each group of pixels is evenly distributed across the sensor (20);
b) exposing the image sensor array to scene light and reading pixel charge from only a first set of pixels (X1) to produce a first set of pixel signals (1520);
c1) after generating the first set of pixel signals (1520), exposing the image sensor array, then reading pixel charges from a second set of pixels (X2) and reading pixels from the first set again to generate a second set of pixel signals (1522); and
c2) after generating the second set of pixel signals (1522), the image sensor array is exposed, then pixel charges are read from each of the following sets of pixels to generate a third set of pixel signals (1510): the first group of pixels (X1), the second group of pixels (X2), a third group of pixels (X3), and a fourth group of pixels (X4);
wherein the exposure of the third set of pixels (X3) and the fourth set of pixels (X4) whose pixel charges are read in step c2) at least partially overlaps the exposure of the first set of pixels (X1) whose pixel charges are read in step b) and further at least partially overlaps the exposure of the first set of pixels (X1) and the second set of pixels (X2) whose pixel charges are read in step c 1);
d) generating the digital image using the first set of pixel signals, the second set of pixel signals, and the third set of pixel signals.
8. The imaging system of claim 7, wherein the second set of pixels comprises at least one color pixel and the first set of pixels comprises at least one panchromatic pixel.
9. The imaging system of claim 7, wherein the second set of pixels comprises at least one color pixel and the first, third, and fourth sets of pixels comprise at least one panchromatic pixel.
10. The imaging system of claim 7, wherein the second set of pixels are color pixels and the first, third, and fourth sets of pixels are full-color (P) pixels.
11. The imaging system of claim 10, wherein first, second, third, and fourth pixel configurations of a pixel pattern each include the first set of pixels, the second set of pixels, the third set of pixels, and the fourth set of pixels, wherein the color pixels of the first, second, third, and fourth pixel modules are red (R), green (G), and blue (B), respectively.
12. The imaging system of claim 11, wherein the pixel pattern is arranged in the pixel array as:
G P B P
P P P P
R P G P
P P P P.
13. a method for producing a digital image, the method comprising:
providing an image sensor having two color pixels and two panchromatic pixels, wherein the two color pixels and the two panchromatic pixels share a floating diffusion;
resetting the two panchromatic pixels;
reading out an uncombined panchromatic signal of the two panchromatic pixels generated after resetting the two panchromatic pixels;
reading out the merged color signals of the two color pixels generated prior to reading out the uncombined panchromatic signals;
reading out the merged panchromatic signals of the two panchromatic pixels generated after reading out the uncombined panchromatic signals; and
generating the digital image using at least the uncombined panchromatic signal, the combined color signal, and the combined panchromatic signal.
14. The method of claim 13, wherein the merged color signal and the merged panchromatic signal are read out during the same shutter.
15. The method of claim 14, wherein the same shutter is a rolling shutter.
16. The method of claim 13, wherein generating the digital image comprises color interpolating the merged color signal and the merged panchromatic signal and spatially interpolating the uncombined panchromatic signal.
17. The method of claim 16, wherein the digital image has a spatial resolution of the image sensor.
18. The method of claim 16, wherein the digital image is an enhanced quarter resolution digital image that is half the vertical resolution of the image sensor and half the horizontal resolution of the image sensor.
19. A method for producing a digital image, the method comprising:
providing an image sensor having a pair of color pixels sharing a floating diffusion with a pair of panchromatic pixels, the pair of color pixels including a first pair of color pixels, a second pair of color pixels, and a third pair of color pixels;
resetting each of the panchromatic pixel pairs;
reading out a combined panchromatic signal from the pair of panchromatic pixels;
resetting each of the panchromatic pixel pairs after reading out the combined panchromatic signal;
resetting each of the color pixel pairs;
reading out a combined color/panchromatic signal, wherein the combined color/panchromatic signal comprises image charge from the pair of color pixels and the pair of panchromatic pixels stored together in each of the floating diffusions; and
generating the digital image using at least the merged panchromatic signal and the merged color/panchromatic signal.
20. The method of claim 19, wherein the first color pixel is a red pixel, the second color pixel is a green pixel, and the third color pixel is a blue pixel.
21. The method of claim 20, wherein the combined color/panchromatic signal comprises a diluted bayer pattern.
HK15104838.1A 2009-04-01 2015-05-21 Method and imaging system for producing digital images HK1204412B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/416,172 US8218068B2 (en) 2009-04-01 2009-04-01 Exposing pixel groups in producing digital images
US12/416,172 2009-04-01

Publications (2)

Publication Number Publication Date
HK1204412A1 HK1204412A1 (en) 2015-11-13
HK1204412B true HK1204412B (en) 2018-12-14
