WO2011150554A1 - Multispectral photosensitive device and sampling method thereof - Google Patents
Multispectral photosensitive device and sampling method thereof
- Publication number
- WO2011150554A1 WO2011150554A1 PCT/CN2010/073443 CN2010073443W WO2011150554A1 WO 2011150554 A1 WO2011150554 A1 WO 2011150554A1 CN 2010073443 W CN2010073443 W CN 2010073443W WO 2011150554 A1 WO2011150554 A1 WO 2011150554A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sampling
- pixel
- merging
- pixels
- mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/46—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/135—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
- H04N25/136—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/62—Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/77—Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
- H04N25/778—Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising amplifiers shared between a plurality of pixels, i.e. at least one part of the amplifier must be on the sensor array itself
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/78—Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
-
- H—ELECTRICITY
- H10—SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
- H10F—INORGANIC SEMICONDUCTOR DEVICES SENSITIVE TO INFRARED RADIATION, LIGHT, ELECTROMAGNETIC RADIATION OF SHORTER WAVELENGTH OR CORPUSCULAR RADIATION
- H10F39/00—Integrated devices, or assemblies of multiple devices, comprising at least one element covered by group H10F30/00, e.g. radiation detectors comprising photodiode arrays
- H10F39/80—Constructional details of image sensors
- H10F39/803—Pixels having integrated switching, control, storage or amplification elements
Definitions
- the present invention relates to the reading of photosensitive pixels of a photosensitive chip, and in particular to sub-sampled data reading of the photosensitive pixels of large-array photosensitive chips.
- the present invention relates to a multispectral light sensing device and a sampling method thereof.
- the present invention is a continuation of "Multispectral Photosensitive Device and Its Manufacturing Method" (PCT/CN2007/071262) and "Multispectral Photosensitive Device and Its Manufacturing Method" (China Application No. 200810217270.2), and is intended to provide more specific and preferred implementations at the semiconductor circuit and chip level.
- in the prior art, a sensor either focused on colored visible light or focused on infrared light, and rarely combined the two.
- infrared sensing has typically relied on dedicated semiconductor technologies such as silicon infrared focal plane arrays (see the reference below)
- M. Kimata in Handbook of Infrared Detection Technologies, edited by M. Henini and M. Razeghi, pp. 352-392, Elsevier Science Ltd., 2002
- the previous method of obtaining both color and infrared sensing was to physically superimpose a color sensor on an infrared sensor (e.g. "Backside-hybrid Photodetector for trans-chip detection of NIR light", by T. Tokuda et al., cited below).
- the inventors of the present invention have proposed a new multi-spectral photosensitive device capable of simultaneously obtaining color and infrared images.
- this new type of photosensitive device greatly expands the dynamic range of the photosensitive device to meet the high performance requirements of the automotive, security and other fields.
- the cameras used in mobile phones can also greatly improve their image quality with such devices.
- they can be fabricated using existing CMOS, CCD, or other semiconductor photosensitive device fabrication techniques.
- each of these techniques admits many effective fabrication methods and structural designs.
- the present invention mainly provides a few fabrication methods using CMOS/CCD semiconductor technology.
- however, this new double-layer or multi-layer photosensitive device poses a new problem: the amount of data is twice or more that of a conventional single-layer photosensitive device.
- although a two-layer photosensitive device requires only half the pixels to obtain the same resolution as a single-layer photosensitive device, high-speed processing of the data of a large-array photosensitive device still needs improvement.
- the existing sub-sampling techniques only address sub-sampling for photosensitive chips with the Bayer arrangement or the CYMG four-color pattern, each considered separately, and do not simplify the subsequent computation.
- for example, the pixel-merging and parallel sampling techniques from US Micron Technology, Inc. (US Patents US 7,091,466 B2 and US 7,319,218 B2) turn a Bayer-pattern color image into an image that is still a Bayer pattern, which still requires complicated processing to obtain the YUV image preferred in the preview and storage stages.
- other sub-sampling circuits that improve the signal-to-noise ratio require complex integrating circuits and comparators, resulting in more auxiliary circuitry and a higher operating frequency.
- compared with the excellent color patterns that double-layer or multi-layer photosensitive devices provide in very large arrays, the prior art appears cumbersome and mediocre; therefore, signal reading and sub-sampling should be improved to suit the characteristics of double-layer or multi-layer photosensitive devices.
- the object of the present invention is to propose a superior subsampling principle and an advanced subsampling circuit, and to combine subsampling with the subsequent image processing so that they are optimized together.
- the invention provides a multi-spectral light-sensing device and a sampling method thereof, so as to overcome the drawback that the data volume of a double-layer or multi-layer multi-spectral sensor chip is relatively large.
- the sampling method mainly covers sub-sampling, but also covers full-image sampling. It is to be understood that the present invention is not limited to two-layer or multi-layer multi-spectral photosensitive devices, and is equally applicable to single-layer photosensitive devices.
- the terms double-layer photosensitive device, double-sided photosensitive device, and bidirectional photosensitive device are used as follows.
- a double-layer photosensitive device means that the photosensitive pixels are physically divided into two layers (photosensitive devices as described by the inventors in the earlier application "Multispectral Photosensitive Device and Its Manufacturing Method" (PCT/CN2007/071262)), each layer containing photosensitive pixels that sense a specific spectrum.
- a double-sided photosensitive device means that the photosensitive device has two photosensitive surfaces, each of which is photosensitive from at least one direction.
- a bidirectional photosensitive device means that the photosensitive element can receive light from two directions (usually 180 degrees apart), that is, from both the front and the back of the photosensitive device.
- a photosensitive device can have one, two, or all three of the double-layer, double-sided, and bidirectional features.
- the present invention adopts the following technical solutions:
- a multi-spectral light-sensing device comprising an array of pixels arranged in rows and columns, and
- a first merging unit configured to perform pairwise combined sampling between immediately adjacent pixels in the pixel array that are in the same row but different columns, in different rows but the same column, or in different rows and different columns, to obtain sampled data of the first merged pixels;
- a second merging unit configured to perform combined sampling on the sampled data of the first merged pixels obtained by the first merging unit, to obtain sampled data of the second merged pixels.
- the multi-spectral light-sensing device further includes a third merging unit configured to perform combined sampling on the sampled data of the second merged pixels obtained by the second merging unit, to obtain sampled data of the third merged pixels.
- the pixel merging mode of the first merging unit or the second merging unit is charge accumulation between pixels of the same or different colors, or signal averaging between two pixels of different colors, wherein
- pixel merging between pixels of different colors follows a color space transformation so as to meet the requirements of color reconstruction.
- the charge accumulation is performed in a reading capacitor (FD).
- the color-based combined sampling mode of the first merging unit or the second merging unit includes a same-color merging mode, a heterochromatic merging mode, a hybrid merging mode, or a selective discarding of excess colors mode, and
- the merging modes of the first merging unit and the second merging unit are not both the same-color merging mode; that is, at least one of the two merging units does not adopt the same-color merging mode.
- the position-based combined sampling mode of the first merging unit or the second merging unit includes at least one of the following: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling. That is to say, these position-based combined sampling methods can be used alone or in combination.
- the combined sampling mode of the third merging unit includes at least one of a color space transformation mode and a back-end digital image scaling mode.
- the color space transformation includes a transformation from RGB to CyYeMgX space, a transformation from RGB to YUV space, or a transformation from CyYeMgX to YUV space, where X is any one of R (red), G (green), and B (blue).
- the pixel array is composed of a plurality of macro pixels including at least one basic pixel, wherein the basic pixels may be passive pixels or active pixels.
- the basic pixels in the macro pixel are arranged in a square matrix or a honeycomb.
- the macro pixel may be composed of at least one of the following: 3T active pixels without a read capacitor (FD), or 4T active pixels with one read capacitor (FD).
- the 4T active pixels with one read capacitor (FD) in each macro pixel adopt a 4-point sharing mode, a 6-point sharing mode, or an 8-point sharing mode.
- the macro pixel may also be composed of four basic pixels arranged in a square matrix and two opaque read capacitors (FD) located between the two rows.
- the pixels of the upper row share a read capacitor (FD) with the pixels of the lower row, charge transfer is possible between the two read capacitors (FD), and a read circuit is connected to at least one of the read capacitors.
- the macro pixel may be composed of basic pixels that are 3T or 4T active pixels with two-point, three-point, or four-point shared read capacitors (FD), using a 4-point bridge sharing mode, a 6-point bridge sharing mode, or an 8-point bridge sharing mode.
- alternatively, each macro pixel is composed of basic pixels that are 4T active pixels with two-point, three-point, or four-point shared read capacitors (FD), using a 4-point bridge sharing mode, a 6-point bridge sharing mode, or an 8-point bridge sharing mode.
- the full-image sampling mode of the multi-spectral photosensitive device includes progressive scan with row-by-row readout, or progressive scan with interlaced or inter-row readout.
- the invention also discloses a sampling method of a multi-spectral photosensitive device, comprising:
- a first merging process for performing pairwise combined sampling between immediately adjacent pixels in the pixel array that are in the same row but different columns, in different rows but the same column, or in different rows and different columns, to obtain sampled data of the first merged pixels;
- a second merging process for performing combined sampling on the sampled data of the first merged pixels obtained by the first merging process, to obtain sampled data of the second merged pixels.
- the sampling method further includes a third merging process for performing combined sampling on the sampled data of the second merged pixels obtained by the second merging process, to obtain sampled data of the third merged pixels.
- the pixel merging mode of the first merging process or the second merging process is charge addition between pixels of the same or different colors, or signal averaging between pixels of different colors, wherein
- pixel merging between pixels of different colors follows a color space transformation so as to meet the requirements of color reconstruction.
- in the sampling method, the color-based combined sampling mode of the first merging process or the second merging process includes a same-color merging mode, a heterochromatic merging mode, a hybrid merging mode, or a selective discarding of excess colors mode, and at least one of the first merging process and the second merging process is not the same-color merging mode.
- in the sampling method, the position-based combined sampling mode of the first merging process or the second merging process includes at least one of the following: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling.
- the combined sampling modes performed by the third merging process include a color space transformation mode and a back-end digital image scaling mode.
- the color space transformation includes a transformation from RGB to CyYeMgX space, a transformation from RGB to YUV space, or a transformation from CyYeMgX to YUV space, where X is any one of R (red), G (green), and B (blue).
- the full-image sampling mode of the sampling method includes progressive scan with row-by-row readout, or progressive scan with interlaced or inter-row readout.
- the sub-sampling is divided into at least two processes, namely the aforementioned first combined sampling process and the second combined sampling process.
- the first merge sampling process and the second merge sampling process generally take place during the row-wise and column-wise (combined) sampling of pixels and mainly operate on analog signals; their order and content are usually interchangeable, except that the charge addition part can usually only be done in the first combined sampling.
- a third merge sampling process may also be included; it occurs after analog-to-digital conversion and mainly operates on digital signals.
- in the first merge sampling process, two immediately adjacent pixels in the pixel array are merged, which completes the merging of adjacent pixels.
- we will refer to the merged pixel as the first merged pixel. It should be understood that the first merged pixel is only a concept used to describe the present invention: it refers to the pixel obtained after the first merging process and does not mean that a "first merged pixel" physically exists in the pixel array; the data obtained by combining the two adjacent pixels is called the sampled data of the first merged pixel.
- the adjacency includes same row/different column, different row/same column, and different row/different column.
- after merging, the signal is the average (or sum) of at least two pixels while the noise decreases; therefore, the signal-to-noise ratio can be improved by at least a factor of √2, and this combination can be done between pixels of the same or different colors.
- when the two merged colors are different, that is, the colors are added or averaged, it follows from the principle of the three primary colors that the addition of two primary colors yields the complementary color of the third primary color; that is, merging pixels of two different primary colors produces the complementary color of the remaining primary color, which is a transformation from the primary color space to the complementary color space. Since only a color space transformation occurs, color reconstruction can still be completed from the different complementary colors.
- in the present invention it is therefore possible to merge pixels of different colors to improve the signal-to-noise ratio while still enabling color reconstruction.
- the entire sub-sampling process is therefore optimized to accommodate the high speed requirements of large data volume pixel arrays.
- a basic requirement of the color space transformation is that the combination of transformed colors can reconstruct (by interpolation, etc.) the required RGB (or YUV, or CYMK) colors.
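- As a minimal numerical illustration of this requirement (not taken from the patent itself; the pairwise-addition relations Cy = G + B, Ye = R + G, Mg = R + B are an assumed merging model), the complementary colors produced by merging two primaries can be inverted back to the original RGB values:

```python
import numpy as np

# Assumed pairwise-addition model: Cy = G + B, Ye = R + G, Mg = R + B.
# A maps (R, G, B) to (Cy, Ye, Mg); it is invertible, so RGB is recoverable.
A = np.array([[0.0, 1.0, 1.0],   # Cy
              [1.0, 1.0, 0.0],   # Ye
              [1.0, 0.0, 1.0]])  # Mg

rgb = np.array([10.0, 20.0, 30.0])      # example pixel values
cym = A @ rgb                           # forward transform (heterochromatic merge)
rgb_back = np.linalg.solve(A, cym)      # color reconstruction

print(cym)       # [50. 30. 40.]
print(rgb_back)  # [10. 20. 30.]
```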
- the first merged sample simply combines the two pixels.
- after merging there are still a plurality of first merged pixels.
- the color combinations used for the first merging may be the same or different.
- when the first merging is done entirely between pixels of the same color, we call it same-color merging; when it is done entirely between pixels of different colors, we call it heterochromatic merging; when part of the merging is between the same color and part is between different colors, we call it hybrid merging; when some excess colors in the pixel array are discarded (discarding is of course optional and, for example, must not affect color reconstruction), the color combination method is called selective discarding of excess colors.
- the second merging process is an operation on a plurality of first merged pixels.
- the first merged pixels of the same color may be merged, or first merged pixels of different colors may be merged (in the latter case, of course, care must be taken: if all three primary colors are added together, the color can no longer be reconstructed).
- the above-mentioned same-color merging, heterochromatic merging, hybrid merging, etc. categorize combined sampling by color; in addition, from the perspective of merging position, the combined sampling modes of the first merging process and the second merging process include:
- automatic averaging of signals output directly to the bus, row-skipping or column-skipping, pixel-by-pixel sampling, or two or three of these modes used simultaneously.
- apart from their order, the first merge process and the second merge process are essentially identical and interchangeable, except that charge addition can usually only be done during the first merge sampling process.
- the so-called automatic averaging of signals output directly to the bus means that the signals to be merged (of the same or different colors) are output to the data acquisition bus simultaneously, and the average value of the merged signals is obtained by the automatic balancing of the (voltage) signals.
- the so-called row-skipping or column-skipping mode skips some rows or columns, achieving (merged) sampling by reducing the amount of data.
- the so-called pixel-by-pixel sampling mode does not actually do any merging; it simply reads out the original pixels or the first merged pixels. Some of these three modes can be used simultaneously; for example, the row-skipping or column-skipping mode can be used together with the automatic bus averaging mode or the pixel-by-pixel sampling mode.
- the sub-sampling modes of the third merge sampling process include a color space transformation mode, a back-end digital image scaling mode, and serial use of the two.
- the first and second combining processes are mainly performed on the analog signal
- the third sub-sampling process is mainly performed on the digital signal, that is, after the analog-to-digital conversion.
- the present invention also achieves charge addition in combined sampling for the first time.
- current combined sampling almost always averages voltage or current signals, so when N points are merged the signal-to-noise ratio can be increased by at most a factor of √N. This is because existing combined sampling merges N pixels of the same color that share one output line; on this output line, the voltage or current signal of each pixel is inevitably (automatically) averaged. The improvement in signal-to-noise ratio therefore comes only from the reduction of noise after merging, which increases the signal-to-noise ratio by a factor of at most √N.
- charge addition is an effective form of combined sampling, but it requires that the merged pixels be spatially adjacent.
- the reason previous sub-sampling could not do this is that it was performed only between pixels of the same color, which are separated from each other by pixels of other colors, so charge addition could not be achieved.
- in the present invention, charge addition is relatively easy to achieve because the color patterns are very rich.
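- The following toy simulation (illustrative only; the noise model, with one identical read-noise term per readout, is an assumption) contrasts the √N limit of bus averaging with the up-to-N gain available when charges are first summed in a shared FD and read out once:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 200_000
signal, read_noise = 100.0, 10.0       # arbitrary units per pixel readout

# N pixels, each read individually (one read-noise sample per pixel):
reads = signal + read_noise * rng.standard_normal((trials, N))
avg = reads.mean(axis=1)               # automatic averaging on the bus
snr_avg = avg.mean() / avg.std()       # ~ (signal/read_noise) * sqrt(N) = 20

# Charge addition: charges of N pixels summed in one FD, then one readout:
added = N * signal + read_noise * rng.standard_normal(trials)
snr_add = added.mean() / added.std()   # ~ (signal/read_noise) * N = 40

print(round(snr_avg, 1), round(snr_add, 1))
```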
- Figure 1 is a CMOS passive pixel read (sampling) circuit.
- Figure 7 is a comparison of the reading mode (a) of the CCD pixel and the reading mode (b) of the CMOS pixel. Note the function of the one-to-one transfer between the CCD pixels in the vertical direction in Fig. 7(a).
- Figure 9 is a schematic diagram depicting the basic principle of U.S. Patent No. 7,319,218 B2: the pixels that need to be merged are sampled at different times and their signals are fully recorded, then output to the sampling bus at the same time and balanced to obtain the average value of the merged pixels.
- the basic idea is the same as US Patent No. 7,091,466 B2, but with different circuit implementations.
- Figure 10 summarizes the basic idea of the conventional pixel-to-color merging technique: combining pixels of the same color of adjacent macro pixels (in a signal-averaged manner).
- Fig. 10(a) is a schematic diagram of column merging
- Fig. 10(b) is a schematic diagram of merging rows and columns.
- Figure 11 shows the currently preferred 4-point shared 4T active photosensitive pixel read circuit, with an average of 1.75 gates per pixel.
- Figure 12 shows a 6-point shared 4T active photosensitive pixel read circuit, with an average of only 1.5 gates per pixel.
- this 6-point shared active pixel read circuit is suitable for double-sided double-layer photosensitive devices using honeycomb arrays (see "Multispectral Photosensitive Device and Its Manufacturing Method", China Application No. 200810217270.2); that is, the upper and lower photodiodes of all three composite pixels in one macro pixel share the same read capacitor (FD) and 3T read circuit.
- Figure 13 shows an 8-point shared 4T active photosensitive pixel read circuit, with an average of 1.375 gates per pixel.
- the 8-point shared active photosensitive pixel read circuit is suitable for a double-sided double-layer photosensitive device arranged in a square matrix based on four-point macro pixels; that is, the upper and lower photodiodes of all four composite pixels in one macro pixel share the same read capacitor (FD) and 3T read circuit.
- Figure 14 shows the basic idea of the heterochromatic and hybrid merging technique of the present invention: first merge two pixels of different or identical colors within the same macro pixel (by signal averaging or addition), and then merge adjacent merged pixels of the same color.
- Fig. 14(a) is a schematic view showing a two-column combination of a Bayer pattern photosensitive device
- Fig. 14(b) is a schematic view showing a two-row and two-column combination of a Bayer pattern photosensitive device.
- the combination of G and B alone, or the combination of G and R alone, constitutes heterochromatic merging;
- the combination of G and B, the combination of G and R, the combination of B and R, and the combination of G and G, taken together, constitute hybrid merging, because some of the merges are between the same color (G and G) while others are between different colors.
- after merging, the RGB primary color image of the Bayer pattern is converted into a CyYeMgG complementary color image.
- the combination of G and B, the combination of G and R, the combination of B and R, and the combination of G and G constitute the first merging process.
- the second merging process then takes the Cy, Ye, Mg, and G values obtained at different positions after the first merge and, for values of the same color, outputs them to the bus at the same time (automatic averaging), skips rows or columns to pass over some pixels, or reads them out one by one.
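- A software-level sketch of this first merging step on a Bayer array follows (illustrative only: the real merge happens in the analog domain, e.g. by charge addition in a shared FD, and the exact pairing geometry of Fig. 14 is not reproduced here, so the alternation between vertical and diagonal pairs below is an assumption):

```python
import numpy as np

def first_merge_bayer(raw):
    """Pairwise ('first') merge of a Gr/R/B/Gb Bayer mosaic into a Cy/Ye/Mg/G mosaic.

    raw: H x W array with 2x2 tiles laid out as [[Gr, R], [B, Gb]], H and W even.
    Each tile yields two merged values; vertical pairs give Cy and Ye,
    diagonal pairs give Mg and G, alternating by tile column.
    """
    H, W = raw.shape
    out = np.zeros((H // 2, W), dtype=float)     # two merged values per tile
    labels = np.empty((H // 2, W), dtype=object)
    for ti in range(H // 2):
        for tj in range(W // 2):
            gr, r = raw[2 * ti, 2 * tj], raw[2 * ti, 2 * tj + 1]
            b, gb = raw[2 * ti + 1, 2 * tj], raw[2 * ti + 1, 2 * tj + 1]
            if tj % 2 == 0:                      # vertical pairs: G+B, R+G
                vals, labs = (gr + b, r + gb), ("Cy", "Ye")
            else:                                # diagonal pairs: R+B, G+G
                vals, labs = (r + b, gr + gb), ("Mg", "G")
            out[ti, 2 * tj], out[ti, 2 * tj + 1] = vals
            labels[ti, 2 * tj], labels[ti, 2 * tj + 1] = labs
    return out, labels
```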
- Figure 15 shows the heterochromatic and hybrid merging technique of the present invention applied to more general M-row and N-column merging (5x3 in the figure, i.e., merging 5 rows and 3 columns).
- the merging of 5 rows and 3 columns can be done on the basis of the 2-row and 2-column merging by adding row skipping and column skipping.
- the second merging process takes the Cy, Ye, Mg, and G values obtained at different positions after the first merge, outputs values of the same color to the bus at the same time, and skips some unused pixels in between (such as the 5th and 10th rows in the figure). The skipped colors no longer participate in the subsequent third merge sampling process, if the photosensitive device includes one.
- Figure 16 shows the extra 2x2 image reduction caused by the color space matrix transformation.
- the CyYeMgG image may be the original image, or it may be obtained from a Bayer RGB image by the heterochromatic/hybrid merging method of the present invention;
- when we convert it to a YUV image, we obtain an extra 2x2 reduction.
- the reduction converts each CyYeMgG macro pixel (four pixels sharing a common point) into one (Y, U, V) pixel, and then averages the U and V values of horizontally adjacent pixels;
- the result is the desired YUV422 image for preview and JPEG/MPEG compression.
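- A sketch of this conversion follows (the matrix of Fig. 16 is not reproduced in the text, so the reconstruction below assumes Cy = G + B, Ye = R + G, Mg = R + B and the standard BT.601 luma/chroma weights; the actual coefficients used by the patent may differ):

```python
def macro_to_yuv(cy, ye, mg, g):
    """Convert one CyYeMgG macro pixel (each value the sum of two pixels) to (Y, U, V)."""
    r = (ye + mg - cy) / 2.0
    b = (cy + mg - ye) / 2.0
    g = g / 2.0                       # G component is the sum of two green pixels
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def to_yuv422(macro_row):
    """macro_row: sequence of (Cy, Ye, Mg, G) macro pixels along one output line."""
    yuv = [macro_to_yuv(*m) for m in macro_row]
    packed = []
    for (y0, u0, v0), (y1, u1, v1) in zip(yuv[0::2], yuv[1::2]):
        # horizontal 2:1 subsampling of U and V -> Y0 Y1 U V (YUV422)
        packed.append((y0, y1, (u0 + u1) / 2.0, (v0 + v1) / 2.0))
    return packed
```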
- Figure 17 shows an excellent read circuit of the present invention.
- the photosensitive pixels of odd and even rows share an opaque read capacitor FD (FD1 and FD2 in the figure).
- switch TG1 is used to transfer the charge in the Gr photodiode to FD1.
- switches TG2, TG3, and TG4 are used to transfer the charges of R, B, and Gb to FD2, FD1, and FD2, respectively.
- another switch, TG5, is used to transfer the value in capacitor FD1 to FD2 (or from FD2 to FD1).
- a four-point bridge shared read circuit as shown in Fig. 18 can also be used. In this circuit, the requirement that the read capacitor FD be opaque is necessary to realize the progressive scan with interlaced or inter-row reading shown in the corresponding figure.
- Figure 18 is a diagram showing a four-point bridge shared read circuit for a four-dot macro pixel square array pattern of the present invention.
- this read circuit uses an average of two transistors per pixel. Although it is not the shared read circuit with the fewest gates, it has unparalleled advantages in other respects.
- the first advantage is that during sub-sampling, by simultaneously turning on TG1/TG3 or simultaneously turning on TG2/TG4, the Gr pixel value of an odd row can be accumulated in FD1 together with the B pixel value of an even row in an additive (charge addition) manner, thereby achieving the dual effect of signal addition and noise reduction.
- Figure 20 is a diagram showing an eight-point shared read circuit for a four-dot macro pixel square array pattern double-layer photosensitive device of the present invention.
- the top four pixels share a read capacitor FD1
- the bottom four pixels share a read capacitor FD2
- the top four points share an amplification and read circuit with the bottom four points.
- the difference from FIG. 13 is that the read capacitances of the top and bottom layers are not shared, thereby facilitating the fabrication of the double-sided photosensitive device.
- the top and bottom four macro pixel points can also adopt the dual-FD bridge shared read circuit shown in FIG. 18, so that the top and bottom read circuits are relatively independent and each can use interlaced or inter-row reading to increase the shutter speed when taking pictures.
- the interlaced or inter-row reading method shown in Fig. 21 is different from the field scanning method used in earlier television systems.
- the main difference is that the exposure time of the second half of the image, which is stored in the buffer (FD) area, is almost the same as that of the first half frame, so the effective shutter speed is doubled compared with progressive reading while the drawbacks of conventional field scanning are avoided.
- Figure 22 shows a simplified processing of a two-layer photosensitive device during subsampling: the first merging process merges or discards the redundant color pixels of the upper and lower layers, leaving only
- the color components necessary for color reconstruction, for example Cy, Mg (obtained by merging B and R), G, and Ye.
- the third merging process then converts each group of four adjacent CyYeMgG pixels into one YUV color by means of the color space conversion shown in FIG. 16, and the adjacent U and V components are then subsampled by a factor of 2 in the horizontal direction to obtain the YUYV422 image. This process completes a 2x2 subsampling. If the image is still too large, then before the CyYeMgG-to-YUV color space conversion, same-color averaging of CyYeMgG can be performed in the second merging process instead of the full sampling mode shown in the figure.
- Figure 22 also demonstrates the complexity and richness of two- or more-layer photosensitive devices during subsampling. Since such devices have thousands of possibilities in the color distribution of macro pixels, there are even more possibilities for subsampling; we can only cite a few methods here to illustrate the essence of the invention.
- Figure 23 shows a simplified processing of another two-layer photosensitive device during subsampling: the first merging process adds (or averages) pixels, which first yields CyYeMgB macro pixels; the third merging process then obtains the YUV color from the four points through the color conversion, thereby achieving a 2x2 sub-sampling.
- the CyYeMgB macro pixels can also be merged by the same color (in a signal averaging manner) instead of using the full sampling mode, thereby obtaining a higher-multiple sub-sampling.
- the CyYeMgB macro pixels in the figure can also be replaced by BRGB macro pixels of a Bayer-like pattern.
- CyYeMgB is just a special case of CyYeMgX, where X can be R, G, or B.
- Figure 26 is a specific example (using the photosensitive pixel shown in Figure 17) illustrating the relationship between the control signals in Figure 25 (row selection, row control vector, column selection, column control vector) and the corresponding control signals of the photosensitive pixels.
- Fig. 26 is a diagram showing the signal sharing of the Gr pixel and the B pixel in Fig. 17 (TG5 is omitted).
- the row selection signal Row[i] and the column selection signal Col[j] are clearly marked.
- the reset signal RS1 and the transfer gate control signal RS2 (TG1 or TG3) belong to the row control signal.
- RS1 is shared by two rows, while RS2 has one line per row (for example, TG1 belongs to RS[i] and TG3 belongs to RS[i+1]).
- TG5 in Figure 17 (omitted in Figure 26) belongs to the column control signal T[j]. That is to say, as far as possible, the pixels only undergo row operations (identical for all pixels of the same row) and column operations (identical for all pixels of the same column), rather than operations that differ for each pixel, in order to reduce complexity.
- the different reading and subsampling circuits may be implemented by a circuit similar to that shown in FIG. 25, which includes: a pixel array containing a plurality of macro pixels, a row address decoding controller, a column address decoding controller, a sampling control circuit, an amplification and analog-to-digital conversion module, a color conversion, subsampling and image processing module, an output control module, a chip master control module (the CC module in Figure 25), and other possible modules.
- the first merging unit, the second merging unit, and the third merging unit implement the several combined sampling processes described above.
- these units are only a functional division of the device; physically, these functional units can be implemented by one physical module, by a combination of several physical modules, or can all be integrated into one physical module.
- the descriptions of the first merging unit, the second merging unit, and the third merging unit herein are merely described in terms of their functions, and the physical implementation thereof is not specifically limited.
- the row address decoding controller and the column address decoding controller cooperate to implement the required sub-sampling functions.
- the row address decoding controller outputs two types of signals: the row selection signal Row[i] (one line per row) and the row control vector signal RS[i] (one or more lines per row), where i is the index of the row.
- the column address decoding controller outputs two types of signals: the column selection signal Col[j] (one line per column) and the column control vector signal T[j] (one or more lines per column), where j is the index of the column.
- RS[i] and T[j] are used to control the reset, clearing, exposure time control, charge transfer, pixel merging, and pixel reading of the photosensitive pixels. Due to the symmetry of rows and columns, RS[i] and T[j] have many specific implementations. Signals such as TG1-TG5 and Vb1-Vb4 shown in the figures, as well as the RS, S, and SF signals shown in Figure 18, are all included in RS[i] and T[j]. The invention is not limited by the specific implementation of these signals.
- the sub-sampling after the first combined sampling (MxN sub-sampling in total), that is, the second combined sampling process, can be done separately or in combination by several means: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling.
- the third merge sampling process, if any, can be done separately or in combination by two methods: a color space transformation method and a back-end digital image scaling method.
- each macro pixel can be composed of 4T active pixels with two opaque FDs, in which case the read circuit can use a 4-point bridge sharing mode (Fig. 18).
- such a photosensitive device adopts a charge addition manner when performing color merging in the first two-row, two-column, or two-row-and-two-column combined sampling (sub-sampling).
- this macro pixel also makes possible subsequent full-image sampling with progressive scan and interlaced or inter-row reading.
- the read circuit can also adopt a 4-point bridge sharing mode (Fig. 18), a 6-point bridge sharing mode (Fig. 19), or an 8-point bridge sharing mode (Fig. 20).
- when such a photosensitive device adopts the charge addition manner for color merging in the first two-row, two-column, or two-row-and-two-column combined sampling (sub-sampling), the upper limit of the improvement of the signal-to-noise ratio is N (the number of merged pixels);
- with signal averaging, the upper limit of the improvement of the signal-to-noise ratio is √N.
- for each supported MxN sampling factor (rows reduced by a factor of M, columns by a factor of N), the row address decoding controller and the column address decoding controller, according to the MxN sampling factor and the image region requirements, set all Row[i] and RS[i] values of the rows to be merged for each output row to high or low, and set all Col[j] and T[j] values of the columns to be merged for each output column to high or low, so that all pixel (charge/voltage) values that need to be merged are output to the output bus (via the read circuits) in the normal reading order. At the same time, if necessary, the row and column address decoding controllers also perform the required row-skipping and column-skipping operations according to the MxN sampling factor and image region requirements.
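- An illustrative software model of this selection logic (a sketch only; the function name and the assumption that merged rows and columns form consecutive blocks are ours, while the real controller asserts the Row[i]/RS[i] and Col[j]/T[j] lines in hardware):

```python
def merge_groups(num_rows, num_cols, M, N, skip_rows=(), skip_cols=()):
    """For MxN sub-sampling, list which Row[i] / Col[j] lines are asserted together.

    Rows are reduced by a factor of M and columns by a factor of N; indices in
    skip_rows / skip_cols model the controller's row-skipping / column-skipping.
    """
    rows = [i for i in range(num_rows) if i not in skip_rows]
    cols = [j for j in range(num_cols) if j not in skip_cols]
    row_groups = [rows[k:k + M] for k in range(0, len(rows) - M + 1, M)]
    col_groups = [cols[k:k + N] for k in range(0, len(cols) - N + 1, N)]
    return [(rg, cg) for rg in row_groups for cg in col_groups]

# Example: 2x2 merging of an 8x8 region while skipping row 5.
for rg, cg in merge_groups(8, 8, 2, 2, skip_rows={5})[:3]:
    print("assert Row/RS of rows", rg, "and Col/T of columns", cg, "together")
```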
- a simple clearing control is to zero all of Vb1 and Vb2, which requires Vb1 and Vb2 to be row control vector signals.
- first, FD1 and FD2 are reset (RS1 in Figure 26 is set to zero) while TG1 and TG2 are turned on (RS2 in Figure 26 is set high), so that the charge in the Gr and R photodiodes of the photosensitive pixel is cleared. After that, RS1 is set high and RS2 is set to zero. Thereafter, under illumination, the photodiodes of Gr and R begin to accumulate charge.
- the first reading method is to directly turn on TG1/RS2 and Row[i], transfer the charge in Gr to FD1, and read the charge value of Gr (by charge-to-voltage conversion).
- the second method is, after the last step of the first method has read the charge value of Gr, to reset FD1 and read the charge (voltage) of FD1 in the reset state, so that correlated sampling can be performed with the Gr value just read.
- the third method is to reset FD1 and sample it before reading the charge value of Gr; this method interferes with the value in Gr, so it is not comparable to the second.
- the column address decoding controller must assert the column selection signal Col[j] corresponding to Gr to output the measurement of Gr (possibly twice, one of them being the measurement in the reset state) to the amplification and analog-to-digital conversion module.
- the chip master control module CC can determine the color of the pixel being read and process it accordingly; different colors may enter different amplification circuits and undergo different analog-to-digital conversion processes to obtain digital signals.
- the digital signal of the photosensitive pixel will be placed in the buffer for further processing by the color conversion and sub-sampling and image processing modules.
- the chip master control module CC will apply the corresponding control so that the digital signal of the photosensitive pixel skips the color conversion and sub-sampling module and directly enters the image processing module.
- after being processed by the image processing module contained in the photosensitive device, the data is output by the output module through the external interface of the photosensitive device.
- in interlaced reading, when the pixels of an even row (the first row) have all been read, the row address decoding controller does not immediately select the next row for reading; instead, the charge of the next odd row (the second row) is transferred to the FD it shares with the even row, and then the third row is read.
- the order of reading the rows of the first field is 0, 3, 4, 7, 8, 11, 12, 15..., and the reading order of the second field is 1, 2, 5, 6, 9, 10, 13, 14.... Of course, more complicated orders are also possible.
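- A small helper that reproduces exactly this read order (a sketch; in the device the order is produced by the row address decoding controller, not software):

```python
def interlaced_order(num_rows):
    """First field reads rows 0, 3, 4, 7, 8, ...; second field reads 1, 2, 5, 6, ..."""
    first = [r for r in range(num_rows) if r % 4 in (0, 3)]
    second = [r for r in range(num_rows) if r % 4 in (1, 2)]
    return first, second

print(interlaced_order(16))
# ([0, 3, 4, 7, 8, 11, 12, 15], [1, 2, 5, 6, 9, 10, 13, 14])
```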
- the rows that are skipped over in between are temporarily stored in the FDs that have already been used once, and wait to be read in the next field.
- the difference between this progressive-scan, interlaced or inter-row reading mode and the field scanning mode of conventional television is that in the progressive-scan, interlaced or inter-row mode of the present invention the photosensitive timing of the pixels is completely progressive, whereas in the field scanning mode it is not.
- in practice only a few sub-sampling factors MxN need to be supported; accordingly, the chip master control module CC, the row address decoding controller, and the column address decoding controller need only consider the supported MxN subsampling factors. For example, a 5-megapixel sensor may only need to support the four cases 2x2, 2x1, 4x4, and 8x8.
- the second merge sampling process usually does not involve charge addition, so the following three methods are generally used: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling. These three methods are very traditional and simple, well known to those skilled in the art, and will not be described here.
- the third merge sampling process is done in the digital image space, using relatively standard digital image scaling techniques. Below we describe in detail only the signal control flow of the first combined sampling process, so that the use of the present invention is easier to understand.
- the row address decoding controller zeroes (resets) RS1 corresponding to FD1, as shown in the figure.
- TG1 and TG3 (RS2[i] and RS2[i+1]) are turned on, and the charges of the Gr and B photodiodes (PD) are transferred to FD1.
- the first two steps can be performed simultaneously on all the pixels of the i-th and (i+1)-th rows, and the third and fourth steps then read the merged pixels sequentially. Therefore, if correlated sampling is not performed, one pixel can be read in an average of one clock pulse; if it is, one pixel is read in an average of two clock pulses. This reading is done in position-priority order.
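- The sequence above can be summarized as the following control-flow sketch (the step labels and the placement of the optional reset-level read for correlated sampling are our simplification of the description; signal names follow Figs. 17 and 26 as referenced in the text):

```python
def first_merge_row_pair(i, correlated=False):
    """Control-flow sketch for charge-addition merging of rows i and i+1 into FD1."""
    steps = [
        f"step 1: zero RS1 -> reset FD1 for all merged pixels of rows {i} and {i + 1}",
        f"step 2: raise RS2[{i}] and RS2[{i + 1}] (TG1, TG3 on) -> Gr and B charges add in FD1",
        f"step 3: assert Row[{i}] and scan Col[j] -> read each merged value in turn",
    ]
    if correlated:
        steps.append("step 4: reset FD1 and read its reset level for correlated sampling")
    clocks_per_pixel = 2 if correlated else 1    # as stated in the description above
    return steps, clocks_per_pixel
```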
- this merging can also be performed in color-priority order, as follows.
- at time t0, the row address decoding controller zeroes (resets) RS1 corresponding to FD1 and FD2, as shown in FIGS. 17 and 26.
- the first three steps can be performed simultaneously on all the Gr and Gb pixels of the i-th and (i+1)-th rows, and the remaining steps then read the merged pixels sequentially. Therefore, if correlated sampling is not performed, one pixel can be read in an average of one clock pulse; if it is, one pixel is read in an average of two clock pulses. This type of reading destroys the natural position ordering of the pixels and requires correction in back-end processing. To maintain consistency, the first merging method can also be done in color-priority mode.
- the second processing method is to use position priority: first the first pair of Gr and Gb is merged and sampled, then the first pair of B and R is merged and sampled, and so on.
- the timing of the control signals in this manner is similar to that of the first processing method described above, but the pixels can only be processed serially, not in parallel; that is, the second merged pixel cannot be processed during the time t0-t5 of the first merged pixel. This requires a relatively high system clock. Fortunately, after subsampling the number of pixels is greatly reduced, so the system clock frequency does not need to be too high.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Color Television Image Signal Generators (AREA)
- Solid State Image Pick-Up Elements (AREA)
- Color Image Communication Systems (AREA)
Description
Claims (23)
- 1. A multispectral photosensitive device, characterized by comprising: a pixel array arranged in rows and columns; a first merging unit for performing pairwise combined sampling between immediately adjacent pixels in the pixel array that are in the same row but different columns, in different rows but the same column, or in different rows and different columns, to obtain sampled data of first merged pixels; and a second merging unit for performing combined sampling on the sampled data of the first merged pixels obtained by the first merging unit, to obtain sampled data of second merged pixels.
- 2. The multispectral photosensitive device according to claim 1, characterized by further comprising a third merging unit for performing combined sampling on the sampled data of the second merged pixels obtained by the second merging unit, to obtain sampled data of third merged pixels.
- 3. The multispectral photosensitive device according to claim 2, characterized in that the pixel merging mode of the first merging unit or the second merging unit is charge accumulation between pixels of the same or different colors, or signal averaging between two pixels of different colors, wherein pixel merging between pixels of different colors follows a color space transformation so as to meet the requirements of color reconstruction.
- 4. The multispectral photosensitive device according to claim 3, characterized in that the charge accumulation is performed in a read capacitor.
- 5. The multispectral photosensitive device according to any one of claims 1 to 4, characterized in that the color-based combined sampling mode of the first merging unit or the second merging unit includes a same-color merging mode, a heterochromatic merging mode, a hybrid merging mode, or a selective discarding of excess colors mode, and the combined sampling modes adopted by the first merging unit and the second merging unit are not both the same-color merging mode.
- 6. The multispectral photosensitive device according to any one of claims 1 to 5, characterized in that the position-based combined sampling mode of the first merging unit or the second merging unit includes at least one of the following: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling.
- 7. The multispectral photosensitive device according to any one of claims 1 to 6, characterized in that the combined sampling mode of the third merging unit includes at least one of a color space transformation mode and a back-end digital image scaling mode.
- 8. The multispectral photosensitive device according to claim 1 or 7, characterized in that the color space transformation includes a transformation from RGB to CyYeMgX space, a transformation from RGB to YUV space, or a transformation from CyYeMgX to YUV space, where X is any one of R, G, and B.
- 9. The multispectral photosensitive device according to any one of claims 1 to 8, characterized in that the pixel array is composed of a plurality of macro pixels each containing at least one basic pixel, and the basic pixels may be passive pixels or active pixels.
- 10. The multispectral photosensitive device according to claim 9, characterized in that the basic pixels in the macro pixel are arranged in a square matrix or a honeycomb.
- 11. The multispectral photosensitive device according to claim 9 or 10, characterized in that the macro pixel is composed in at least one of the following ways: of 3T active pixels without a read capacitor, or of 4T active pixels with one read capacitor.
- 12. The multispectral photosensitive device according to claim 11, characterized in that the 4T active pixels with one read capacitor adopt a 4-point sharing mode, a 6-point sharing mode, or an 8-point sharing mode.
- 13. The multispectral photosensitive device according to claim 9, characterized in that the macro pixel is composed of four basic pixels arranged in a square matrix and two opaque read capacitors located between the two rows, the pixels of the upper row and the pixels of the lower row share one read capacitor, charge transfer can be performed between the two read capacitors, and a read circuit is connected to at least one of the read capacitors.
- 14. The multispectral photosensitive device according to claim 9, characterized in that the macro pixel is composed of basic pixels that are 3T or 4T active pixels with two-point, three-point, or four-point shared read capacitors, adopting a 4-point bridge sharing mode, a 6-point bridge sharing mode, or an 8-point bridge sharing mode.
- 15. The multispectral photosensitive device according to any one of claims 1 to 14, characterized in that the full-image sampling mode of the multispectral photosensitive device includes progressive scan with row-by-row readout, or progressive scan with interlaced or inter-row readout.
- 16. A sampling method of a multispectral photosensitive device, characterized by comprising: a first merging process for performing pairwise combined sampling between immediately adjacent pixels in the pixel array that are in the same row but different columns, in different rows but the same column, or in different rows and different columns, to obtain sampled data of first merged pixels; and a second merging process for performing combined sampling on the sampled data of the first merged pixels obtained by the first merging process, to obtain sampled data of second merged pixels.
- 17. The sampling method according to claim 16, characterized by further comprising a third merging process for performing combined sampling on the sampled data of the second merged pixels obtained by the second merging process, to obtain sampled data of third merged pixels.
- 18. The sampling method according to claim 16 or 17, characterized in that the pixel combined sampling mode of the first merging process or the second merging process is charge addition between pixels of the same or different colors, or signal averaging between pixels of different colors, wherein pixel merging between pixels of different colors follows a color space transformation so as to meet the requirements of color reconstruction.
- 19. The sampling method according to any one of claims 16 to 18, characterized in that the color-based combined sampling mode of the first merging process or the second merging process includes a same-color merging mode, a heterochromatic merging mode, a hybrid merging mode, or a selective discarding of excess colors mode, and at least one of the first merging process and the second merging process is not the same-color merging mode.
- 20. The sampling method according to any one of claims 16 to 19, characterized in that the position-based combined sampling mode of the first merging process or the second combined sampling process includes at least one of the following: automatic averaging of signals output directly to the bus, row-skipping or column-skipping, and pixel-by-pixel sampling.
- 21. The sampling method according to any one of claims 16 to 21, characterized in that the combined sampling modes performed by the third combined sampling process include a color space transformation mode and a back-end digital image scaling mode.
- 22. The sampling method according to claim 18 or 21, characterized in that the color space transformation includes a transformation from RGB to CyYeMgX space, a transformation from RGB to YUV space, or a transformation from CyYeMgX to YUV space, where X is any one of R, G, and B.
- 23. The sampling method according to any one of claims 18 to 22, characterized in that the full-image sampling mode includes progressive scan with row-by-row readout, or progressive scan with interlaced or inter-row readout.
Priority Applications (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020127024252A KR101473424B1 (ko) | 2010-06-01 | 2010-06-01 | 다중 스펙트럼 감광소자 및 그 샘플링 방법 |
| CA2787911A CA2787911C (en) | 2010-06-01 | 2010-06-01 | Multi-spectrum photosensitive devices and methods for sampling the same |
| HUE10852353A HUE039688T2 (hu) | 2010-06-01 | 2010-06-01 | Multispektrális fotoreceptoros készülék és annak mérési módszere |
| US13/699,525 US9100597B2 (en) | 2010-06-01 | 2010-06-01 | Multi-spectrum photosensitive devices and methods for sampling the same |
| PCT/CN2010/073443 WO2011150554A1 (zh) | 2010-06-01 | 2010-06-01 | 一种多光谱感光器件及其采样方法 |
| JP2013512715A JP5775570B2 (ja) | 2010-06-01 | 2010-06-01 | マルチスペクトル感光素子及びそのサンプリング方法 |
| RU2012157297/07A RU2534018C2 (ru) | 2010-06-01 | 2010-06-01 | Мультиспектральные фоточувствительные устройства и способы для их дискретизации |
| EP10852353.1A EP2512126B1 (en) | 2010-06-01 | 2010-06-01 | Multispectral photoreceptive device and sampling method thereof |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2010/073443 WO2011150554A1 (zh) | 2010-06-01 | 2010-06-01 | 一种多光谱感光器件及其采样方法 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011150554A1 true WO2011150554A1 (zh) | 2011-12-08 |
Family
ID=45066114
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2010/073443 Ceased WO2011150554A1 (zh) | 2010-06-01 | 2010-06-01 | 一种多光谱感光器件及其采样方法 |
Country Status (8)
| Country | Link |
|---|---|
| US (1) | US9100597B2 (zh) |
| EP (1) | EP2512126B1 (zh) |
| JP (1) | JP5775570B2 (zh) |
| KR (1) | KR101473424B1 (zh) |
| CA (1) | CA2787911C (zh) |
| HU (1) | HUE039688T2 (zh) |
| RU (1) | RU2534018C2 (zh) |
| WO (1) | WO2011150554A1 (zh) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2014011558A (ja) * | 2012-06-28 | 2014-01-20 | Olympus Corp | 固体撮像装置 |
Families Citing this family (36)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9293500B2 (en) | 2013-03-01 | 2016-03-22 | Apple Inc. | Exposure control for image sensors |
| US9276031B2 (en) | 2013-03-04 | 2016-03-01 | Apple Inc. | Photodiode with different electric potential regions for image sensors |
| US9741754B2 (en) | 2013-03-06 | 2017-08-22 | Apple Inc. | Charge transfer circuit with storage nodes in image sensors |
| US9549099B2 (en) | 2013-03-12 | 2017-01-17 | Apple Inc. | Hybrid image sensor |
| US9319611B2 (en) | 2013-03-14 | 2016-04-19 | Apple Inc. | Image sensor with flexible pixel summing |
| US9596423B1 (en) | 2013-11-21 | 2017-03-14 | Apple Inc. | Charge summing in an image sensor |
| US9596420B2 (en) | 2013-12-05 | 2017-03-14 | Apple Inc. | Image sensor having pixels with different integration periods |
| US9473706B2 (en) | 2013-12-09 | 2016-10-18 | Apple Inc. | Image sensor flicker detection |
| US10285626B1 (en) | 2014-02-14 | 2019-05-14 | Apple Inc. | Activity identification using an optical heart rate monitor |
| US9232150B2 (en) | 2014-03-12 | 2016-01-05 | Apple Inc. | System and method for estimating an ambient light condition using an image sensor |
| US9277144B2 (en) | 2014-03-12 | 2016-03-01 | Apple Inc. | System and method for estimating an ambient light condition using an image sensor and field-of-view compensation |
| US9584743B1 (en) | 2014-03-13 | 2017-02-28 | Apple Inc. | Image sensor with auto-focus and pixel cross-talk compensation |
| US9497397B1 (en) | 2014-04-08 | 2016-11-15 | Apple Inc. | Image sensor with auto-focus and color ratio cross-talk comparison |
| US9538106B2 (en) | 2014-04-25 | 2017-01-03 | Apple Inc. | Image sensor having a uniform digital power signature |
| US9686485B2 (en) * | 2014-05-30 | 2017-06-20 | Apple Inc. | Pixel binning in an image sensor |
| FR3023653B1 (fr) * | 2014-07-09 | 2017-11-24 | Commissariat Energie Atomique | Capteur d'images cmos a echantillonnage multiple correle |
| FR3026877B1 (fr) * | 2014-10-03 | 2018-01-05 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Capteur d'empreintes digitales ou palmaires |
| US9912883B1 (en) | 2016-05-10 | 2018-03-06 | Apple Inc. | Image sensor with calibrated column analog-to-digital converters |
| EP3712945A3 (en) | 2016-09-23 | 2020-12-02 | Apple Inc. | Stacked backside illuminated spad array |
| WO2018140522A2 (en) | 2017-01-25 | 2018-08-02 | Apple Inc. | Spad detector having modulated sensitivity |
| US10656251B1 (en) | 2017-01-25 | 2020-05-19 | Apple Inc. | Signal acquisition in a SPAD detector |
| US10962628B1 (en) | 2017-01-26 | 2021-03-30 | Apple Inc. | Spatial temporal weighting in a SPAD detector |
| FR3064869B1 (fr) * | 2017-03-28 | 2019-05-03 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Capteur d'images |
| US10622538B2 (en) | 2017-07-18 | 2020-04-14 | Apple Inc. | Techniques for providing a haptic output and sensing a haptic input using a piezoelectric body |
| US10440301B2 (en) | 2017-09-08 | 2019-10-08 | Apple Inc. | Image capture device, pixel, and method providing improved phase detection auto-focus performance |
| US10848693B2 (en) | 2018-07-18 | 2020-11-24 | Apple Inc. | Image flare detection using asymmetric pixels |
| US11019294B2 (en) | 2018-07-18 | 2021-05-25 | Apple Inc. | Seamless readout mode transitions in image sensors |
| ES2983837T3 (es) * | 2018-07-31 | 2024-10-24 | Deutsches Krebsforschungszentrum Stiftung Des Oeffentlichen Rechts | Método y sistema para la captación de imágenes aumentadas durante una intervención abierta utilizando información multiespectral |
| US11233966B1 (en) | 2018-11-29 | 2022-01-25 | Apple Inc. | Breakdown voltage monitoring for avalanche diodes |
| JP7757026B2 (ja) * | 2019-11-20 | 2025-10-21 | キヤノン株式会社 | 撮像装置、撮像システム、および移動体 |
| JP2022022121A (ja) | 2020-07-23 | 2022-02-03 | 三星電子株式会社 | イメージセンサ及びイメージ処理方法、並びにイメージセンサを含む電子装置 |
| US11563910B2 (en) | 2020-08-04 | 2023-01-24 | Apple Inc. | Image capture devices having phase detection auto-focus pixels |
| US12356740B2 (en) | 2020-09-25 | 2025-07-08 | Apple Inc. | Transistor integration with stacked single-photon avalanche diode (SPAD) pixel arrays |
| US11546532B1 (en) | 2021-03-16 | 2023-01-03 | Apple Inc. | Dynamic correlated double sampling for noise rejection in image sensors |
| US12192644B2 (en) | 2021-07-29 | 2025-01-07 | Apple Inc. | Pulse-width modulation pixel sensor |
| US12069384B2 (en) | 2021-09-23 | 2024-08-20 | Apple Inc. | Image capture devices having phase detection auto-focus pixels |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6693670B1 (en) | 1999-07-29 | 2004-02-17 | Vision - Sciences, Inc. | Multi-photodetector unit cell |
| US6801258B1 (en) | 1998-03-16 | 2004-10-05 | California Institute Of Technology | CMOS integration sensor with fully differential column readout circuit for light adaptive imaging |
| CN1717003A (zh) * | 2004-02-11 | 2006-01-04 | 三星电子株式会社 | 图像传感装置中具有更高显示质量的子采样 |
| US7091466B2 (en) | 2003-12-19 | 2006-08-15 | Micron Technology, Inc. | Apparatus and method for pixel binning in an image sensor |
| US7319218B2 (en) | 2003-11-13 | 2008-01-15 | Micron Technology, Inc. | Method and apparatus for pixel signal binning and interpolation in column circuits of a sensor circuit |
| CN101223772A (zh) * | 2005-07-20 | 2008-07-16 | 伊斯曼柯达公司 | 基于亮度的像素重新分级和平均 |
| CN101834974A (zh) * | 2009-03-09 | 2010-09-15 | 博立码杰通讯(深圳)有限公司 | 一种多光谱感光器件及其采样方法 |
| CN101853861A (zh) * | 2009-04-01 | 2010-10-06 | 博立码杰通讯(深圳)有限公司 | 一种感光器件及其读取方法、读取电路 |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2152692C1 (ru) * | 1999-09-21 | 2000-07-10 | Бобрышев Владимир Дмитриевич | Преобразователь столбца теплового изображения в электрический сигнал |
| JP3515467B2 (ja) | 2000-02-18 | 2004-04-05 | 三洋電機株式会社 | ディジタルカメラ |
| US7424207B2 (en) | 2000-02-18 | 2008-09-09 | Sanyo Electric Co., Ltd. | Digital camera |
| JP4557540B2 (ja) | 2002-12-20 | 2010-10-06 | パナソニック株式会社 | 固体撮像装置及びその駆動方法、並びにカメラ |
| JP4492250B2 (ja) * | 2004-08-11 | 2010-06-30 | ソニー株式会社 | 固体撮像素子 |
| JP4306603B2 (ja) | 2004-12-20 | 2009-08-05 | ソニー株式会社 | 固体撮像装置および固体撮像装置の駆動方法 |
| US7417670B1 (en) | 2005-01-12 | 2008-08-26 | Ambarella, Inc. | Digital video camera with binning or skipping correction |
| US7705900B2 (en) * | 2005-06-01 | 2010-04-27 | Eastman Kodak Company | CMOS image sensor pixel with selectable binning and conversion gain |
| JP4894275B2 (ja) * | 2006-01-20 | 2012-03-14 | ソニー株式会社 | 固体撮像装置 |
| US7916362B2 (en) | 2006-05-22 | 2011-03-29 | Eastman Kodak Company | Image sensor with improved light sensitivity |
| JP4855192B2 (ja) * | 2006-09-14 | 2012-01-18 | 富士フイルム株式会社 | イメージセンサ及びデジタルカメラ |
| KR100818987B1 (ko) | 2006-09-19 | 2008-04-04 | 삼성전자주식회사 | 이미지 촬상 장치 및 상기 이미지 촬상 장치의 동작 방법 |
| JP4386096B2 (ja) * | 2007-05-18 | 2009-12-16 | ソニー株式会社 | 画像入力処理装置、および、その方法 |
| US7924332B2 (en) | 2007-05-25 | 2011-04-12 | The Trustees Of The University Of Pennsylvania | Current/voltage mode image sensor with switchless active pixels |
| CN101345248B (zh) | 2007-07-09 | 2010-07-14 | 博立码杰通讯(深圳)有限公司 | 多光谱感光器件及其制作方法 |
| JP2009021809A (ja) * | 2007-07-11 | 2009-01-29 | Canon Inc | 撮像装置の駆動方法、撮像装置、及び撮像システム |
| US7755121B2 (en) * | 2007-08-23 | 2010-07-13 | Aptina Imaging Corp. | Imagers, apparatuses and systems utilizing pixels with improved optical resolution and methods of operating the same |
| US8089522B2 (en) * | 2007-09-07 | 2012-01-03 | Regents Of The University Of Minnesota | Spatial-temporal multi-resolution image sensor with adaptive frame rates for tracking movement in a region of interest |
| JP5026951B2 (ja) * | 2007-12-26 | 2012-09-19 | オリンパスイメージング株式会社 | 撮像素子の駆動装置、撮像素子の駆動方法、撮像装置、及び撮像素子 |
| US7667169B2 (en) | 2008-05-22 | 2010-02-23 | Omnivision Technologies, Inc. | Image sensor with simultaneous auto-focus and image preview |
| JP5272634B2 (ja) * | 2008-06-20 | 2013-08-28 | ソニー株式会社 | 固体撮像装置、固体撮像装置の信号処理方法および撮像装置 |
| US8471939B2 (en) | 2008-08-01 | 2013-06-25 | Omnivision Technologies, Inc. | Image sensor having multiple sensing layers |
| US7777171B2 (en) * | 2008-08-26 | 2010-08-17 | Eastman Kodak Company | In-pixel summing of charge generated by two or more pixels having two reset transistors connected in series |
| JP5253956B2 (ja) * | 2008-10-16 | 2013-07-31 | シャープ株式会社 | 固体撮像装置及びその駆動方法、並びに電子情報機器 |
| US8130302B2 (en) * | 2008-11-07 | 2012-03-06 | Aptina Imaging Corporation | Methods and apparatus providing selective binning of pixel circuits |
| JP5029624B2 (ja) * | 2009-01-15 | 2012-09-19 | ソニー株式会社 | 固体撮像装置及び電子機器 |
| US8913166B2 (en) * | 2009-01-21 | 2014-12-16 | Canon Kabushiki Kaisha | Solid-state imaging apparatus |
-
2010
- 2010-06-01 WO PCT/CN2010/073443 patent/WO2011150554A1/zh not_active Ceased
- 2010-06-01 HU HUE10852353A patent/HUE039688T2/hu unknown
- 2010-06-01 CA CA2787911A patent/CA2787911C/en not_active Expired - Fee Related
- 2010-06-01 US US13/699,525 patent/US9100597B2/en not_active Expired - Fee Related
- 2010-06-01 EP EP10852353.1A patent/EP2512126B1/en not_active Not-in-force
- 2010-06-01 RU RU2012157297/07A patent/RU2534018C2/ru not_active IP Right Cessation
- 2010-06-01 KR KR1020127024252A patent/KR101473424B1/ko not_active Expired - Fee Related
- 2010-06-01 JP JP2013512715A patent/JP5775570B2/ja not_active Expired - Fee Related
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6801258B1 (en) | 1998-03-16 | 2004-10-05 | California Institute Of Technology | CMOS integration sensor with fully differential column readout circuit for light adaptive imaging |
| US6693670B1 (en) | 1999-07-29 | 2004-02-17 | Vision - Sciences, Inc. | Multi-photodetector unit cell |
| US7319218B2 (en) | 2003-11-13 | 2008-01-15 | Micron Technology, Inc. | Method and apparatus for pixel signal binning and interpolation in column circuits of a sensor circuit |
| US7091466B2 (en) | 2003-12-19 | 2006-08-15 | Micron Technology, Inc. | Apparatus and method for pixel binning in an image sensor |
| CN1717003A (zh) * | 2004-02-11 | 2006-01-04 | 三星电子株式会社 | 图像传感装置中具有更高显示质量的子采样 |
| CN101223772A (zh) * | 2005-07-20 | 2008-07-16 | 伊斯曼柯达公司 | 基于亮度的像素重新分级和平均 |
| CN101834974A (zh) * | 2009-03-09 | 2010-09-15 | 博立码杰通讯(深圳)有限公司 | 一种多光谱感光器件及其采样方法 |
| CN101853861A (zh) * | 2009-04-01 | 2010-10-06 | 博立码杰通讯(深圳)有限公司 | 一种感光器件及其读取方法、读取电路 |
Non-Patent Citations (4)
| Title |
|---|
| M. KIMATA: "Handbook of Infrared Detection Technologies", 2002, ELSEVIER SCIENCE LTD., article "Silicon infrared focal plane arrays", pages: 352 - 392 |
| See also references of EP2512126A4 |
| T. TOKUDA ET AL.: "A CMOS image sensor with eye-safe detection function backside carrier injection", J. INST IMAGE INFORMATION & TELEVISION ENG., vol. 60, no. 3, March 2006 (2006-03-01), pages 366 - 372 |
| T. TOKUDA ET AL.: "Backside-hybrid Photodetector for trans-chip detection of NIR light", IEEE WORKSHOP ON CHARGE-COUPLED DEVICES & ADVANCED IMAGE SENSORS, ELMAU, GERMANY, May 2003 (2003-05-01) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2014011558A (ja) * | 2012-06-28 | 2014-01-20 | Olympus Corp | 固体撮像装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20130068934A1 (en) | 2013-03-21 |
| EP2512126A1 (en) | 2012-10-17 |
| KR20130050911A (ko) | 2013-05-16 |
| RU2534018C2 (ru) | 2014-11-27 |
| EP2512126B1 (en) | 2018-07-25 |
| EP2512126A4 (en) | 2014-01-01 |
| HUE039688T2 (hu) | 2019-01-28 |
| JP2013529035A (ja) | 2013-07-11 |
| CA2787911A1 (en) | 2011-12-08 |
| CA2787911C (en) | 2017-07-11 |
| JP5775570B2 (ja) | 2015-09-09 |
| RU2012157297A (ru) | 2014-07-20 |
| US9100597B2 (en) | 2015-08-04 |
| KR101473424B1 (ko) | 2014-12-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2011150554A1 (zh) | 一种多光谱感光器件及其采样方法 | |
| US6784928B1 (en) | Solid state image pickup device and signal reading method thereof | |
| US5955753A (en) | Solid-state image pickup apparatus and image pickup apparatus | |
| JP5103679B2 (ja) | 固体撮像装置及びその駆動方法 | |
| CN102209207B (zh) | 固态成像装置和成像装置 | |
| US20080218598A1 (en) | Imaging method, imaging apparatus, and driving device | |
| JP5895525B2 (ja) | 撮像素子 | |
| JP2011082768A (ja) | 固体撮像装置および撮像装置 | |
| JP3501682B2 (ja) | カラー撮像装置及びそれを用いた撮像システム | |
| US20240214707A1 (en) | Solid-state imaging device, method for manufacturing solid-state imaging device, and electronic apparatus | |
| TW202147827A (zh) | 攝像裝置及電子機器 | |
| JP2001292453A (ja) | カラー撮像装置及びそれを用いた撮像システム | |
| CN101834974A (zh) | 一种多光谱感光器件及其采样方法 | |
| JP3102557B2 (ja) | 固体撮像素子およびその駆動方法 | |
| Nabeyama et al. | All solid state color camera with single-chip MOS imager | |
| JPWO2010090167A1 (ja) | 固体撮像装置 | |
| JP2000295530A (ja) | 固体撮像装置 | |
| JP3501686B2 (ja) | カラー撮像装置及びそれを用いた撮像システム | |
| US20070085924A1 (en) | Control circuit for reading out signal charges from main and subsidiary pixels of a solid-state image sensor separately from each other in interlace scanning | |
| Miyatake et al. | Transversal-readout architecture for CMOS active pixel image sensors | |
| JP5767320B2 (ja) | 感光素子、感光素子の読み込み方法、及び感光素子の読み込み回路 | |
| JP2725265B2 (ja) | 固体撮像装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10852353 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 1646/MUMNP/2012 Country of ref document: IN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2010852353 Country of ref document: EP |
|
| ENP | Entry into the national phase |
Ref document number: 2787911 Country of ref document: CA |
|
| ENP | Entry into the national phase |
Ref document number: 2013512715 Country of ref document: JP Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 20127024252 Country of ref document: KR Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13699525 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2012157297 Country of ref document: RU Kind code of ref document: A |