HK1216051B - Visible and infrared image sensor - Google Patents
- Publication number
- HK1216051B, HK16103952.2A
- Authority
- HK
- Hong Kong
- Prior art keywords
- pixels
- semiconductor layer
- layer
- subset
- pixel
- Prior art date
Abstract
The subject invention relates to a visible and infrared image sensor. The sensor includes a pixel array having a SixGey layer disposed on a first semiconductor layer. A plurality of pixels is disposed in the first semiconductor layer. The plurality of pixels includes: (1) a first portion of pixels separated from the SixGey layer by a spacer region and (2) a second portion of pixels including a first doped region in contact with the SixGey layer. The pixel array also includes pinning wells disposed between individual pixels in the plurality of pixels. A first portion of the pinning wells extends through the first semiconductor layer. A second portion of the pinning wells extends through the first semiconductor layer and the SixGey layer.
Description
Technical Field
The present invention relates generally to image sensors, and particularly, but not exclusively, to visible and infrared image sensors.
Background
An image sensor is an electronic device that converts light (in the form of an optical image) into an electronic signal. Semiconductor-based image sensors have become ubiquitous in modern electronic devices, such as cellular telephones, camcorders, and desktop/laptop computers. Modern image sensors are typically either semiconductor charge-coupled devices (CCDs) or active pixel sensors fabricated in complementary metal oxide semiconductor (CMOS) or N-type metal oxide semiconductor (NMOS) technology. These devices are typically used to capture visible light; however, in some applications it is desirable to detect light outside the visible spectrum.
Infrared (IR) light is a portion of the electromagnetic spectrum. All objects emit a certain amount of blackbody radiation depending on their temperature. Generally, the higher the temperature of an object, the more IR light it emits as blackbody radiation. Because ambient light is not required, image sensors fabricated to detect IR light function even in total darkness. Thus, IR image sensors may be helpful in rescue operations, night photography, and other dark conditions.
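As a rough numerical illustration of the blackbody point above (this sketch is not part of the disclosed embodiments), Wien's displacement law gives the wavelength at which an object's thermal emission peaks, showing why room-temperature objects emit almost entirely in the infrared:

```python
def peak_wavelength_um(temp_kelvin):
    """Wien's displacement law: the wavelength (in micrometers) at which
    blackbody emission peaks, using b ~= 2897.8 um*K."""
    WIEN_B_UM_K = 2897.8
    return WIEN_B_UM_K / temp_kelvin

# A warm object near body temperature (~310 K) peaks around 9.3 um,
# deep in the infrared, while the sun (~5800 K) peaks near 0.5 um,
# inside the visible band.
body_peak_um = peak_wavelength_um(310.0)
sun_peak_um = peak_wavelength_um(5800.0)
```

The hotter the emitter, the shorter the peak wavelength, which is why IR-capable sensors are useful precisely where no visible ambient light exists.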
More useful than an image sensor that can detect only infrared light is an image sensor that can detect both IR and visible light. However, detecting infrared light generally requires low bandgap materials that are difficult to integrate with conventional image sensor fabrication processes. Thus, it has proven challenging to combine infrared imaging technology with visible light imaging technology. This difficulty in fabricating hybrid visible-IR image sensors has led to hybrid sensors that suffer from low IR sensitivity, visible light contamination, semiconductor defects, and the like.
Disclosure of Invention
One embodiment of the present application relates to a pixel array comprising: a SixGey layer disposed on a first semiconductor layer; a plurality of pixels disposed in the first semiconductor layer, the plurality of pixels including: (1) a first pixel portion, wherein the first pixel portion is separated from the SixGey layer by a spacer region; and (2) a second pixel portion, wherein each pixel of the second pixel portion includes a first doped region in contact with the SixGey layer; and pinning wells disposed between individual pixels of the plurality of pixels, wherein a first portion of the pinning wells extends through the first semiconductor layer and a second portion of the pinning wells extends through the first semiconductor layer and the SixGey layer.
Another embodiment of the present application relates to an image sensor, comprising: a second semiconductor layer disposed on a backside of the first semiconductor layer; one or more groups of pixels disposed in a front side of the first semiconductor layer, the one or more groups of pixels including: a first pixel portion, wherein the first pixel portion is separated from the second semiconductor layer by a spacing region; a second pixel portion, wherein a first doped region of the second pixel portion is in contact with the second semiconductor layer, and wherein the first doped region has the same majority charge carrier type as the second semiconductor layer; pinning wells separating individual pixels in the group of pixels, wherein the pinning wells extend through the first semiconductor layer; and a deep pinned well separating the one or more groups of pixels, wherein the deep pinned well extends through the first semiconductor layer and the second semiconductor layer.
Yet another embodiment of the present application relates to an image sensor manufacturing method, the method comprising: forming a second semiconductor layer on a backside of the first semiconductor layer; forming one or more groups of pixels disposed in a front side of the first semiconductor layer, the one or more groups of pixels including: a first pixel portion, wherein the first pixel portion is separated from the second semiconductor layer by a spacing region; a second pixel portion, wherein a first doped region of the second pixel portion is in contact with the second semiconductor layer, and wherein the first doped region has the same majority charge carrier type as the second semiconductor layer; a pinning well separating individual pixels of the one or more groups of pixels, wherein the pinning well extends through the first semiconductor layer; and a deep pinned well separating the one or more groups of pixels, wherein the deep pinned well extends through the first semiconductor layer and the second semiconductor layer.
Drawings
Non-limiting and non-exhaustive examples of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Figure 1 is a cross-sectional view of an example of a visible and infrared image sensor pixel array according to the teachings of this disclosure.
FIG. 2 is a block diagram illustrating one example of visible and infrared image sensors according to the teachings of this disclosure.
Figure 3 is a flow chart of a process for forming visible and infrared image sensors according to the teachings of this disclosure.
Figures 4A-4C show a process for forming visible and infrared image sensors according to the teachings of this disclosure.
Detailed Description
Examples of systems and methods for forming visible and infrared (hereinafter "IR") image sensors are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments. However, those skilled in the art having the benefit of this disclosure will recognize that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, or materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.
Reference throughout this specification to "one example" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one example" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more examples.
Several technical terms are used throughout this specification. Unless specifically defined herein or otherwise clearly implied by the context of its use, these terms have the ordinary meaning in the technical field from which they come. It should be noted that elements and compounds may be referred to interchangeably by their appropriate names or element symbols (e.g., silicon versus Si).
Fig. 1 is a cross-sectional view of an example of a visible and infrared image sensor pixel array 100 (hereinafter "pixel array") according to the teachings of this disclosure. The pixel array 100 includes a SixGey layer 109 disposed on a first semiconductor layer 131. In one example, the first semiconductor layer 131 includes silicon. A plurality of pixels (e.g., red pixel 161, green pixel 163, IR pixel 165, and blue pixel 167) is disposed proximate to the front side 149 of the first semiconductor layer 131. The plurality of pixels includes a first pixel portion, each pixel of which includes a first doped region 119 separated from the SixGey layer 109 by a spacer region 111. The plurality of pixels also includes a second pixel portion, wherein each pixel in the second pixel portion has a first doped region 120 formed proximate to the backside 151 of the first semiconductor layer 131 and in contact with the SixGey layer 109. In one example, the first pixel portion includes the red pixel 161, the green pixel 163, and the blue pixel 167, and the second pixel portion includes the infrared pixel 165. In one example, the SixGey layer 109 is n-type, the spacer region 111 is p-type, and the first doped regions 119/120 are n-type. However, in another example, the polarity of the layers/regions may be reversed.
Pixel array 100 may also include pinning wells 113/115 disposed between individual pixels (e.g., the red pixel 161, green pixel 163, IR pixel 165, and blue pixel 167). A first portion of the pinning wells 113 extends through the first semiconductor layer 131 and separates the individual pixels. A second portion of the pinning wells 115 extends through the first semiconductor layer 131 and through the SixGey layer 109. In one example, the second portion of the pinning wells 115 separates a group of pixels including at least the red pixel 161, green pixel 163, blue pixel 167, and infrared pixel 165. In the same or another example, the pinning wells 113/115 may comprise a p-type semiconductor.
In one example, the pixel array 100 may further include a filter layer 137, which may include a red filter 123, a green filter 125, a blue filter 129, and an infrared filter 127. In one example, the red, green, and blue filters 123, 125, 129 are positioned to transmit visible and infrared light to the first pixel portion (e.g., red, green, and blue pixels 161, 163, 167). The infrared filter 127 is positioned to transmit at least infrared light to the second pixel portion (e.g., the IR pixel 165). Further, the filter layer 137 may be arranged in a Bayer (Bayer) pattern, an X-cross pattern, an EXR pattern, or the like.
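As an illustration of how such a filter layer might tile (the exact layout in the figures is not specified here; the 2x2 unit cell below is a hypothetical RGB-IR arrangement, not the claimed pattern), a Bayer-like cell with one green site replaced by an IR filter can be generated as:

```python
def rgb_ir_pattern(rows, cols):
    """Tile a hypothetical 2x2 RGB-IR unit cell (R G / IR B) over a
    rows-by-cols filter array; one of many possible layouts."""
    cell = [["R", "G"], ["IR", "B"]]
    return [[cell[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

# Every 2x2 neighborhood then contains one red, one green, one blue,
# and one infrared filter site.
pattern = rgb_ir_pattern(4, 4)
```

Other tilings (X-cross, EXR, and the like) change only the contents and period of the unit cell.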
In one example, the pixel array 100 may also include a cap layer 107, an oxide layer 105, and a carrier wafer 103. After forming the second semiconductor layer 109, a cap layer 107 may be formed on the second semiconductor layer 109. In one example, the cap layer 107 is a p + Si cap layer. Next, an oxide layer 105 may be formed on the cap layer 107. The oxide layer 105 may be used to secure the carrier wafer 103 to existing layers of the device architecture (e.g., the cap layer 107, the second semiconductor layer 109, and the first semiconductor layer 131). The carrier wafer 103 allows processing of the remaining device architectures.
It should be noted that other elements of the optical device architecture not depicted may also be formed. In one example, a microlens layer (including individual microlenses) is fabricated proximate to the filter layer 137. The microlens layer is positioned to transmit incident light through the filter layer 137, the anti-reflective coating 135, and the isolation layer 133 into the individual pixels.
The pixel array 100 is capable of detecting both visible light and infrared light. As illustrated in the example depicted in fig. 1, both visible light and IR photons are directed through the filter layer 137, the anti-reflective coating 135, the isolation layer 133, and into the first semiconductor layer 131.
Visible light is absorbed in the first semiconductor layer 131, thereby generating charge in the first doped region 119/120. A p-n junction is formed at the interface between the first doped region 119/120 and the second doped region 141. In one example, the first doped region 119/120 is n-type and the second doped region 141 is p-type; however, in another example, the polarity of the first doped region 119/120 and the second doped region 141 may be reversed. The accumulated image charge can be transferred to the floating diffusion 143 by applying a voltage to the transfer gate 145. Subsequently, the charge can be read out of the floating diffusion 143 via the conductive interconnect 147.
Conversely, IR light passes through the first semiconductor layer 131 and into the SixGey layer 109. The SixGey layer 109 has a lower bandgap than the silicon that may be used to form the first semiconductor layer 131. Thus, the SixGey layer 109 is able to absorb infrared photons more efficiently. Once a photon is absorbed in the SixGey layer 109, the resulting charge is transferred into the first doped region 120 because the first doped region 120 is in contact with the SixGey layer 109. A p-n junction is formed at the interface between the first doped region 120 and the second doped region 141. In one example, the first doped region 119/120 is n-type and the second doped region 141 is p-type. The accumulated image charge can be transferred to the floating diffusion 143 by applying a voltage to the transfer gate 145. Subsequently, the charge can be read out of the floating diffusion 143 via the conductive interconnect 147. This image charge can be used to form an infrared image or a hybrid visible-infrared image.
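The bandgap narrowing with Ge content can be sketched numerically. The linear interpolation below between bulk Si (~1.12 eV) and bulk Ge (~0.66 eV) is a first-order assumption — the real relaxed-alloy curve is nonlinear — but it shows why adding Ge pushes the absorption cutoff toward longer IR wavelengths:

```python
def sige_bandgap_ev(x_ge):
    """First-order linear interpolation of the Si(1-x)Ge(x) bandgap
    between bulk Si (~1.12 eV) and bulk Ge (~0.66 eV). The real
    relaxed-alloy curve is nonlinear; this is a sketch only."""
    E_SI_EV, E_GE_EV = 1.12, 0.66
    return E_SI_EV + (E_GE_EV - E_SI_EV) * x_ge

def cutoff_wavelength_um(bandgap_ev):
    """Photons with wavelength longer than ~1.24/Eg (in um) lack the
    energy to excite a carrier across the bandgap."""
    return 1.24 / bandgap_ev
```

Under this sketch, a 30% Ge alloy absorbs out to a noticeably longer wavelength than plain silicon, which is the mechanism the SixGey layer exploits.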
It should be noted that in other examples (not depicted), multiple pixels may share a single floating diffusion 143, including examples in which two, four, six, and eight pixels all share the same floating diffusion 143. In an example where multiple pixels share the same floating diffusion 143, each pixel has its own transfer gate 145. Thus, charge can be read out from individual pixels one at a time by applying a voltage to one transfer gate 145 at a time. Conversely, charge can be read out from multiple pixels simultaneously by applying voltages to several transfer gates 145 in unison.
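The shared-floating-diffusion readout described above can be modeled as a toy charge-accounting sketch (function and variable names are illustrative, not from the disclosure): pulsing one transfer gate at a time reads pixels individually, while pulsing several gates in unison sums their charge onto the shared node:

```python
def read_shared_pixels(pixel_charges, pulsed_gates):
    """Model pixels sharing one floating diffusion: each index in
    pulsed_gates has its transfer gate pulsed, dumping that pixel's
    charge onto the shared node. Returns (node_charge, remaining)."""
    remaining = list(pixel_charges)
    node_charge = 0
    for i in pulsed_gates:
        node_charge += remaining[i]
        remaining[i] = 0  # the pixel is emptied by the transfer
    return node_charge, remaining

# Pulse only gate 1: read one pixel. Pulse gates 0 and 2 together:
# their charges are summed (binned) on the floating diffusion.
single, _ = read_shared_pixels([5, 3, 7, 2], [1])
binned, _ = read_shared_pixels([5, 3, 7, 2], [0, 2])
```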
In one example, the SixGey layer 109 comprises silicon and germanium and is graded such that the germanium content increases in one direction. Large Ge atoms can strain the host silicon lattice. Therefore, to reduce lattice strain, the Ge content is slowly increased in the direction moving away from the first semiconductor layer 131. To realize the graded structure of the SixGey layer 109, the SixGey layer 109 may be grown using atomic layer deposition (ALD), chemical vapor deposition (CVD), molecular beam epitaxy (MBE), or the like. Incorporating the graded SixGey layer 109 into the pixel array 100 allows a high-sensitivity IR-absorbing layer to be integrated uniformly into a visible and IR image sensor. This results in hybrid image sensors with performance superior to conventional IR imaging systems and enhanced versatility of use.
In one example, the SixGey layer 109 may include other elements. For example, doping the SixGey layer 109 with boron, nitrogen, phosphorus, arsenic, or gallium can introduce additional energy levels into the host material's bandgap. Electrons can then be excited from these dopant levels by photon energies well below the silicon or germanium bandgap cutoff, making IR detection at longer wavelengths possible. In another example, other low-bandgap semiconductor materials, including other silicon-based alloys, germanium-based alloys, gallium-based alloys, or the like, may be used in place of the SixGey layer 109 in accordance with the teachings of this disclosure. These other low-bandgap semiconductor materials may also be doped with elements including, for example, boron, nitrogen, and phosphorus.
Figure 2 is a block diagram illustrating one example of a visible and infrared image sensor 200 (hereinafter "image sensor") according to the teachings of this disclosure. As shown in the depicted example, image sensor 200 includes pixel array 205, readout circuitry 211, function logic 215, and control circuitry 221. In one example, pixel array 205 is a two-dimensional (2D) array of individual pixels (e.g., pixels P1, P2 …, Pn) including rows (e.g., rows R1-Ry) and columns (e.g., columns C1-Cx). It should be appreciated that pixels P1, P2 …, Pn can be examples of the pixels (e.g., red pixel 161, green pixel 163, IR pixel 165, and blue pixel 167) included in the pixel array 100 discussed in connection with fig. 1. The pixel array 205 may be used to acquire image data of a person, place, object, etc., which may then be used to render a 2D image of that person, place, object, etc. In one example, after each image sensor pixel in pixel array 205 has acquired its image data or image charge, the image charge is read out by readout circuitry 211 and transferred to function logic 215. Readout circuitry 211 is coupled to read out image data from the individual pixels in pixel array 205, and function logic 215 is coupled to readout circuitry 211 to perform logical operations on the image data. In various examples, readout circuitry 211 may include amplification circuitry, analog-to-digital conversion (ADC) circuitry, or other circuitry. Function logic 215 may simply store the image data, or may manipulate the image data by applying post-image-processing effects (e.g., cropping, rotating, removing red-eye, adjusting brightness, adjusting contrast, or otherwise).
In one example, readout circuitry 211 may read out a row of image data at a time along readout column lines (illustrated), or may read out the image data using a variety of other techniques (not illustrated), such as serial readout or a full parallel readout of all pixels at the same time.
In one example, the control circuitry 221 is coupled to the pixel array 205 to control the operation of individual pixels (e.g., P1, P2, P3, etc.) in the pixel array 205. For example, the control circuit 221 may generate a shutter signal for controlling image acquisition. In one example, the shutter signal is a global shutter signal for simultaneously enabling all pixels within pixel array 205 to simultaneously capture their respective image data during a single acquisition window. In another example, the shutter signal is a rolling shutter signal such that each row of pixels, each column of pixels, or each group of pixels is sequentially enabled during successive acquisition windows. In another example, image acquisition is synchronized with an illumination effect (e.g., a flash).
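The difference between the two shutter signals can be sketched as row-enable schedules (a simplification that ignores exposure overlap and reset timing; the function names are illustrative, not from the disclosure):

```python
def global_shutter_schedule(num_rows):
    """Global shutter: all rows are enabled together in a single
    acquisition window, so every pixel integrates simultaneously."""
    return [list(range(num_rows))]

def rolling_shutter_schedule(num_rows):
    """Rolling shutter: rows are enabled one at a time during
    successive acquisition windows."""
    return [[r] for r in range(num_rows)]
```

A rolling schedule therefore has one acquisition window per row, while a global schedule has exactly one window for the whole array.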
In one example, the image sensor 200 may be included in a digital video camera, a cellular telephone, a laptop computer, or the like. Moreover, the image sensor 200 may be coupled to other hardware elements, such as a processor, memory elements, outputs (USB port, wireless transmitter, HDMI port, etc.), lighting/flashing, electrical inputs (keyboard, touch display, track pad, mouse, microphone, etc.), and/or a display. Other hardware elements may deliver instructions to the image sensor 200, extract image data from the image sensor 200, or manipulate image data supplied by the image sensor 200.
Figure 3 is a flow diagram of a process 300 for forming visible and infrared image sensors according to the teachings of this disclosure. The order in which some or all of the process blocks appear in each process should not be construed as limiting. In particular, those skilled in the art having the benefit of this disclosure will appreciate that some of the process blocks may be performed in a variety of orders not illustrated, or even performed in parallel.
Process block 301 shows forming a second semiconductor layer (e.g., SixGey layer 109) on a backside (e.g., backside 151) of a first semiconductor layer (e.g., first semiconductor layer 131). In one example, the first semiconductor layer is predominantly silicon and the second semiconductor layer comprises SiGe. The SiGe layer may be doped with other elements (e.g., boron, nitrogen, phosphorus, or the like). In some cases, the Ge content in the second semiconductor layer is graded such that the Ge concentration increases in the direction moving away from the first semiconductor layer. Because the SiGe lattice constant is larger than that of silicon, if the Ge concentration in solid solution increases too rapidly, the SiGe layer may tend toward a higher threading dislocation density. The Ge content in the SiGe layer is therefore often increased by only 10% per micron (up to about 30% Ge concentration) to prevent the formation of threading dislocations. Chemical mechanical polishing may be used to reduce defects.
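The ~10%-per-micron grading rule capped near 30% Ge can be expressed as a simple composition profile sketch (the step size and clamp behavior are illustrative assumptions, not process parameters from the disclosure):

```python
def graded_ge_profile(thickness_um, step_um=0.5, rate_per_um=0.10, cap=0.30):
    """Ge atomic fraction versus distance from the Si interface,
    rising ~10% per micron of growth and clamped near 30%, per the
    grading rule described in the text. Step size is illustrative."""
    profile = []
    steps = int(round(thickness_um / step_um))
    for n in range(steps + 1):
        z = n * step_um  # distance from the first semiconductor layer
        profile.append((round(z, 3), min(cap, rate_per_um * z)))
    return profile

# A 4 um layer graded in 1 um steps: 0% -> 10% -> 20% -> 30% -> 30%.
profile = graded_ge_profile(4.0, step_um=1.0)
```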
Process block 303 shows forming a cap layer (e.g., p+ Si cap layer 107) disposed proximate to the second semiconductor layer such that the second semiconductor layer (e.g., SixGey layer 109) is disposed between the first semiconductor layer (e.g., first semiconductor layer 131) and the cap layer. The cap layer will also be strained due to the high Ge content in the backside of the second semiconductor layer (the side not in contact with the first semiconductor layer). The cap layer should not be limited to p+ Si alone; rather, those skilled in the art having the benefit of the teachings of the present disclosure will recognize that other materials (e.g., other semiconductors/oxides) may be used to achieve the same or similar results.
Process block 305 illustrates forming an oxide layer (e.g., bond oxide 105) on the p + Si cap layer. Thus, the oxide layer is disposed proximate to the second semiconductor layer, and the second semiconductor layer is disposed between the first semiconductor layer and the oxide layer. In one example, the bonding oxide layer may comprise a semiconductor oxide or a metal oxide.
In process block 307, a carrier wafer (e.g., carrier wafer 103) is attached to the oxide layer, which allows the pixel architecture to be processed in/on the first semiconductor layer. The carrier wafer may comprise silicon, silicon oxide, metal oxide, or the like.
Process block 309 shows processing the pixel, pinned well, and support architecture in/on the front side (e.g., front side 149) of the first semiconductor layer. In one example, this process includes forming one or more groups of pixels (e.g., red pixels 161, green pixels 163, IR pixels 165, and blue pixels 167) disposed in a front side of a first semiconductor layer. Forming one or more pixel groups may include forming first and second pixel portions. The first pixel portion (e.g., the red pixel 161, the green pixel 163, and the blue pixel 167) may be separated from the second semiconductor layer by a spacing region (e.g., the spacing region 111). The second pixel portion (e.g., IR pixel 165) may include a first doped region (e.g., first doped region 120), and the first doped region may be in contact with the second semiconductor layer. The first doped region may also have the same majority charge carrier type as the second semiconductor layer.
Furthermore, pinning wells may be formed to separate individual pixels in one or more pixel groups. A pinning well (e.g., pinning well 113) extends through the first semiconductor layer. Similarly, deep pinned wells (e.g., deep pinned well 115) may be formed to separate one or more groups of pixels, where the deep pinned wells extend through the first and second semiconductor layers.
In one example, an isolation layer (e.g., isolation layer 133) can be formed proximate to the front side of the first semiconductor layer, and the isolation layer can include a conductive interconnect (e.g., conductive interconnect 147). The isolation layer may be made of silicon oxide, metal oxide, polymer, or the like. The conductive interconnects can comprise a metal. An anti-reflective coating (e.g., anti-reflective coating 135) can also be formed such that the isolation layer is disposed between the first semiconductor layer and the anti-reflective coating. Further, a filter layer (e.g., filter layer 137) may be formed, and may include red, green, blue, and infrared filters. In one example, the filter layer is disposed such that the anti-reflective coating is between the isolation layer and the filter layer.
As illustrated in fig. 3, forming the second semiconductor layer may occur prior to forming one or more pixel groups. However, in different examples, forming the second semiconductor layer may occur after forming at least a portion of one or more groups of pixels.
Figures 4A-4C show a process 400 of forming visible and infrared image sensors according to the teachings of this disclosure. Notably, portions of process 400 correspond to process blocks in process 300. The order in which some or all of the processes occur should not be construed as limiting. In particular, those skilled in the art, having the benefit of this disclosure, will appreciate that some of the processes described may be performed in a variety of orders not illustrated, or even in parallel.
Fig. 4A shows the formation of a second semiconductor layer 409 on the backside 451 of the first semiconductor layer 431 (see process block 301). In one example, the first semiconductor layer 431 is primarily silicon and the second semiconductor layer 409 comprises SiGe. In one example, the second semiconductor layer 409 also includes a doping element, such as boron, nitrogen, phosphorus, or the like. The Ge content in the second semiconductor layer 409 may gradually change and increase in a direction moving away from the first semiconductor layer 431. As previously stated, the Ge grading may help to reduce the dislocation density in the second semiconductor layer 409.
Fig. 4B illustrates the construction of several elements of the device architecture (e.g., p + Si cap layer 407, bonding oxide 405, and carrier wafer 403) that may occur prior to fabrication of most pixel device architectures (see fig. 4C). After forming the second semiconductor layer 409, a cap layer 407 may be formed on the second semiconductor layer 409. In one example, cap layer 407 is a p + Si cap layer. Next, an oxide layer 405 may be formed on the cap layer 407. The carrier wafer 403 is secured to existing layers of the device architecture (e.g., the cap layer 407, the second semiconductor layer 409, and the first semiconductor layer 431) using the oxide layer 405. The carrier wafer 403 allows processing of the remaining device architectures.
Fig. 4C illustrates processing of the remaining device architecture in/on the front side 449 of the first semiconductor layer 431 (see process block 309). In one example, this process includes forming one or more groups of pixels (e.g., red pixels 461, green pixels 463, IR pixels 465, and blue pixels 467) disposed in the front side 449 of the first semiconductor layer 431. Forming one or more pixel groups may include forming first and second pixel portions. The first pixel portion (e.g., red pixel 461, green pixel 463 and blue pixel 467) is separated from the second semiconductor layer 409 by the spacing region 411. The second pixel portion (e.g., IR pixel 465) includes a first doped region 420 in contact with the second semiconductor layer 409. The first doped region 420 may also have the same majority charge carrier type (e.g., both n-type or both p-type) as the second semiconductor layer 409.
In one example, the pinning well 413 is formed to separate individual pixels in one or more groups of pixels, with the pinning well 413 extending through the first semiconductor layer 431. Similarly, deep pinned well 415 may be formed to separate one or more groups of pixels, where deep pinned well 415 extends through first semiconductor layer 431 and second semiconductor layer 409.
In one example, isolation layer 433 can be formed proximate to the front side 449 of the first semiconductor layer 431, and isolation layer 433 can include conductive interconnects 447. The isolation layer 433 may be made of silicon oxide, metal oxide, polymer, or the like. Further, the conductive interconnects 447 can comprise a metal. The anti-reflective coating 435 may also be formed such that the isolation layer 433 is disposed between the first semiconductor layer 431 and the anti-reflective coating 435. The filter layer 437 may be formed, and the filter layer 437 may include a red filter 423, a green filter 425, a blue filter 429, and an infrared filter 427. In one example, filter layer 437 is disposed such that anti-reflective coating 435 is located between isolation layer 433 and filter layer 437. Although not depicted, the filter layer 437 can be one continuous layer comprising individual filters.
Other elements of the device architecture not depicted may also be formed. In one example, a microlens layer (including individual microlenses) is fabricated proximate to the filter layer 437. The microlenses are positioned to transmit incident light through the filter layer 437, anti-reflective coating 435, and isolation layer 433 into individual pixels.
The above description of illustrated examples of the invention, including what is described in the abstract of the specification, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification. Rather, the scope of the invention should be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
Claims (21)
1. An array of pixels, the array of pixels comprising:
a SixGey layer disposed below the first semiconductor layer;
a plurality of pixels disposed in the first semiconductor layer, the plurality of pixels including:
a plurality of doped regions, wherein each pixel of the plurality of pixels comprises a respective doped region;
a first subset of pixels, wherein the respective doped region of each pixel in the first subset of pixels extends into the first semiconductor layer and is separated from the SixGey layer by a spacer region; and
a second subset of pixels, wherein the respective doped region of each pixel in the second subset of pixels extends through the first semiconductor layer and is in contact with the SixGey layer; and
a plurality of pinned wells, each pinned well of the plurality of pinned wells disposed between individual pixels of the plurality of pixels, wherein a first subset of the pinned wells extends through the first semiconductor layer and a second subset of the pinned wells extends through the first semiconductor layer and the SixGey layer.
2. The pixel array of claim 1, wherein the SixGey layer is graded such that the Ge content increases in a direction away from the first semiconductor layer.
3. The pixel array of claim 1, wherein the first subset of pixels includes red, green, and blue pixels, and wherein the second subset of pixels includes infrared pixels.
4. The pixel array of claim 3, wherein said second pinned-well subset separates a group of pixels including at least red, green, blue, and infrared pixels.
5. The pixel array of claim 1, wherein the SixGey layer includes silicon and germanium.
6. The pixel array of claim 1, further comprising red, green, blue, and infrared filters, wherein the red, green, and blue filters are positioned to transmit visible light and infrared light to the first subset of pixels, and wherein the infrared filter is positioned to transmit at least one of visible light and infrared light to the second subset of pixels.
7. An image sensor, the image sensor comprising:
a second semiconductor layer disposed on a backside of a first semiconductor layer;
one or more groups of pixels disposed in a front side of the first semiconductor layer, the one or more groups of pixels including:
a plurality of pixels including a first subset of pixels and a second subset of pixels, wherein each pixel of the plurality of pixels includes a first doped region,
wherein the first doped region of each pixel in the first subset of pixels extends through the first semiconductor layer and is separated from the second semiconductor layer by a spacer region, and
wherein the first doped region of each pixel in the second subset of pixels extends through the first semiconductor layer and is in contact with the second semiconductor layer, and wherein the first doped region has the same majority charge carrier type as the second semiconductor layer;
pinning wells separating individual pixels in the one or more groups of pixels, wherein the pinning wells extend through the first semiconductor layer; and
a deep pinning well separating the one or more groups of pixels, wherein the deep pinning well extends through the first semiconductor layer and the second semiconductor layer.
8. The image sensor of claim 7, wherein the second semiconductor layer comprises SiGe.
9. The image sensor of claim 7, wherein the first subset of pixels includes red, green, and blue pixels, and wherein the second subset of pixels includes infrared pixels.
10. The image sensor of claim 9, wherein the one or more groups of pixels include at least red, green, blue, and infrared pixels.
11. The image sensor of claim 7, wherein each of the individual pixels in the one or more groups of pixels includes a transfer gate coupled to transfer charge from a second doped region to a floating diffusion, wherein the second doped region is in contact with the first doped region and has an opposite majority charge carrier type than the first doped region.
12. The image sensor of claim 7, wherein individual pixels in the one or more groups of pixels are arranged in a pixel array comprising rows and columns.
13. The image sensor of claim 7, further comprising:
control circuitry coupled to control operation of the individual pixels in the one or more groups of pixels;
readout circuitry coupled to readout image data from the individual pixels in the one or more groups of pixels; and
functional logic coupled to the readout circuitry to perform logical operations on the image data.
14. An image sensor manufacturing method, the method comprising:
forming a second semiconductor layer on a backside of a first semiconductor layer;
forming one or more groups of pixels disposed in a front side of the first semiconductor layer, the one or more groups of pixels including:
a first portion of pixels, wherein the first portion of pixels is separated from the second semiconductor layer by a spacer region;
a second portion of pixels, wherein a first doped region of the second portion of pixels is in contact with the second semiconductor layer, and wherein the first doped region has the same majority charge carrier type as the second semiconductor layer;
a pinning well separating individual pixels of the one or more groups of pixels, wherein the pinning well extends through the first semiconductor layer; and
a deep pinning well separating the one or more groups of pixels, wherein the deep pinning well extends through the first semiconductor layer and the second semiconductor layer.
15. The method of claim 14, wherein forming the second semiconductor layer comprises forming a layer comprising SiGe with a Ge content that increases in a direction away from the first semiconductor layer.
16. The method of claim 14, further comprising forming a cap layer disposed proximate to the second semiconductor layer, wherein the second semiconductor layer is disposed between the first semiconductor layer and the cap layer.
17. The method of claim 14, further comprising forming an oxide layer disposed proximate to the second semiconductor layer, wherein the second semiconductor layer is disposed between the first semiconductor layer and the oxide layer.
18. The method of claim 17, further comprising attaching a carrier wafer to the oxide layer.
19. The method of claim 14, wherein forming the second semiconductor layer occurs before forming the one or more groups of pixels.
20. The method of claim 14, wherein forming the second semiconductor layer occurs after forming at least a portion of the one or more groups of pixels.
21. The method of claim 14, further comprising:
forming an isolation layer disposed proximate to the front side of the first semiconductor layer, wherein the isolation layer includes a conductive interconnect;
forming an anti-reflective coating, wherein the isolation layer is disposed between the first semiconductor layer and the anti-reflective coating; and
forming a filter layer, wherein the filter layer includes red, green, blue, and infrared filters, and wherein the anti-reflective coating is disposed between the isolation layer and the filter layer.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/341,257 (US9806122B2) | 2014-07-25 | 2014-07-25 | Visible and infrared image sensor |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1216051A1 (en) | 2016-10-07 |
| HK1216051B (en) | 2019-07-19 |