
HK1201982B - System and method for inspecting a wafer - Google Patents


Info

Publication number
HK1201982B
HK1201982B (application HK15101274.8A)
Authority
HK
Hong Kong
Prior art keywords
image
wafer
illumination
semiconductor wafer
inspecting
Prior art date
Application number
HK15101274.8A
Other languages
Chinese (zh)
Other versions
HK1201982A1 (en)
Inventor
阿曼努拉.阿杰亚拉里
葛汉成
Original Assignee
联达科技设备私人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from SG200900229-6A external-priority patent/SG163442A1/en
Application filed by 联达科技设备私人有限公司
Publication of HK1201982A1
Publication of HK1201982B


Abstract

A method and a system for inspecting a wafer. The system comprises an optical inspection head, a wafer table, a wafer stack, an XY table and vibration isolators. The optical inspection head comprises a number of illuminators, image capture devices, objective lenses and other optical components. The system and method enable capture of brightfield images, darkfield images, 3D profile images and review images. Captured images are converted into image signals and transmitted to a programmable controller for processing. Inspection is performed while the wafer is in motion. Captured images are compared with reference images to detect defects on the wafer. An exemplary reference creation process for creating reference images and an exemplary image inspection process are also provided by the present invention. The reference image creation process is automated.

Description

System and method for inspecting wafers
Technical Field
The invention relates to a wafer inspection process, and more particularly to an automated system and method for inspecting semiconductor components.
Background
The ability to ensure the production quality of semiconductor components, such as semiconductor wafers and chips, is of increasing importance in the semiconductor manufacturing industry. Semiconductor wafer fabrication processes are continually being improved to fit an increasing number of features onto semiconductor wafers of small surface area. Accordingly, the photolithography processes used for semiconductor wafer production have matured, allowing ever more features to be incorporated into semiconductor wafers having smaller surface areas (i.e., semiconductor wafers achieving higher performance). As a result, the size of possible defects on a semiconductor wafer is typically in the micron to submicron range.
It is clear that semiconductor wafer manufacturing has an increasing need for improved quality control and for inspection procedures that ensure high quality semiconductor wafer production. Defects found during inspection of semiconductor wafers typically include particles, imperfections, undulations and other irregularities on the surface. These defects can affect the final performance of the semiconductor wafer. Therefore, it is very important to remove or eliminate defects during the semiconductor wafer production process.
Semiconductor inspection systems and processes have become very advanced. For example, higher resolution imaging systems, faster computers and more accurate mechanical loading systems are used. In addition, in the past, inspection systems, methods and techniques for semiconductor wafers have utilized at least one of bright field illumination, dark field illumination and spatial filtering.
With bright field imaging, small particles on the semiconductor wafer scatter the light away from the collection aperture of the imaging device, so that less light energy returns to the imaging device. When the particles are smaller than the point spread function of the lens or the digitized pixel, the bright field energy from the area surrounding the particles overwhelms the small amount of light energy attributable to the particles themselves, making the particles difficult to find. In addition, the very small reduction in optical energy caused by small particles is often masked by variations in reflectivity around the particles, resulting in increased instances of error in defect detection. To overcome this, semiconductor inspection systems have been equipped with high-end, higher-resolution cameras capable of imaging smaller areas of the semiconductor wafer surface. Bright field images generally have better pixel contrast, which is advantageous for estimating the size of defects and for detecting dark defects.
Dark field imaging and its advantages are well known in the art, and dark field imaging has been used in some existing semiconductor wafer inspection systems. Dark field imaging generally depends on the angle of incidence of light on the object to be inspected. At a low angle (e.g., 3 to 30 degrees) relative to the horizontal plane of the inspected article, dark field imaging typically produces a dark image except at defect locations, such as particles, imperfections, and other irregularities on the surface. A particular application of such dark field imaging is to highlight defects having a size smaller than the resolving power of the lens used to generate the dark field image. At a high angle (e.g., 30 to 85 degrees) relative to the horizontal plane of the article to be inspected, dark field imaging generally produces images with better contrast than bright field images. A particular application of high angle dark field imaging is to improve the contrast of irregularities on the surface of mirror-finished or transparent objects. In addition, high angle dark field imaging improves the imaging quality of tilted articles.
The light reflectivity of a semiconductor wafer typically has a significant effect on the image quality obtained with each of bright field and dark field imaging. The microstructures and macrostructures present on a semiconductor wafer affect its light reflectivity. Generally, the amount of light reflected by a semiconductor wafer is a function of the direction or angle of the incident light, the direction of observation, and the reflectivity of the wafer surface. The reflectivity is in turn determined by the wavelength of the incident light and the material from which the semiconductor wafer is fabricated.
It is often difficult to control the light reflectivity of a semiconductor wafer under inspection, because the semiconductor wafer is composed of multiple layers of material. Each layer of material transmits light of different wavelengths differently, e.g., at different velocities. In addition, each layer has a different transmissivity and even a different reflectivity. Thus, it will be apparent to those skilled in the art that using light of a single wavelength or a narrow band of wavelengths will generally affect the quality of the acquired image. It is also inconvenient to require multiple spatial filters or a wavelength tuner to change the single wavelength or narrow band of light. To alleviate such problems, it is important to use broadband illumination (e.g., light sources with a wide wavelength range), such as broadband illumination having a wavelength range between 300 nanometers and 1000 nanometers.
Broadband illumination has a significant impact on the acquisition of high quality images and the detection of semiconductor wafers with a wide range of surface reflectivities. In addition, wafer inspection systems improve the ability to find defects by utilizing multiple illumination angles or contrast, for example, bright field and dark field illumination. Wafer inspection systems currently on the market typically do not utilize multiple angles of illumination and a full broadband wavelength light source.
Currently available wafer inspection systems or equipment typically utilize one of the following methods to obtain multiple illumination responses during wafer inspection:
(1) Multiple Image Capture Devices (MICD) with multiple illuminations
The MICD method utilizes multiple image capture devices and multiple illuminations. MICD is based on the principle of dividing the total wavelength spectrum into a number of narrow bands and using each segmented band for a respective illumination. In a system designed around the MICD method, each image capture device corresponds to one illumination (e.g., one illumination source) and is equipped with a corresponding optical assembly, such as a spatial filter or a specially coated beam splitter. For example, the wavelength of the bright field illumination may be limited to between 400 and 600 nanometers using a mercury arc lamp and a spatial filter, while the wavelength of the dark field illumination is limited to between 650 and 700 nanometers using a laser emitter. The MICD method has its drawbacks, such as lower image quality and relatively inflexible system design and configuration. The lower image quality is generally due to the varying surface reflectivity of the semiconductor wafer being inspected together with the use of narrow band light sources to inspect it. The inflexibility of the system design is due to the fact that changing the wavelength of a single illumination used by the system typically requires reconfiguring the optics of the entire system. Additionally, the MICD method generally does not allow an image capture device to easily collect illumination of varying wavelengths, which limits both the quality of the acquired images and the speed at which they are acquired.
(2) Single Image Capture Device (SICD) with multiple illuminations
The SICD method utilizes a single image capture device to collect light from multiple illuminations, each illumination being a segmented wavelength spectrum source (e.g., a narrowband source) or a broadband source. However, this method cannot obtain multiple illumination responses while the semiconductor wafer is moving; in other words, the SICD method allows only one illumination response while the semiconductor wafer is in motion. To obtain multiple illumination responses, the SICD method must acquire images while the semiconductor wafer is stationary, which reduces the throughput of the wafer inspection system.
Semiconductor wafer inspection systems that perform simultaneous or independent dynamic image acquisition using broadband bright field and dark field illumination, or more generally multiple illuminations and multiple image capture devices, are currently not available, due to a relative lack of understanding of their practical implementation and operational advantages.
As described above, existing semiconductor wafer inspection systems generally use MICD or SICD. Devices using MICD do not utilize broadband illumination and typically suffer from lower image quality and from inflexibility of system setup or configuration. On the other hand, a semiconductor wafer inspection system using SICD has reduced operating efficiency and fails to obtain dynamic images under multiple illumination responses.
A presently preferred semiconductor wafer optical inspection system utilizing bright field illumination and dark field illumination is described in U.S. Patent No. 5,822,055 (KLA1). An embodiment of the optical inspection system described in KLA1 uses the MICD method described above. The optical inspection system described in KLA1 uses multiple cameras to acquire bright field and dark field images, respectively. The acquired bright field and dark field images are then processed separately or together to find defects on the semiconductor wafer. In addition, the optical inspection system of KLA1 uses different light sources for bright field and dark field illumination to acquire synchronized bright field and dark field images. The optical inspection system of KLA1 enables simultaneous image acquisition (e.g., acquisition of bright field and dark field images) by using segmented wavelength spectra emitted by the illumination emitters together with spatial filters. In the optical inspection system of KLA1, one camera is configured to capture dark field images, with corresponding narrow band laser illumination and spatial filters. Another camera is provided for acquiring bright field images, correspondingly using bright field illumination and a beam splitter with a special coating. Disadvantages of the optical inspection system described by KLA1 include that it is not well suited for imaging semiconductor wafers having a wide range of surface reflectivities. This is due to the use of segmented wavelength spectrum illumination: each camera collects illumination in a preset wavelength spectrum, and there is little flexibility to change the wavelength spectrum collected in order to enhance the images collected for a certain type of wafer. For example, a wafer having a carbon coating on its first surface may exhibit only weak reflectivity at certain illumination angles, such as under bright field illumination.
Therefore, observing certain defects on these wafers requires a combination of bright field illumination and high angle dark field illumination. The optical inspection system of KLA1 uses several illumination emitters or sources and filters, and must perform multiple inspection passes to acquire both bright field and dark field images. Such an optical inspection system therefore does not work efficiently.
Another presently preferred optical inspection system using bright field and dark field imaging is described in U.S. Patent No. 6,826,298 (AUGTECH1) and U.S. Patent No. 6,937,753 (AUGTECH2). The optical inspection systems of AUGTECH1 and AUGTECH2 use several laser emitters for low angle dark field imaging and a fiber ring light for high angle dark field imaging. In addition, the optical inspection systems of AUGTECH1 and AUGTECH2 both use a single camera sensor and the SICD method described earlier. Thus, inspection of semiconductor wafers using the optical inspection systems of AUGTECH1 and AUGTECH2 is performed by bright field imaging or dark field imaging or a combination of both, wherein each of bright field imaging and dark field imaging is initiated only after the other has completed. The inspection systems of AUGTECH1 and AUGTECH2 are not capable of simultaneous, dynamic and independent bright field and dark field imaging. Therefore, each semiconductor wafer must undergo inspection multiple times in order to complete the inspection, resulting in reduced production efficiency and increased use of resources.
In addition, some existing optical inspection systems use golden or reference images for comparison with newly acquired semiconductor wafer images. The reference image is typically derived by capturing images of known or manually selected "good" semiconductor wafers and then applying statistical methods or techniques. One drawback of this technique is that manually sorting "good" semiconductor wafers is neither precise nor consistent. Optical inspection systems using such reference images often reject good semiconductor wafers because the reference images are inaccurate or inconsistent. With the increasingly complex geometry of semiconductor wafers, it becomes increasingly impractical to rely on manual selection of "good" semiconductor wafers for reference images, especially as the quality standards set by the semiconductor inspection industry rise.
Obtaining a good reference image involves many statistical techniques and calculations. Most existing statistical techniques are quite common and have their own advantages. Currently available optical inspection systems or devices typically use the mean or average and the standard deviation to compute reference pixels. A reference pixel obtained using the mean and standard deviation is useful only when all contributing pixels are good; otherwise, any defective or noisy pixel interferes with and skews the final average or mean value of the reference pixel. Another statistical technique uses the median to reduce the interference of noisy pixels; however, it is impossible, or at least difficult, to essentially eliminate their effect. Existing optical inspection systems or devices seek to reduce the effects of noise by using various statistical techniques. However, a user-friendly or simple method for reducing or eliminating the effects of noise (e.g., errors) is still lacking. Such an approach would help eliminate the noisy pixels that could affect the final reference pixel.
U.S. Patent No. 6,324,298 (AUGTECH3) describes a method for generating a golden or reference image for use in semiconductor wafer inspection. The method described in AUGTECH3 requires "known high quality" or "defect free" wafers, and the selection of these "known high quality" wafers is manual or user operated. A reference image is then obtained using statistical methods or techniques. Accurate and consistent selection of "known high quality" wafers is therefore critical to maintaining high quality semiconductor inspection. The method of AUGTECH3 uses the mean and standard deviation to calculate the pixels of the reference image. Thus, the presence of any imperfect pixels results in inaccurate reference pixels. Imperfect pixels are due to impurities or other defects, and such impurities or defects adversely affect the statistical calculation and lead to inaccurate reference pixels. It will be apparent to those skilled in the art that inaccuracies, inconsistencies and errors in the inspection of semiconductor wafers occur with the method of AUGTECH3.
In addition, the optical inspection system described by AUGTECH3 uses a flash or strobe light to illuminate the semiconductor wafer. One skilled in the art will appreciate that inconsistencies between different flashes or strobes may result from a number of factors, including, but not limited to, differences in temperature, electronic inconsistencies, and different intensities of the flashes or strobes. These differences and inconsistencies are present even with "good" semiconductor wafers, and if the system does not take them into account they will affect the quality of the golden reference image obtained. In addition, factors that affect illumination intensity and uniformity across a large section of the semiconductor wafer surface include, but are not limited to, differing wafer flatness, mounting, and the reflectance of light at different locations on the semiconductor wafer surface. In view of the differences and factors described above, any reference image obtained using the above-described methods is unreliable and inaccurate when used to compare images obtained at different locations on the surface of a semiconductor wafer.
Variations in product standards are common in the semiconductor industry, such as in semiconductor wafer size, complexity, and surface reflectivity. Therefore, there is a need for a semiconductor wafer inspection system and method capable of inspecting semiconductor wafers of different standards. However, existing semiconductor wafer inspection systems and methods for inspecting a wide range of standard semiconductor wafers are often unsatisfactory, particularly given the increased quality standards set by the semiconductor industry.
For example, current semiconductor wafer inspection systems generally use conventional optical devices composed of elements, such as cameras, illuminators, filters, polarizers, mirrors, and lenses, that are fixed in position in space. Adding or removing components of such an optical device typically requires reassembly and redesign of the entire optical device. Accordingly, such semiconductor wafer inspection systems have inflexible designs or configurations and require relatively long retooling times. In addition, the distance between the objective lens of conventional optical devices and the semiconductor wafer under inspection is typically too short to allow fiber illumination at different angles for facilitating dark field imaging.
There are many other existing semiconductor wafer inspection systems and methods. However, due to the current lack of expertise and operating skills, existing semiconductor wafer inspection systems, even those with flexible designs and configurations, are unable to perform simultaneous bright field and dark field imaging for inspection while the wafer is in motion. There is thus a need for a semiconductor wafer inspection system and method that is flexible, accurate, fast and resource efficient, particularly given the increased complexity of the electronic circuits of semiconductor wafers and the increased quality standards of the semiconductor industry.
Disclosure of Invention
Today, there is a lack of semiconductor wafer inspection systems and methods capable of inspecting moving semiconductor wafers using bright field and dark field imaging while also offering convenient system configuration and design flexibility. In addition, there is a need for the components of semiconductor wafer inspection systems, such as illumination emitters, cameras, objective lenses, filters, and mirrors, to have flexible and spatially adjustable relative configurations. Due to the increasing complexity of electronic circuits on semiconductor wafers and the increasing quality standards set by the semiconductor industry, the accuracy and consistency of semiconductor wafer inspection becomes critical. Obtaining golden reference images for comparison with acquired images of the semiconductor wafer currently requires manual selection of "good" semiconductor wafers. Such manual selection can lead to inaccuracies and inconsistencies in the resulting reference images and in the subsequent inspection of the semiconductor wafer. Accordingly, there is a need for an improved method or process for obtaining reference images for subsequent comparison with captured images of a semiconductor wafer. The present invention seeks to address at least one of the above problems.
The present invention provides an inspection system and method for inspecting semiconductor modules including, but not limited to, semiconductor wafers, chips, Light Emitting Diode (LED) chips and solar silicon wafers. The inspection system is designed for performing both two-dimensional (2D) and three-dimensional (3D) wafer inspection. Such an inspection system is further designed for defect detection.
The 2D wafer inspection is facilitated by a 2D optics module, said 2D optics module comprising at least two image capture devices. 2D wafer inspection uses at least two different contrast illuminations to acquire corresponding contrast illumination images. The 2D wafer inspection can be performed while the semiconductor wafer is moving and can be completed in a single pass of the semiconductor wafer. 3D wafer inspection is facilitated by a 3D optics module comprising at least one image capture device and at least one thin line illuminator or thin line illumination emitter. The thin line illuminator, which is a laser emitter or a broadband illumination source or a combination of both, is used to capture 3D images of a moving semiconductor wafer. The defect detection performed by the inspection system is facilitated by the defect detection module.
According to a first aspect of the invention, a method is described comprising acquiring a first image of a wafer under a first contrast illumination and acquiring a second image of the wafer under a second contrast illumination, the first illumination and the second illumination each having a broadband wavelength, the first contrast illumination and the second contrast illumination being used to find at least one defect location in the first and second images. The wafer is displaced by a preset amount between the capture position of the first image and the capture position of the second image. The method further includes correlating the first and second images and comparing the location of the defect in the first image with the location of the defect in the second image to provide a confirmation of the defect.
According to a second aspect of the invention, a method is described comprising providing a first image of a wafer having one or more defect locations and a second image of the wafer having one or more defect locations, the wafer being displaced between the providing of the first and second images. The method further includes correlating the spatial displacement of the wafer between the first and second images and comparing the defect locations in the first image with the defect locations in the second image to provide a defect list.
According to a third aspect of the invention, a method is described comprising acquiring a first image of a wafer under a first contrast illumination and acquiring a second image of the wafer under a second contrast illumination, the first contrast illumination and the second contrast illumination being used to find at least one defect location in the first and second images. The wafer is displaced by a preset amount between the capture position of the first image and the capture position of the second image. The method further includes correlating the first and second images and comparing the location of the defect in the first image with the location of the defect in the second image to provide a confirmation of the defect.
According to a fourth aspect of the invention, a system is described comprising a first image acquisition module for acquiring a first image of a wafer and a second image acquisition module for acquiring a second image of the wafer, the wafer being displaced by a predetermined amount between the acquisition position of the first image and the acquisition position of the second image. The system further includes a defect location comparison module coupled to the first and second image acquisition modules, the defect location comparison module correlating the spatial displacement of the wafer between the first and second images and comparing a defect location found in the first image with another defect location found in the second image, thereby providing a defect list.
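The defect-location comparison described in the aspects above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the claimed implementation: the pixel-coordinate model, the known displacement `offset`, and the matching tolerance `tol` are all invented for the example. The idea shown is that the known wafer displacement between the two captures is subtracted out before defect locations from the two images are compared, and only defects that coincide in both images are confirmed.

```python
def confirm_defects(defects_img1, defects_img2, offset, tol=2.0):
    """Correlate defect coordinates from two images of a moving wafer.

    defects_img1 / defects_img2: lists of (x, y) defect locations in pixels.
    offset: known (dx, dy) wafer displacement between the two captures.
    Returns defects from image 1 that reappear (within tol pixels) in
    image 2 after the displacement is removed.
    """
    dx, dy = offset
    confirmed = []
    for (x1, y1) in defects_img1:
        for (x2, y2) in defects_img2:
            # Map the second image's coordinates back into the first
            # image's frame before comparing.
            if abs((x2 - dx) - x1) <= tol and abs((y2 - dy) - y1) <= tol:
                confirmed.append((x1, y1))
                break
    return confirmed

# Wafer moved 15 px in x between captures; one defect is real, one is noise.
img1 = [(100, 50), (300, 80)]
img2 = [(115, 50), (400, 200)]
print(confirm_defects(img1, img2, offset=(15, 0)))  # → [(100, 50)]
```

A detection that appears in only one of the two contrast illuminations is thereby filtered out as noise, while a detection present in both (after offset correction) is added to the defect list.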
Drawings
Preferred embodiments of the present invention are described below and with reference to the accompanying drawings, in which:
FIG. 1 shows a partial plan view of a preferred system for inspecting wafers in accordance with a preferred embodiment of the present invention;
FIG. 2 shows a partial isometric view of the system of FIG. 1;
FIG. 3 shows a partial exploded isometric view of the optical detection head of the system of FIG. 1, shown in the "A" direction of FIG. 2;
FIG. 4 shows a partial exploded isometric view of the automated wafer table of the system of FIG. 1, shown in the "B" direction of FIG. 2;
FIG. 5 shows a partial exploded isometric view of the automated wafer loading/unloading of the system of FIG. 1, shown in the "C" direction of FIG. 2;
FIG. 6 shows a partial exploded isometric view of the wafer stack module of the system of FIG. 1, shown in the "D" direction of FIG. 2;
FIG. 7 shows a partial isometric view of an optical detection head of the system shown in FIG. 1;
FIG. 8 shows a partial front view of an optical detection head of the system of FIG. 1;
FIG. 9 shows the ray paths of the system of FIG. 1 between a brightfield illuminator, a low angle darkfield illuminator, a high angle darkfield illuminator, a first image collector, and a second image collector;
FIG. 10 is a flow chart of a preferred first ray path along bright field illumination provided by the bright field illuminator of FIG. 9;
FIG. 11 is a flow chart of a preferred second ray path along the high angle darkfield illumination provided by the high angle darkfield illuminator of FIG. 9;
FIG. 12 is a flow chart of a preferred third ray path along the low angle darkfield illumination provided by the low angle darkfield illuminator of FIG. 9;
FIG. 13 shows the path of the illumination light between the thin line illuminator and the 3D image collector or camera in the system of FIG. 1;
FIG. 14 shows the illumination light path between the review brightfield illuminator, the review darkfield illuminator, and the review image capture device of the system of FIG. 1;
FIG. 15 is a flow chart along a preferred fourth ray path for brightfield illumination between the review brightfield illuminator and the review image capture device shown in FIG. 14;
FIG. 16 is a flow chart along a preferred fifth ray path for darkfield illumination between the review darkfield illuminator and the review image capture device of FIG. 14;
FIG. 17 is a method flow chart of a preferred method for inspecting a wafer provided by the present invention;
FIG. 18 is a flowchart of a preferred reference image generation process for generating a reference image for comparison with an image acquired during execution of the method of FIG. 17;
FIG. 19 is a process flow diagram of a preferred two-dimensional wafer scanning process with timing offsets among the steps of the method of FIG. 17;
FIG. 20 shows a table of lighting configurations selected by the lighting configurator of the system of FIG. 1;
FIG. 21 shows a pulse waveform diagram for acquiring a first image by a first image acquirer and a second image by a second image acquirer;
FIG. 22a shows a first image captured by the first image capturing device of FIG. 1;
FIG. 22b shows a second image acquired by the second image acquisition device of FIG. 1;
FIG. 22c shows the combination of the first image of FIG. 22a and the second image of FIG. 22b to demonstrate image shift due to the acquisition of the first image and the second image as the wafer moves;
FIG. 23 is a process flow diagram of a preferred two-dimensional image processing procedure for performing the steps of the method of FIG. 17;
FIG. 24 is a process flow diagram of a preferred three-dimensional image processing procedure for performing the steps of the method of FIG. 17;
FIG. 25 shows a preferred illumination ray path between the thin line illuminator and the 3D image grabber or camera of the system of FIG. 1;
FIG. 26 is a process flow diagram of a second preferred three-dimensional wafer scanning process for performing the steps of the method of FIG. 17;
FIG. 27 is a process flow diagram of a preferred review process for performing the steps of the method of FIG. 17.
Detailed Description
The inspection of semiconductor components, such as semiconductor wafers and chips, is an increasingly important step in the processing and manufacturing of semiconductors. As the complexity of circuits on semiconductor wafers increases and as the quality standards for semiconductor wafers increase, there is an increasing need for improved inspection systems and methods for semiconductor wafers.
Current semiconductor wafer inspection systems and inspection methods, even those providing flexibility in configuration and design, do not produce both bright field and dark field images for dynamic inspection of semiconductor wafers. In addition, the components of a semiconductor wafer inspection system, such as the illuminators, cameras or image capture devices, objective lenses, filters and mirrors, need flexible and spatially adjustable relative configurations. Due to the increasing complexity of electronic circuits on semiconductor wafers and the increasing quality standards set by the semiconductor industry, the accuracy and consistency of semiconductor wafer inspection becomes critical. Obtaining golden reference images for comparison with acquired images of the semiconductor wafer currently requires manual selection of "good" semiconductor wafers. Such manual selection can lead to inaccuracies and inconsistencies in the acquired reference images and affect the results of the semiconductor wafer inspection. Accordingly, there is a need for an improved method or process for obtaining reference images for subsequent comparison with captured images of a semiconductor wafer.
Embodiments of the present invention provide a preferred system and method for inspecting semiconductor devices to solve at least one of the above-identified problems.
For purposes of brevity and clarity, the description of the present invention is limited to the following systems and methods for semiconductor wafer inspection. This does not, however, preclude applying embodiments of the invention in other areas that share the same general principles of operation, function or performance. For example, the systems and methods provided by embodiments of the present invention can also be used for the inspection of other semiconductor components, including but not limited to semiconductor chips, light emitting diode (LED) chips, and solar silicon wafers.
As shown in fig. 1 and 2, a preferred system 10 for inspecting a wafer 12 is provided in accordance with a first embodiment of the present invention. The system 10 may also be used to inspect other semiconductor devices or components requiring inspection. The system 10 includes an optical inspection head 14 (shown in fig. 3), a wafer transport table or wafer chuck 16 (shown in fig. 4), an automated wafer handler 18 (shown in fig. 5), a wafer stack 20 or film frame cassette loader (shown in fig. 6), an XY displacement stage 22, and at least one set of four vibration isolators 24 (shown in fig. 1 and 2).
The optical inspection head 14, shown in fig. 7 and 8, comprises a number of illuminators and image capture devices. The optical inspection head 14 includes a brightfield illuminator 26, a low angle darkfield illuminator 28 and a high angle darkfield illuminator 30. Those skilled in the art will appreciate that additional darkfield illuminators may be added to the system 10 as needed. It should further be appreciated that the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 can be integrated into a single darkfield illuminator that can be flexibly positioned as desired.
The brightfield illuminator 26, also referred to as a brightfield illumination source or brightfield illumination emitter, provides or emits brightfield illumination or brightfield light. The brightfield illuminator 26 is, for example, a flash lamp or a white light emitting diode. Preferably, the brightfield illuminator 26 provides broadband brightfield illumination substantially encompassing wavelengths between 300 nm and 1000 nm. Those skilled in the art will appreciate that the wavelength range and other optical properties of the brightfield illumination may be varied as needed.
The brightfield illuminator 26 preferably includes a first optical fiber (not shown) through which the brightfield illumination passes before being emitted from the brightfield illuminator 26. The first optical fiber acts as a waveguide for the brightfield illumination; more particularly, it directs the brightfield illumination emitted from the brightfield illuminator 26.
The low angle darkfield illuminator 28 and the high angle darkfield illuminator 30, also referred to as darkfield illumination sources or darkfield illumination emitters, provide or emit darkfield illumination. In darkfield imaging, the illumination source is precisely aligned so that a minimal amount of directly transmitted (unscattered) light enters the corresponding image capture device; the captured darkfield image therefore contains mainly illumination that has been scattered by the specimen or object. Darkfield images typically exhibit enhanced contrast compared with brightfield images; brightfield illumination and darkfield illumination are examples of contrasting illumination techniques.
The low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 are each, for example, a flash lamp or a white light emitting diode. Preferably, each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 provides darkfield illumination having optical characteristics similar to those of the brightfield illumination. More specifically, the darkfield illumination provided by each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 is broadband darkfield illumination (also referred to as darkfield broadband illumination) comprising wavelengths between 300 nm and 1000 nm. That is, both the brightfield illumination and the darkfield illumination are broadband illumination. Alternatively, the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 may provide or emit darkfield illumination of different wavelengths or other optical characteristics.
The low angle darkfield illuminator 28 is positioned at a lower angle, relative to the horizontal plane of the wafer 12 positioned on the wafer table 16 (or the horizontal plane of the wafer table 16), than the high angle darkfield illuminator 30. For example, the low angle darkfield illuminator 28 is preferably positioned at an angle of between 3 and 30 degrees to the horizontal plane of the wafer 12 positioned on the wafer table 16, and the high angle darkfield illuminator 30 is preferably positioned at an angle of between 30 and 85 degrees to that plane. These angles may be changed by adjusting the position of each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30.
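As a concrete illustration, the angle ranges above can be checked programmatically. The following is a hypothetical sketch; the function and constant names are illustrative and not from the source:

```python
# Hypothetical sketch: validating darkfield illuminator mounting angles
# against the preferred ranges described above (3-30 degrees for the
# low angle illuminator, 30-85 degrees for the high angle illuminator,
# both measured from the wafer's horizontal plane).

LOW_ANGLE_RANGE = (3.0, 30.0)    # degrees from the wafer plane
HIGH_ANGLE_RANGE = (30.0, 85.0)  # degrees from the wafer plane

def angle_ok(angle_deg, valid_range):
    """Return True if the mounting angle lies within the preferred range."""
    lo, hi = valid_range
    return lo <= angle_deg <= hi

def validate_darkfield_config(low_angle_deg, high_angle_deg):
    """Check both illuminator angles, and that low sits below high."""
    return (angle_ok(low_angle_deg, LOW_ANGLE_RANGE)
            and angle_ok(high_angle_deg, HIGH_ANGLE_RANGE)
            and low_angle_deg < high_angle_deg)
```

A configuration such as 15 degrees (low) and 60 degrees (high) passes this check, while a 45-degree "low angle" mounting would be rejected.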
The low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 preferably comprise a second and a third optical fiber (not shown), respectively, through which the darkfield illumination is transmitted. The second and third optical fibers act as waveguides that direct the darkfield illumination along the optical path of each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30. More specifically, the second optical fiber directs the darkfield illumination emanating from the low angle darkfield illuminator 28, and the third optical fiber directs the darkfield illumination emanating from the high angle darkfield illuminator 30. The illumination provided by each of the brightfield illuminator 26, the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 is controllable and may be provided continuously or intermittently.
The broad wavelength spectrum of the brightfield illumination and the darkfield illumination enhances the accuracy of the inspection of the wafer 12 and the detection of defects. Broadband illumination helps identify different types of semiconductor wafer defects, which alter surface reflection in different ways. In addition, the broadband wavelengths of the brightfield illumination and the darkfield illumination enable the inspection of the wafer 12 to be performed without being constrained by the reflective properties of the wafer 12. This means that defect detection on the wafer 12 is not affected by the differing sensitivity, reflectance or polarization of the wafer 12 at different illumination wavelengths.
Preferably, the intensities of the brightfield illumination and the darkfield illumination provided by the brightfield illuminator 26 and the darkfield illuminators 28, 30 can each be selected and varied as desired according to the characteristics of the wafer 12, such as its material and surface coating. In addition, each of the brightfield and darkfield illumination intensities may be selected and varied to enhance the quality of the images captured of the wafer 12, thereby improving the quality or accuracy of the inspection of the wafer 12.
As shown in fig. 7-9, the system 10 further includes a first image capture device 32 (e.g., a first camera) and a second image capture device 34 (e.g., a second camera). Both the first image capture device 32 and the second image capture device 34 are capable of receiving brightfield illumination provided by the brightfield illuminator 26 and darkfield illumination provided by the respective low angle darkfield illuminator 28 and high angle darkfield illuminator 30. The bright field and dark field illumination received into or entering the first image capture device 32 is preferably focused at a first image capture plane (not shown) for capturing the corresponding images. The bright field and dark field illumination received into or entering the second image capture device 34 is preferably focused in a second image capture plane (not shown) for capture of the corresponding image.
The first image capture device 32 and the second image capture device 34 are each either a monochrome or a color camera, and are also referred to as image capture modules or image sensors. When equipped with single-chip or three-chip color sensors, they can capture color images of the wafer 12 to enhance at least one of the accuracy and the speed of defect detection. For example, the ability to capture color images of the wafer 12 helps reduce false defect detection on the wafer 12 and correspondingly reduces false rejects.
The optical inspection head 14 further comprises a first tube lens 36 for the first image capture device 32 and a second tube lens 38 for the second image capture device 34. The first tube lens 36 and the second tube lens 38 have common optical characteristics and functions; they are labeled first and second for clarity only. The optical inspection head 14 further comprises a plurality of objective lenses 40, for example four objective lenses 40. All of the objective lenses 40 are mounted together on a rotatable mount 42 (shown in fig. 3) that is rotated to position each objective lens above the inspection position (not shown), i.e., the location of the wafer 12 to be inspected. The objective lenses 40 and the rotatable mount 42 may collectively be referred to as an objective lens assembly.
Each objective lens 40 provides a different magnification, and the objective lenses 40 are parfocal, i.e., they share a common focal plane. Each objective lens 40 preferably has a different predetermined magnification factor, for example 5 times, 10 times, 20 times and 50 times. Preferably, each objective lens 40 is infinity-corrected. However, it will be appreciated by those skilled in the art that each objective lens may be altered or redesigned to achieve different magnifications and performance.
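To illustrate the effect of the different magnification factors, the sketch below computes the effective pixel size at the wafer plane for each objective. The 5.5 um camera pixel pitch is an assumed example value, not a figure from the source:

```python
# Illustrative sketch: the distance on the wafer covered by one sensor
# pixel, for each objective in the turret. The magnification factors
# (5x, 10x, 20x, 50x) are from the text; the 5.5 um pixel pitch is an
# assumed example only.

CAMERA_PIXEL_UM = 5.5             # assumed sensor pixel pitch, micrometres
MAGNIFICATIONS = [5, 10, 20, 50]  # turret objectives described above

def wafer_pixel_size_um(magnification, pixel_um=CAMERA_PIXEL_UM):
    """Distance on the wafer covered by one sensor pixel, in micrometres."""
    return pixel_um / magnification

# e.g. at 20x each sensor pixel covers 5.5 / 20 = 0.275 um on the wafer
```

Higher magnification objectives thus resolve finer detail at the cost of a smaller field of view per captured image.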
Each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 preferably includes a focusing means or mechanism for directing or focusing the darkfield illumination toward the wafer 12 positioned at the inspection position. The angle between the low angle darkfield illuminator 28 and the horizontal plane of the wafer 12, and the angle between the high angle darkfield illuminator 30 and the horizontal plane of the wafer 12, are preferably set and adjusted to enhance the accuracy of defect detection. Preferably, each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 has a fixed spatial position with reference to the inspection position during normal operation of the system 10, although the position of each may alternatively be varied with reference to the inspection position as needed.
As described above, both the brightfield illumination and the darkfield illumination are focused at the inspection position. More specifically, the brightfield illumination and the darkfield illumination are focused at the inspection position and directed at the wafer 12, or a portion thereof, positioned at the inspection position.
As shown in fig. 6, the system 10 includes the wafer stack 20 or film frame cassette loader. The wafer stack 20 preferably includes a plurality of slots for loading a plurality of wafers 12. Each wafer 12 is loaded or transferred sequentially to the wafer table 16 (as shown in fig. 4) by the automated wafer handler 18 (as shown in fig. 5). Preferably, a vacuum is drawn or formed on the wafer table 16 to ensure that the wafer 12 is held in position on the wafer table 16. The wafer table 16 preferably includes a predetermined plurality of apertures or gaps through which the vacuum is applied, increasing the security and flatness of the positioning of the film frame cassette and frame (not shown) disposed on the wafer table 16. The wafer table 16 is also preferably capable of handling wafers having diameters between 6 and 12 inches, inclusive.
The wafer table 16 is coupled to the XY displacement stage 22 (shown in fig. 1 and 2), which moves the wafer table 16 in the X and Y directions. Displacement of the wafer table 16 correspondingly displaces the wafer 12 placed thereon. In particular, the displacement of the wafer table 16, and hence of the wafer 12 placed thereon, can be controlled to position the wafer 12 at the inspection position. The XY displacement stage 22 is, for example, an air gap linear positioner. The XY displacement stage 22 or air gap linear positioner enables high precision displacement of the wafer table 16 in the X and Y directions with minimal transfer of vibration from other components of the system 10 to the wafer table 16, ensuring smooth and accurate positioning of the wafer 12, or components thereon, at the inspection position. The XY displacement stage 22 and the wafer table 16 are mounted together on the vibration isolators 24 (shown in fig. 2), which absorb shock or vibration and help ensure the flatness of the assembly and of other modules mounted thereon. It will be appreciated by those skilled in the art that alternative mechanisms may be used for coupling to the wafer table 16 or for controlling its displacement while still achieving high precision positioning of the wafer 12 at the inspection position.
The inspection of the wafer 12 for possible defects is performed while the wafer 12 is in motion. That is, the capture of images, such as brightfield images and darkfield images of the wafer 12, preferably occurs as the wafer 12 is transferred past the inspection position. Alternatively, the wafer 12 can be held stationary at the inspection position, for example where high resolution images are desired. The displacement or movement of the wafer 12 during inspection is controlled by software.
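On-the-fly capture of the kind described above implies a constraint between stage speed and exposure (or flash) duration: the blur at the wafer plane is the product of the two, and should stay below the effective pixel size. The sketch below illustrates this constraint; all numeric values are assumed examples, not figures from the source:

```python
# Hedged sketch of the motion-blur constraint implied by capturing
# images while the wafer moves: blur at the wafer = stage speed *
# exposure time, which should stay below one effective pixel.

def motion_blur_um(stage_speed_mm_s, exposure_s):
    """Blur distance at the wafer plane, in micrometres."""
    return stage_speed_mm_s * 1000.0 * exposure_s

def max_exposure_s(stage_speed_mm_s, pixel_on_wafer_um):
    """Longest exposure (or flash duration) keeping blur under one pixel."""
    return pixel_on_wafer_um / (stage_speed_mm_s * 1000.0)

# Example (assumed values): at 100 mm/s and a 1 um effective pixel,
# the exposure or flash must be no longer than 10 microseconds.
```

This is one reason flash lamps, as mentioned for the illuminators above, suit inspection of a moving wafer: a short flash freezes the motion.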
As described above, the system 10 also includes the first tube lens 36 and the second tube lens 38. Preferably, the first tube lens 36 is positioned between the objective lens 40 and the first image capture device 32, and the illumination enters the first image capture device 32 through the first tube lens 36. In addition, the second tube lens 38 is positioned between the objective lens 40 and the second image capture device 34, and the illumination passes through the second tube lens 38 and is reflected by a mirror or prism into the second image capture device 34.
Each objective lens 40 is infinity-corrected. Thus, after passing through an objective lens 40, the illumination or light is collimated. That is, the illumination is collimated between the objective lens 40 and each of the first tube lens 36 and the second tube lens 38. This collimation increases the flexibility with which each of the first image capture device 32 and the second image capture device 34 can be positioned. The use of the tube lenses 36, 38 also removes the need to refocus each of the first image capture device 32 and the second image capture device 34 when a different objective lens is used. Furthermore, the collimation of the illumination allows additional optical components and accessories to be introduced and positioned in the field, particularly between the objective lens 40 and each of the first tube lens 36 and the second tube lens 38, without requiring reconfiguration of the system 10. In addition, this arrangement provides a greater working distance between the objective lens 40 and the wafer 12 than prior art devices, and the longer working distance enables efficient use of the darkfield illumination.
It should be understood by those skilled in the art that the system 10 of the present invention allows flexibility in the design and in-situ reconfiguration of its components. The system 10 of the present invention facilitates the introduction of optical elements or accessories into, and their removal from, the system 10.
The first tube lens 36 focuses the collimated illumination onto the first image capture plane. Likewise, the second tube lens 38 focuses the collimated illumination onto the second image capture plane. Although the foregoing description refers to the use of tube lenses in the system 10 of the present invention, it will be appreciated by those skilled in the art that other optical or mechanical means may be used to collimate the illumination, more specifically the brightfield illumination and the darkfield illumination, before it is focused on the first image capture plane and the second image capture plane, respectively.
The first image capture device 32 and the second image capture device 34 are preferably both disposed along adjacent parallel axes. Preferably, this spatial arrangement of the first image capture device 32 and the second image capture device 34 reduces the space they occupy, resulting in a smaller overall footprint (i.e., greater space efficiency) for the system 10.
The system 10 further includes a plurality of beam splitters and mirrors or reflective surfaces. The beam splitters and mirrors or reflective surfaces are preferably positioned to direct the brightfield illumination from the brightfield illuminator 26 and the darkfield illumination from each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30.
The system 10 further includes a central processing unit (CPU) with a memory or database (also referred to as a post-processor) (not shown). The CPU is electronically connected or coupled to the other components of the system 10, such as the first image capture device 32 and the second image capture device 34. The images captured by the first image capture device 32 and the second image capture device 34 are converted into image signals and transmitted to the CPU.
The CPU can be programmed to process the transmitted image information, more specifically the images, to detect defects on the wafer 12. The detection of defects on the wafer 12 is preferably performed automatically by the system 10, and more particularly by the CPU. Alternatively, at least one manual input may be used to facilitate the detection of defects on the wafer 12.
The CPU is preferably programmed to store information in the database, and may further be programmed to classify the detected defects. In addition, the CPU is preferably programmed to store processing information, more specifically the processed images and detected defects, in the database. The capture of images, the processing of captured images, and the detection of defects on the wafer 12 are described in detail below.
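As an illustration of the reference-comparison approach used for defect detection, the following minimal sketch flags pixels whose grey level deviates from a reference image by more than a threshold. The function name, data and threshold are illustrative assumptions, not the system's actual algorithm:

```python
# Minimal sketch of reference-based defect detection: a captured image
# is compared pixel-by-pixel against a reference ("golden") image, and
# pixels whose difference exceeds a threshold become defect candidates.
# Images are modelled as flat lists of grey levels; a real system would
# use 2-D sensor data plus alignment and noise handling.

def find_defect_pixels(captured, reference, threshold):
    """Return indices where |captured - reference| exceeds threshold."""
    return [i for i, (c, r) in enumerate(zip(captured, reference))
            if abs(c - r) > threshold]

reference = [120, 120, 121, 119, 120, 120]
captured  = [121, 120, 180, 119, 120,  60]   # two anomalous pixels
defects = find_defect_pixels(captured, reference, threshold=20)
# defects -> [2, 5]
```

Flagged candidates could then be classified and stored in the database, as described for the CPU above.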
It will be appreciated by those skilled in the art, in light of the foregoing, that the brightfield illuminator 26 emits or provides brightfield illumination, and that the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 each emit or provide darkfield illumination (hereinafter referred to as darkfield low angle or DLA illumination and darkfield high angle or DHA illumination, respectively), each following a different ray path or optical path.
A flow chart of a preferred first ray path 100 with brightfield illumination is shown in fig. 10.
In step 102 of the first ray path 100, brightfield illumination or light is provided by the brightfield illuminator 26. As previously described, the brightfield illumination exits via the first optical fiber of the brightfield illuminator 26. The first optical fiber directs the brightfield illumination from the brightfield illuminator 26 through a condenser lens 44, which concentrates the brightfield illumination.
In step 104, the first reflective surface 46 or first mirror reflects the bright field illumination, and the bright field illumination reflected by the first reflective surface 46 is directed toward the first beam splitter 48.
In step 106, the first beam splitter 48 reflects at least a portion of the brightfield illumination. The first beam splitter 48 preferably has a reflection/transmission (R/T) ratio of 30:70. However, it will be appreciated by those skilled in the art that the R/T ratio of the first beam splitter 48 can be adjusted as needed to control the intensity or amount of brightfield illumination reflected or transmitted.
The brightfield illumination reflected by the first beam splitter 48 is directed to the inspection position; more specifically, it is directed toward the objective lens 40 positioned directly above the inspection position. In step 108, the brightfield illumination is focused by the objective lens 40 at the inspection position, or onto the wafer 12 placed at the inspection position.
The brightfield illumination focused at the inspection position thereby illuminates the wafer 12, or more specifically the portion of the wafer 12, positioned there. In step 110, the brightfield illumination is reflected by the wafer 12 positioned at the inspection position.
In step 112, the brightfield illumination reflected by the wafer 12 passes through the objective lens 40. As previously described, the objective lens 40 is infinity-corrected; the brightfield illumination passing through it is therefore collimated. The magnification of the brightfield image depends on the magnification factor of the objective lens 40.
The brightfield illumination is directed through the objective lens 40 to the first beam splitter 48. In step 114, the brightfield illumination strikes the first beam splitter 48 and a portion of it is transmitted through, the proportion transmitted depending on the R/T ratio of the first beam splitter 48. The brightfield illumination transmitted through the first beam splitter 48 is directed to the second beam splitter 50.
The second beam splitter 50 of the system 10 is preferably a cube beam splitter having a predetermined R/T ratio of 50:50; this ratio can be varied as desired. A cube beam splitter is used because it splits the received illumination into two optical paths, and those skilled in the art will appreciate that the configuration and shape of the cube beam splitter 50 provide good performance and alignment for this purpose. The proportions of the illumination reflected and transmitted by the second beam splitter 50 depend on its R/T ratio. In step 116, the brightfield illumination strikes the second beam splitter 50 and is either transmitted through or reflected by it.
The brightfield illumination transmitted through the second beam splitter 50 is directed to the first image capture device 32: in step 118 it passes through the first tube lens 36, and in step 120 it enters the first image capture device 32. The first tube lens 36 focuses the collimated brightfield illumination onto the first image capture plane of the first image capture device 32, enabling the first image capture device 32 to capture a brightfield image.
The brightfield image captured at the first image capture plane is preferably converted into an image signal, which is then transmitted or downloaded to the CPU (also referred to as data transfer). The CPU then processes and/or stores the brightfield image.
The brightfield illumination reflected by the second beam splitter 50 is directed toward the second image capture device 34: it passes through the second tube lens 38 in step 122 and enters the second image capture device 34 in step 124. The second tube lens 38 focuses the collimated brightfield illumination onto the second image capture plane, enabling the second image capture device 34 to capture a brightfield image.
The brightfield image captured at the second image capture plane is preferably converted into an image signal, which is then transmitted or downloaded to the CPU (also referred to as data transfer). The CPU then processes and/or stores the transmitted brightfield image.
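Using the splitter ratios stated above (first beam splitter R/T 30:70, second beam splitter 50:50), a rough relative intensity budget for the brightfield path can be computed. This is a back-of-envelope sketch that ignores lens losses and the wafer's reflectivity:

```python
# Back-of-envelope intensity budget for the brightfield ray path,
# using the splitter ratios stated in the text (BS1 R/T = 30:70,
# BS2 = 50:50). Lens losses and wafer reflectivity are ignored, so
# these are relative fractions only, not measured values.

BS1_R, BS1_T = 0.30, 0.70   # first beam splitter reflect/transmit
BS2_R, BS2_T = 0.50, 0.50   # second (cube) beam splitter

def brightfield_fraction_to_cameras():
    """Fraction of emitted brightfield light reaching each camera."""
    toward_wafer = BS1_R                     # step 106: reflected down
    back_through_bs1 = toward_wafer * BS1_T  # step 114: transmitted up
    cam1 = back_through_bs1 * BS2_T          # steps 116-120
    cam2 = back_through_bs1 * BS2_R          # steps 122-124
    return cam1, cam2

cam1, cam2 = brightfield_fraction_to_cameras()
# roughly 10.5% of the source light reaches each image capture device
```

Such a budget makes clear why the R/T ratios are adjustable: changing them trades illumination delivered to the wafer against signal returned to the cameras.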
A preferred flow chart of the second ray path 200 with Dark High Angle (DHA) illumination is shown in fig. 11.
In step 202 of the second ray path 200, DHA illumination is provided by the high angle darkfield illuminator 30. As previously described, the third optical fiber helps direct the DHA illumination provided by the high angle darkfield illuminator 30. The DHA illumination is focused directly at the inspection position without passing through optical elements or accessories such as the objective lens 40.
In step 204, the DHA illumination directed at the inspection position is reflected from the wafer 12, or the portion thereof, placed at the inspection position. In step 206, the DHA illumination reflected from the wafer 12 passes through the objective lens 40; being infinity-corrected, the objective lens 40 collimates the DHA illumination as it passes through.
The DHA illumination is directed through the objective lens 40 toward the first beam splitter 48. In step 208, the DHA illumination strikes the first beam splitter 48 and a portion of it is transmitted through, the proportion transmitted depending on the R/T ratio of the first beam splitter 48.
The DHA illumination transmitted through the first beam splitter 48 is directed toward the second beam splitter 50. In step 210, the DHA illumination strikes the second beam splitter 50, and the proportions transmitted and reflected depend on the R/T ratio of the second beam splitter 50.
The DHA illumination transmitted by the second beam splitter 50 passes through the first tube lens 36 in step 212 and then enters the first image capture device 32 in step 214. The first tube lens 36 focuses the collimated DHA illumination onto the first image capture plane of the first image capture device 32, enabling the first image capture device 32 to capture a darkfield image, more specifically a darkfield high angle (DHA) image.
In addition, a portion of the DHA illumination is reflected by the second beam splitter 50. The DHA illumination reflected from the second beam splitter 50 passes through the second tube lens 38 in step 216 and then enters the second image capture device 34 in step 218. The second tube lens 38 focuses the collimated DHA illumination onto the second image capture plane of the second image capture device 34, enabling the second image capture device 34 to capture a darkfield image, more specifically a darkfield high angle (DHA) image.
As shown in FIG. 12, a preferred third ray path 250 is a flow chart for dark field low angle (DLA) illumination.
In step 252 of the third ray path 250, DLA illumination is provided by the low angle darkfield illuminator 28. The second optical fiber helps direct the DLA illumination provided by the low angle darkfield illuminator 28. The DLA illumination is directed at the inspection position without passing through optical elements or accessories such as the objective lens 40.
In step 254, the DLA illumination directed at the inspection position is reflected by the wafer 12, or the portion thereof, disposed at the inspection position. In step 256, the DLA illumination reflected by the wafer 12 passes through the objective lens 40; being infinity-corrected, the objective lens 40 collimates the DLA illumination as it passes through.
The DLA illumination is directed through the objective lens 40 toward the first beam splitter 48. In step 258, the DLA illumination strikes the first beam splitter 48 and a portion of it is transmitted through, the proportion transmitted depending on the R/T ratio of the first beam splitter 48.
The DLA illumination transmitted through the first beam splitter 48 is then directed to the second beam splitter 50. In step 260, the DLA illumination strikes the second beam splitter 50, and the proportions transmitted and reflected depend on the R/T ratio of the second beam splitter 50.
The DLA illumination transmitted through the second beam splitter 50 passes through the first tube lens 36 in step 262 and then enters the first image capture device 32 in step 264. The first tube lens 36 focuses the collimated DLA illumination onto the first image capture plane of the first image capture device 32, enabling the first image capture device 32 to capture a darkfield image, more specifically a darkfield low angle (DLA) image.
In addition, a portion of the DLA illumination is reflected by the second beam splitter 50. The DLA illumination reflected from the second beam splitter 50 passes through the second tube lens 38 in step 266 and enters the second image capture device 34 in step 268. The second tube lens 38 focuses the collimated DLA illumination onto the second image capture plane of the second image capture device 34, enabling the second image capture device 34 to capture a darkfield image, more specifically a darkfield low angle (DLA) image.
It will be appreciated by those skilled in the art from the foregoing description that the DHA illumination and DLA illumination follow a similar ray path after reflection off the wafer 12. However, the second ray path 200 for DHA illumination and the third ray path 250 for DLA illumination may be individually modified using techniques known in the art. In addition, the angles at which the DHA illumination and the DLA illumination are projected onto the wafer 12 placed at the inspection position may be adjusted as needed to enhance the accuracy of defect detection. For example, the angles at which the DHA illumination and the DLA illumination are projected onto the wafer 12 positioned at the inspection position may be adjusted according to the type of wafer 12 positioned at the inspection position or the needs of the user of the system 10.
The DHA image and the DLA image captured by each of the first image capture device 32 and the second image capture device 34 are converted into image signals, which are then transmitted or downloaded to the CPU. The transfer of the image signals to the CPU is also referred to as data transfer. The CPU then processes or stores the transmitted DHA image or DLA image as needed.
As described above, the first image capture device 32 and the second image capture device 34 have respective predetermined relative spatial positions. The use of the objective lens 40 in conjunction with the first and second tube lenses 36, 38 facilitates the spatial positioning of the first and second image capture devices 32, 34. It will be further appreciated by those skilled in the art that other optical elements or accessories, such as mirrors, may also be used to direct the brightfield illumination, DHA illumination, and DLA illumination, and to facilitate the spatial positioning of the first and second image capture devices 32, 34. More preferably, the spatial positions of the first image capture device 32 and the second image capture device 34 are set with reference to the inspection position. The spatial positioning of the first image capture device 32 and the second image capture device 34 improves the performance of the system 10 in at least one of the accuracy and efficiency of wafer inspection. For example, setting the spatial positions of the first image capture device 32 and the second image capture device 34 relative to the inspection position preferably reduces calibration losses and calibration feedback losses associated with moving an image capture device or camera.
The optical inspection head 14 of the system 10 further includes a third illuminator (hereinafter referred to as the thin line illuminator 52). The thin line illuminator 52 is also called a thin line illumination emitter. The thin line illuminator 52 emits or provides thin line illumination. Preferably, the thin line illuminator 52 is a laser source for providing thin line laser illumination. Alternatively, the thin line illuminator 52 is a broadband illuminator for providing broadband thin line illumination. The thin line illumination is directed at the inspection position, more specifically at a predetermined angle to the wafer 12 disposed at the inspection position, which angle may be changed as desired. A mirror arrangement 54, or mirror, is preferably coupled to, or disposed at a position relative to, the thin line illuminator 52 to direct the thin line illumination toward the inspection position.
The optical inspection head 14 of the system 10 further includes a third image capture device (hereinafter referred to as a three-dimensional (3D) image camera 56). The three-dimensional image camera 56 receives thin line illumination reflected by the wafer 12. The thin line illumination entering the 3D image camera 56 is focused on a 3D image capture plane (not shown) and thereby captures a 3D image of the wafer 12. The 3D optics include a thin line illuminator 52 and a 3D image camera 56 as shown in fig. 13.
The optical inspection head 14 further includes an objective lens for the 3D image camera (hereinafter referred to as the 3D image objective lens 58). The thin line illumination reflected by the wafer 12 passes through the 3D image objective lens 58 before entering the 3D image camera 56. The 3D image objective lens 58 has the capability of infinitely correcting aberrations. Accordingly, the thin line illumination passing through the 3D image objective lens 58 is collimated thereby. The optical inspection head 14 further includes a tube lens 60 for the 3D image objective lens 58 and the 3D image camera 56. The tube lens 60 focuses the collimated thin line illumination onto the 3D image capture plane. The use of the tube lens 60 and the 3D image objective lens 58 in conjunction with the 3D image camera 56 facilitates flexible positioning and reconfiguration of the 3D image camera 56. In addition, the use of the tube lens 60 and the 3D image objective lens 58 in conjunction with the 3D image camera 56 also facilitates the introduction of other optical elements or accessories between the 3D image objective lens 58 and the tube lens 60.
The thin line illuminator 52 and the 3D image camera 56 are used together to facilitate 3D image scanning and inspection of the wafer 12. Both the thin line illuminator 52 and the 3D image camera 56 are coupled to the CPU, which facilitates coordinated or synchronized operation of the thin line illuminator 52 and the 3D image camera 56. More preferably, the system 10 performs automated 3D image scanning and inspection of the wafer 12. The automated 3D image scanning and inspection of the wafer 12 is preferably performed under the control of the CPU.
In addition, the optical inspection head 14 includes a review image capture device 60. The review image capture device 60 is, for example, a color camera. The review image capture device 60 captures color images. Alternatively, the review image capture device 60 captures monochrome images. The review image capture device 60 captures at least one review image of the wafer 12 for classifying and reviewing defects detected on the wafer 12.
The optical inspection head 14 further includes a review brightfield illuminator 62 and a review darkfield illuminator 64 for providing brightfield illumination and darkfield illumination, respectively. The review image capture device 60 receives the brightfield illumination and the darkfield illumination provided by the review brightfield illuminator 62 and the review darkfield illuminator 64, respectively, and reflected by the wafer 12, for capturing review images of the wafer 12. In addition, the review image capture device 60 may capture illumination provided by an optional illuminator, such as one of those described above, for capturing a review image of the wafer 12. The review image capture device 60 captures high resolution images of the wafer 12.
FIG. 14 depicts a review brightfield illuminator 62, a review darkfield illuminator 64, a review image capture device 60, and the illumination patterns therebetween. FIG. 15 depicts a flow diagram of a preferred fourth ray path 300 followed by brightfield illumination provided by the review brightfield illuminator 62.
In step 302 of the fourth ray path 300, brightfield illumination is provided by the review brightfield illuminator 62. The brightfield illumination provided by the review brightfield illuminator 62 is directed toward a first reflective surface 66. In step 304, the brightfield illumination is reflected by the first reflective surface 66 and directed toward a beam splitter 68. In a subsequent step 306, the brightfield illumination projected onto the beam splitter 68 is reflected therefrom and directed toward the inspection position. The fraction of the brightfield illumination reflected by the beam splitter 68 depends on the reflection/transmission (R/T) ratio of the beam splitter 68.
In step 308, the brightfield illumination is reflected by the wafer 12, or a portion thereof, disposed at the inspection position. In step 310, the reflected brightfield illumination passes through a review objective lens 70. The review objective lens 70 may be a single objective lens or a set of objective lenses. The review objective lens 70 has the capability of infinitely correcting aberrations. Accordingly, the brightfield illumination passing through the review objective lens 70 is collimated by the review objective lens 70 in step 310.
In step 312, a portion of the brightfield illumination projected onto the beam splitter 68 is transmitted therethrough. The fraction of the brightfield illumination transmitted through the beam splitter 68 depends on the R/T ratio of the beam splitter 68. The brightfield illumination passes through a review tube lens 72 in step 314 and then enters the review image capture device 60 in step 316. The review tube lens 72 focuses the collimated brightfield illumination onto an image capture plane of the review image capture device 60. The focusing of the brightfield illumination onto the image capture plane of the review image capture device 60 facilitates the capture of a review brightfield image in step 318.
The collimation of the brightfield illumination between the review objective lens 70 and the review tube lens 72 facilitates the introduction of optical elements and accessories therebetween. In addition, the collimation of the brightfield illumination between the review objective lens 70 and the review tube lens 72 facilitates flexible positioning and reconfiguration of the review image capture device 60 as required.
FIG. 16 depicts a flowchart of a preferred fifth ray path 350 followed by the darkfield illumination provided by the review darkfield illuminator 64.
In step 352 of the fifth ray path 350, darkfield illumination is provided by the review darkfield illuminator 64. The darkfield illumination provided by the review darkfield illuminator 64 is directed at, and focused on, the inspection position. In addition, the darkfield illumination provided by the review darkfield illuminator 64 is preferably directed at the inspection position at a predetermined angle to the horizontal plane of the wafer 12. The predetermined angle is preferably a high angle and may be adjusted as needed using techniques well known to those skilled in the art.
In step 354, the darkfield illumination is reflected by the wafer 12, or a portion thereof, positioned at the inspection position. The reflected darkfield illumination then passes through the review objective lens 70 in step 356. In step 356, the darkfield illumination passing through the review objective lens 70 is collimated by the review objective lens 70.
In step 358, a portion of the collimated darkfield illumination projected onto the beam splitter 68 is transmitted therethrough. The fraction of the darkfield illumination transmitted through the beam splitter 68 depends on the R/T ratio of the beam splitter 68. The darkfield illumination then passes through the review tube lens 72 in step 360 and enters the review image capture device 60 in step 362. The review tube lens 72 focuses the collimated darkfield illumination onto the image capture plane of the review image capture device 60. The focusing of the darkfield illumination onto the image capture plane of the review image capture device 60 facilitates the capture of a review darkfield image in step 364. The collimation of each of the brightfield illumination and the darkfield illumination between the review objective lens 70 and the review tube lens 72 enhances the ease of design and reconfiguration of the system 10 and facilitates the introduction of optical components and accessories therebetween. Further, the collimation of the brightfield illumination and the darkfield illumination between the review objective lens 70 and the review tube lens 72 preferably allows flexible positioning and reconfiguration of the review image capture device 60 as required, to facilitate the capture of review brightfield images and review darkfield images while the wafer 12 is in motion.
The captured review brightfield images and review darkfield images are converted into image signals by the review image capture device 60 and transmitted to the programmable controller, where they are processed and stored or saved in a database.
The review image capture device 60 may have a fixed spatial position relative to the inspection position. The fixed spatial position of the review image capture device 60 preferably reduces calibration losses and calibration feedback losses associated with moving the image capture device or camera, thereby enhancing the quality of the capture of review brightfield images and review darkfield images.
The system 10 further includes vibration isolators 24, collectively referred to as a stabilizer mechanism. The system 10 is preferably mounted on the vibration isolators 24, or stabilizer mechanism, during normal operation. The system 10 includes four vibration isolators 24, each positioned at a different corner of the system 10. The vibration isolators 24 help to support and stabilize the system 10. Each vibration isolator 24 is preferably a compressible structure or canister-like structure that absorbs ground vibrations and acts as a cushion to prevent the transmission of ground vibrations to the system 10. By preventing unwanted vibration or physical movement of the system 10, the vibration isolators 24 help to enhance the quality of the images captured by each of the first image capture device 32, the second image capture device 34, the 3D image camera 56 and the review image capture device 60, thereby improving the inspection quality of the wafer 12.
In accordance with an embodiment of the present invention, a preferred method 400 for inspecting a wafer 12 is provided. Fig. 17 depicts a method flow diagram for implementing the method 400. The method 400 for inspection of a wafer 12 performs at least one of detection, classification, and review of defects on the wafer 12.
The method 400 for inspecting the wafer 12 utilizes a reference image (also referred to as a golden reference or golden reference image) against which captured images of the wafer 12 are compared for at least one of detection, classification and review of defects on the wafer 12. For clarity, before the method 400 for inspecting semiconductor wafers is described, an exemplary reference image generation process 900 is provided. A flowchart of the reference image generation process 900 is depicted in FIG. 18.
Implementing the reference image generation process 900
In step 902 of the reference image generation process 900, a predetermined number of reference regions on the wafer 12 are loaded. The list of reference regions is preferably generated by computer software programming. Alternatively, it may be created manually. The list may be stored in a database of the CPU. Alternatively, it may be stored in an external database or storage space.
The predetermined number of reference regions is set on the wafer 12, the quality of which is unknown. The use of multiple reference regions helps to compensate for possible surface variations at different locations of the wafer 12, or variations between multiple wafers 12. Such surface variations include, but are not limited to, varying degrees of flatness and illumination intensity. It will be understood by those skilled in the art that the predetermined number of reference regions may represent the entire surface area of the wafer 12. Alternatively, the predetermined number of reference regions may represent a plurality of predetermined positions on a plurality of dies.
In step 904, a first reference region is selected, followed by step 906 in which a predetermined number ("n") of images is captured at a first capture position of the selected reference region. More specifically, n images are captured at each predetermined position of the selected reference region. The number and positions of the predetermined positions of the selected reference region may be changed as needed, conveniently via software programming or manual input.
The n images can be captured using at least one of the first image capture device 32, the second image capture device 34, the 3D image camera 56, and the review image capture device 60, as needed. In addition, different image capture devices can be used to capture the n images. The illumination used to capture the n images can be varied as desired, for example one of, or a combination of, brightfield illumination, DHA illumination, DLA illumination and thin line illumination. The color, wavelength and intensity of the illumination used to capture the n images may likewise be selected and varied as desired.
The capture of multiple images at each position allows variations in the illumination, optics and image capture devices to be taken into account when the reference images are created. This method of reference image creation reduces unwanted variations in, or effects on, defect detection and classification caused by differences between lighting conditions. In addition, several images of a selected reference region may be captured under each specific lighting condition. The capture of multiple images under each specific lighting condition compensates for, and normalizes, illumination variations caused by the flash lamp and shutter.
The n images are stored in a database of the CPU. Alternatively, the n images may be stored in an external database or storage space as desired. In step 908, the n images captured in step 906 are aligned and preprocessed. Preferably, the n images captured in step 906 are registered to sub-pixel accuracy. Sub-pixel registration of the captured n images is performed using known methods including, but not limited to, binary, grayscale or geometric pattern matching using features such as bumps or pads formed on the one or more wafers.
In step 910, reference intensities for the n images are calculated. More specifically, a reference intensity is calculated for each image captured at a predetermined position of the selected reference region. The calculation of the reference intensities of the n images helps to compensate for, and normalize, color variations at different locations and regions on the wafer 12 (or wafers). Further, the calculation of the reference intensity of each of the n images helps to account for, or compensate for, other surface variations at different locations and regions on the wafer 12 (or wafers).
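As an illustration, the per-image reference intensity of step 910 might be computed and applied as in the following Python sketch. The use of the mean pixel intensity as the reference statistic is an assumption for illustration only; the patent does not fix the exact formula.

```python
import numpy as np

def normalize_by_reference_intensity(images):
    """Hedged sketch of step 910: compute a reference intensity for every
    captured image (here, its mean pixel intensity) and rescale each image
    by it, so that color and brightness variations at different wafer
    locations are normalized before statistics are computed."""
    imgs = np.asarray(images, dtype=np.float64)       # shape (n, H, W)
    ref_intensities = imgs.mean(axis=(1, 2))          # one value per image
    normalized = imgs / ref_intensities[:, None, None]
    return ref_intensities, normalized
```

After this step, two images of the same region captured under slightly different lamp brightness produce the same normalized pixel values.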
The result of step 910 is n calculated reference intensities, corresponding one-to-one to the n images. In step 912, statistical information is calculated for each pixel of each image. The statistical information calculated includes, but is not limited to, the mean, range, standard deviation, and maximum and minimum intensities of each pixel of each image.
More specifically, the mean is the geometric mean of the reference intensities of each pixel of each of the n images. The geometric mean is an average that represents the central tendency of a set of data or n numbers, obtained by multiplying the n numbers together and then taking the n-th root of the product. The equation for the geometric mean is as follows:

μg = (a1 × a2 × ... × an)^(1/n)
the calculation of the geometric mean is different from the mathematical mean or median, which prevents the unreasonable influence of the calculation of the mean intensity of each pixel of each of the n images on the extreme values in the data set.
In addition, a range of absolute intensities (hereinafter referred to as Ri) is calculated for each pixel of the n images. The Ri corresponding to each pixel is the difference between the maximum and minimum absolute intensities of that pixel across the n images.
As previously described, the standard deviation of the intensity of each pixel of each of the n images of the first reference region captured in step 906 may also be calculated. More specifically, the standard deviation is a geometric standard deviation, which describes how dispersed a set of data is about its geometric mean. The formula for the geometric standard deviation is as follows:

σg = exp( √( Σi=1..n (ln ai − ln μg)² / n ) )

where μg is the geometric mean of the set of numbers {a1, a2, ..., an}.
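The per-pixel statistics of step 912 over the n aligned images can be sketched in Python with NumPy as follows. The (n, H, W) array layout is an assumption for illustration; the patent does not prescribe an implementation.

```python
import numpy as np

def pixel_statistics(images):
    """Per-pixel statistics over a stack of n aligned grayscale images.

    `images` is an (n, H, W) array of positive intensities (positivity is
    required for the geometric mean to be defined).  Returns, for every
    pixel, the geometric mean, the intensity range Ri (max - min), and
    the geometric standard deviation, mirroring step 912."""
    imgs = np.asarray(images, dtype=np.float64)
    log_imgs = np.log(imgs)

    geo_mean = np.exp(log_imgs.mean(axis=0))         # (a1*a2*...*an)**(1/n)
    pix_range = imgs.max(axis=0) - imgs.min(axis=0)  # Ri per pixel
    # Geometric standard deviation: exp of the RMS deviation of ln(ai)
    # from ln(geometric mean), matching the formula above.
    geo_std = np.exp(np.sqrt(((log_imgs - np.log(geo_mean)) ** 2).mean(axis=0)))
    return geo_mean, pix_range, geo_std
```

For a pixel whose intensities across two images are 4 and 9, the geometric mean is 6, the range is 5, and the geometric standard deviation is 1.5.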
In step 914, the captured n images, along with their corresponding information, such as their locations on the wafer 12 or in the first reference region, are temporarily saved. The statistical information calculated in step 912 is also temporarily saved in step 914. The data is stored in a database of the CPU. Alternatively, the data may be stored in another database or storage space as needed.
In step 916, it is determined whether more images of the selected reference region are needed. Step 916 is preferably performed under software control, or automatically. The performance of step 916 relies on the information obtained in steps 910 and 912. Alternatively, step 916 may be performed manually or controlled using techniques known in the art.
If it is determined in step 916 that more images of the selected reference region are needed, steps 904 through 916 are repeated. Steps 904 through 916 may be repeated any number of times as desired. When it is determined in step 916 that no more images of the first reference region are needed, step 918 determines whether steps 904 through 916 need to be repeated for a next reference region of the predetermined number of reference regions (for purposes of describing the present invention, a second reference region). Step 918 is preferably software controlled and automatically performed. In addition, step 918 is preferably performed using information obtained in at least one of steps 910, 912 and 916. Alternatively, step 918 may be performed manually or controlled using techniques known to those skilled in the art.
If it is determined in step 918 that images of the second reference region need to be captured, i.e., that steps 904 through 916 need to be repeated for the second reference region, a signal is generated for repeating steps 904 through 916. Steps 904 through 918 may be repeated as many times as desired. The repetition of steps 904 through 916 is preferably controlled by software, or automated.
When it is determined in step 918 that steps 904 through 918 do not need to be repeated, i.e., that images of a next reference region of the predetermined number of reference regions are not needed, a golden reference image (hereinafter referred to as a reference image) is then calculated in step 920.
The calculation or derivation of the reference image is preferably controlled by software and performed via a series of program instructions. The following steps are performed to calculate the reference image. It will be appreciated by those skilled in the art that other steps or techniques may supplement the following steps in the performance of the reference image calculation.
In step 922, pixels having reference intensities greater than a predetermined limit are determined. In addition, pixels having an intensity range greater than a predetermined range are determined in step 922. The predetermined limit and range of step 922 may be selected and determined by software, or determined manually. In step 924, pixels having a standard deviation of pixel intensity greater than a preset value are identified. The preset value of step 924 may be selected or determined by software, or determined manually. If it is determined in steps 922 and 924 that there are pixels whose reference intensities exceed the preset value or range, then in step 926 the previously saved images, such as the images saved in step 914, are reloaded for repeating one or more of steps 904 through 924.
Steps 922 through 926 facilitate the identification of images containing pixels of particular pixel intensities. More specifically, steps 922 through 926 enable the identification of images containing pixels whose reference intensities fall outside the predetermined limits and ranges, i.e., the identification of "undesirable" images. More specifically, steps 922 through 926 eliminate "undesirable" pixels from the reference image calculation, helping to prevent the "undesirable" pixels from affecting the final reference pixel values of the reference image.
The "undesirable" images are discarded, which facilitates the exclusion of defective data or images, thereby preventing such defective data from corrupting the resulting reference image. In step 928, the images containing only pixels within the preset limits and preset ranges (i.e., the images that were not discarded) are consolidated.
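The rejection of "undesirable" images in steps 922 through 928 might be sketched as follows. The thresholds and the exact rejection rule (maximum intensity and maximum deviation from the per-pixel mean of the stack) are illustrative assumptions; the patent leaves them programmable.

```python
import numpy as np

def consolidate_reference_images(images, max_intensity, max_deviation):
    """Hedged sketch of steps 922-928: discard images containing pixels
    whose intensity exceeds a preset limit, or which deviate too far from
    the per-pixel mean of the stack; the remaining images are consolidated
    for the golden reference calculation."""
    imgs = np.asarray(images, dtype=np.float64)
    mean = imgs.mean(axis=0)
    kept = [img for img in imgs
            if (img <= max_intensity).all()
            and (np.abs(img - mean) <= max_deviation).all()]
    return kept
```

An image with a saturated or defective pixel is thus dropped before the reference statistics are finalized, so it cannot bias the final reference pixel values.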
Preferably, the reference image generation process 900 generates the following image data:
Normalized average intensity of each pixel of each consolidated image.
Standard deviation of the intensity of each pixel of each consolidated image.
Maximum and minimum intensities of each pixel of each consolidated image.
Average reference intensity of each of the predetermined number of reference regions determined in step 902.
The images consolidated in step 928 represent the reference images. The reference images, together with the above image data, are saved in step 928. Both the reference images and their corresponding image data are preferably stored in a database of the CPU. Alternatively, the reference images and their corresponding image data may be stored in another database or storage space. It will be appreciated by those skilled in the art that steps 922 through 926 help to reduce the amount and size of storage space required to store the reference images and their corresponding data, thereby enabling the method 400 to be performed more quickly or more accurately.
The average intensity of each pixel is preferably normalized to 255 for display and visualization of the reference image. It will be appreciated by those skilled in the art that the average intensity of each pixel may be normalized to a selectable value for displaying and visualizing the reference image.
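The display normalization described above might look like the following sketch, which rescales the per-pixel averages so the brightest pixel maps to 255 (or any other selectable target value):

```python
import numpy as np

def normalize_for_display(mean_image, target=255.0):
    """Scale the per-pixel average intensities of the reference image so
    that the brightest pixel maps to `target` (255 by default), for
    display and visualization of the reference image."""
    img = np.asarray(mean_image, dtype=np.float64)
    return img * (target / img.max())
```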
Steps 904 through 928 may be repeated a predetermined number of times so that at least one of the first image capture device 32, the second image capture device 34, and the review image capture device 60 captures a corresponding plurality of images. Additionally, steps 904 through 928 may be repeated to capture images under different illuminations or illumination conditions, such as brightfield illumination, DHA illumination, DLA illumination, and thin line illumination, as desired. The repetition of steps 904 through 928 produces reference images for multiple illuminations or illumination conditions, captured with multiple image capture devices, as needed.
As previously described, the reference images generated from multiple reference regions and multiple lighting conditions of the wafer 12 (or multiple wafers) help ensure that variations in the quality of captured images accompanying changes in lighting conditions are accounted for and compensated. For example, the capture of reference images at different reference regions of the wafer 12 (e.g., different regions on the wafer 12) preferably ensures that color variations at different regions on the wafer 12 are accounted for and compensated.
Steps 904 through 928 are preferably executed and controlled by the CPU. Preferably, steps 904 through 928 are at least one of performed or controlled by software programming. Additionally, at least one of steps 904 through 928 may be manually assisted, if desired. The reference images generated by the reference image generation process 900 are compared against images subsequently captured of the wafer 12, thereby enabling at least one of detection, classification, and review of defects to be performed on the wafer 12.
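One plausible way to use the stored reference statistics for defect detection is a per-pixel deviation test, sketched below. The specific rule (flagging pixels more than k standard deviations from the reference mean) and the default k are assumptions for illustration; the patent does not fix the exact comparison rule.

```python
import numpy as np

def detect_defects(image, ref_mean, ref_std, k=3.0):
    """Flag pixels of a captured image that deviate from the golden
    reference by more than k reference standard deviations.  Returns a
    boolean defect mask of the same shape as the image."""
    img = np.asarray(image, dtype=np.float64)
    diff = np.abs(img - np.asarray(ref_mean, dtype=np.float64))
    return diff > k * np.asarray(ref_std, dtype=np.float64)
```

A pixel matching the reference within its expected variation is left unflagged; a pixel far outside that band is marked for classification and review.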
As previously mentioned, the present invention provides a method 400 for performing inspection of the wafer 12, thereby performing at least one of detection, classification and review of defects on semiconductor wafers.
In step 402 of the method 400, a wafer 12 is loaded by the system 10 onto the wafer table 16 for inspection. The wafer 12 is preferably removed from the wafer stack 20 and transferred to the wafer table 16 by the robotic wafer handler 18. Suction or vacuum is applied to the wafer table 16 to secure the wafer 12 thereon.
The wafer 12 preferably includes a wafer identification number (ID number) or bar code. The wafer ID number or bar code is engraved or affixed on the surface of the wafer 12, particularly at the periphery of the surface of the wafer 12. The wafer ID number or bar code helps to identify the wafer 12 and ensures that the wafer 12 is accurately loaded onto the wafer table 16.
In step 404, a wafer map of the wafer 12 loaded onto the wafer table 16 is obtained, which may be downloaded from the programmable controller. Alternatively, the wafer map may be retrieved from an external database or processor. Further, it will be understood by those skilled in the art that the wafer map may be prepared or derived from the wafer 12 carried on the movable platform using known techniques or methods.
In step 406, one or more reference positions are acquired or determined on the wafer map, and at least one of an X-Y translational offset and a θ rotational offset of the wafer is calculated for compensation using techniques well known to those skilled in the art.
In a subsequent step 408, a wafer scan motion path and a plurality of image capture positions are calculated and determined. The wafer map obtained in step 404 preferably facilitates the calculation of the wafer scan motion path and the plurality of image capture positions. Preferably, the wafer scan motion path is calculated based on one or more of several known parameters. Known parameters include, but are not limited to, rotational offset compensation, wafer size, wafer die size, inspection area, wafer scan speed, and encoder position. Each of the plurality of image capture positions reflects, or corresponds to, a position on the wafer 12 at which an image is to be captured. Preferably, each of the plurality of image capture positions may be varied using techniques well known to those skilled in the art.
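A minimal sketch of how the image capture positions of step 408 might be laid out is shown below. The serpentine (boustrophedon) ordering, the rectangular field of view, and the unit-free parameters are all assumptions for illustration; the patent derives the actual path from the wafer map, die size, scan speed and encoder positions.

```python
def serpentine_capture_positions(wafer_w, wafer_h, fov_w, fov_h):
    """Lay image-capture positions on a serpentine grid covering a
    wafer_w x wafer_h area with a fov_w x fov_h camera field of view,
    so the XY table reverses direction on alternate rows instead of
    rewinding to the start of each row."""
    cols = -(-wafer_w // fov_w)   # ceiling division
    rows = -(-wafer_h // fov_h)
    positions = []
    for r in range(rows):
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            positions.append((c * fov_w, r * fov_h))
    return positions
```

Such an ordering keeps the stage in continuous motion, which matters because, as noted elsewhere in this description, images are captured while the wafer is moving.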
Preferably, the system 10 automatically performs steps 404 through 408, more specifically via the programmable controller of the system 10. Further, any of steps 404 through 408 may be performed or assisted by an optional processor.
In step 410, the availability of an appropriate golden reference (hereinafter referred to as a reference image) is determined by the programmable controller of the system 10. If a reference image is not available, a reference image is generated in step 412 by the reference image generation process 900 described above.
Preferably, a reference image is first obtained or generated before a preferred two-dimensional (2D) wafer scanning process 500 is performed in step 414. A process flow diagram of the preferred two-dimensional (2D) wafer scanning process 500 is shown in FIG. 19.
A preferred two-dimensional (2D) wafer scanning process 500
The 2D wafer scanning process 500 acquires bright field images and dark field images via the first image acquisition device 32 and the second image acquisition device 34.
In step 502 of the 2D wafer scanning process 500, the first image capture device 32 is exposed. In step 504, a first illumination is provided or emitted. For example, the first illumination may be brightfield illumination provided or emitted by the brightfield illuminator 26, DHA illumination provided or emitted by the high angle darkfield illuminator 30, or DLA illumination provided or emitted by the low angle darkfield illuminator 28. The selection of the first illumination provided or emitted in step 504 preferably depends on an illumination configuration (not shown). Preferably, the illumination configuration is a component of the system 10 and is electrically coupled to the illuminators (28, 30, 52, 64 and 66) of the system 10. Alternatively, the illumination configuration is a component of the CPU.
The image capture devices 32 and 34 are capable of receiving or capturing any combination of the illuminations provided or emitted by the brightfield illuminator 26, the DHA illuminator 30, and the DLA illuminator 28. The table of FIG. 20 shows examples of possible combinations of the first illumination received by the first image capture device 32 and the second illumination received by the second image capture device 34. If the first image capture device 32 and the second image capture device 34 receive identical illumination, the throughput of such a configuration will be the highest among all possible configurations.
For example, configuration 1 is selected via the illumination configuration, as shown in the table of FIG. 20; accordingly, the first illumination is brightfield illumination provided by the brightfield illuminator 26.
Preferably, steps 502 and 504 are performed simultaneously. The performance of steps 502 and 504 results in the capture of a first image by the first image capture device 32, as shown in fig. 22a. In step 506, the first image captured by the first image capture device 32 is converted into image signals and transmitted to the CPU, preferably via a data transfer process, and stored in a database or storage memory system.
In step 508, the second image capture device 34 is exposed. In step 510, a second illumination is provided or emitted. As with the first illumination, the selection of the second illumination preferably depends on the illumination configuration. For the purposes of the present description, configuration 1 of the table of fig. 20 is selected by the illumination configuration; accordingly, the second illumination is DHA illumination provided by the high angle darkfield illuminator 30. However, a person skilled in the art will appreciate that the first illumination and the second illumination may be selected as desired, for example according to any of the alternative configurations of the table shown in fig. 20.
Preferably, steps 508 and 510 are performed simultaneously. Preferably, step 506 is performed concurrently with steps 508 and 510. Steps 508 and 510 result in the capture of a second image by the second image capture device 34, as shown in fig. 22b. In step 512, the second image captured by the second image capture device 34 is converted into image signals and transmitted to the programmable controller, preferably via a data transfer process, and stored in a database or storage memory system.
Fig. 21 is a timing diagram illustrating the exposure of the first image capture device 32 together with the provision of the first illumination, the exposure of the second image capture device 34 together with the provision of the second illumination, and the data transfer performed by each of the first image capture device 32 and the second image capture device 34. Steps 502 through 512 may be repeated any number of times for capturing a corresponding plurality of first and second images of the wafer 12. More specifically, steps 502 through 512 are preferably repeated to capture an image under each of the first illumination and the second illumination at each of a plurality of image capture positions of the wafer 12 along the wafer scan motion path calculated in step 408.
As described above, each of the first images and second images is converted into image signals and transmitted to the programmable controller and stored in the database or storage memory system. Each of steps 502 through 512 is performed while the wafer 12 is in motion; that is, the first and second images are captured as the wafer 12 is displaced along the wafer scan motion path. Accordingly, a person skilled in the art will understand that between steps 502 and 504 (preferably occurring simultaneously) and steps 508 and 510 (preferably also occurring simultaneously), the wafer 12 is displaced along the wafer scan motion path by a predetermined distance that depends on a number of factors including, but not limited to, the speed of displacement of the wafer 12 along the wafer scan motion path and the time required for any of steps 502 through 512. The predetermined distance may be controlled or varied as desired, for example by the CPU. The control and variation of the predetermined distance may be effected by software, or facilitated by manual input, as convenient.
Accordingly, a predetermined image offset exists between the first image and the second image when they are to be superimposed or compared. Fig. 22c shows a combined image of the first image and the second image, illustrating the image offset that results from the capture of the first and second images while the wafer 12 is in motion. The predetermined image offset depends on a number of factors including, but not limited to, the speed of displacement of the wafer 12 along the wafer scan motion path and the time required for any of steps 502 through 512. The control and variation of the predetermined image offset may be effected by software, or facilitated by manual input, as convenient.
In step 514, XY encoder values are retrieved; preferably, the XY encoder values are retrieved during each of steps 504 and 510. Preferably, the XY encoder values represent the position (X-Y displacement) of the wafer 12 along the wafer scan motion path. The XY encoder values are retrieved for use in step 516 for calculating a coarse image offset between the first image and the second image (e.g., the offset of the second image relative to the first image). A fine image offset is then calculated by sub-pixel image alignment using pattern matching techniques. The final image offset is obtained by applying a predetermined mathematical formula to the coarse and fine image offsets. The predetermined mathematical formula may be adjusted as required using techniques known to a person skilled in the art.
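The two-stage offset calculation of steps 514 and 516 can be sketched as follows. This is a minimal illustration assuming a fixed micrometres-per-pixel scale; for brevity the refinement searches only to whole-pixel precision, whereas the process described above refines to sub-pixel precision by pattern matching. The function names and the search-window approach are illustrative assumptions.

```python
import numpy as np

def coarse_offset(xy_first, xy_second, um_per_pixel):
    """Coarse image offset (in pixels) from the XY encoder values recorded
    at the first and second exposures (steps 504 and 510)."""
    dx_um = xy_second[0] - xy_first[0]
    dy_um = xy_second[1] - xy_first[1]
    return dx_um / um_per_pixel, dy_um / um_per_pixel

def refine_offset(first, second, coarse, search=2):
    """Refine the coarse offset by pattern matching: search a small window
    around the coarse estimate for the shift minimising the mean squared
    difference between the two images (whole-pixel search for brevity)."""
    cx, cy = int(round(coarse[0])), int(round(coarse[1]))
    best, best_err = (cx, cy), float("inf")
    for dx in range(cx - search, cx + search + 1):
        for dy in range(cy - search, cy + search + 1):
            shifted = np.roll(np.roll(second, -dy, axis=0), -dx, axis=1)
            err = float(np.mean((first.astype(float) - shifted.astype(float)) ** 2))
            if err < best_err:
                best, best_err = (dx, dy), err
    return best
```

A real implementation would interpolate around the best whole-pixel match to obtain the sub-pixel fine offset described above.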
The completion of the 2D wafer scanning process 500 in step 414 of the method 400 results in the capture of a plurality of images of the semiconductor wafer 12, preferably at the calculated image capture positions along the wafer scan motion path.
In step 416 of the method 400, a preferred two-dimensional (2D) image processing process 600 is performed for at least one of identification, detection, classification, consolidation and storage of defects on the semiconductor wafer 12. Fig. 23 shows a process flow diagram of the preferred 2D image processing process 600.
Preferred 2D image processing procedure 600
The 2D image processing process 600 facilitates the processing of images captured during the 2D wafer scanning process 500. In addition, the 2D image processing process 600 facilitates at least one of identification, detection, classification, consolidation and storage of defects on the wafer 12.
In step 602 of the 2D image processing process 600, a first working image is selected and loaded into a memory work space. The first working image is selected from the plurality of first images and second images captured and saved during the 2D wafer scanning process 500. For the purposes of the present description, the first working image represents a first image captured by the first image capture device 32 during the 2D wafer scanning process 500.
In step 604, sub-pixel alignment of the first working image is performed. The sub-pixel alignment is performed using one or more pattern matching techniques, for example one of binary, grayscale and geometric pattern matching methods. Once the first working image is aligned, a reference intensity is calculated from one or more predetermined regions of interest of the first working image, as in step 606. Steps 604 and 606 may be collectively referred to as preprocessing of the first working image. It will be readily appreciated that the preprocessing is not limited to the above steps; the preprocessing may include other steps as required.
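The reference-intensity calculation of step 606 may be sketched as below. The averaging over rectangular regions of interest, and the normalisation helper that applies the result, are illustrative assumptions, since the description leaves the exact formula and the use of the reference intensity open.

```python
import numpy as np

def reference_intensity(image, rois):
    """Mean intensity over one or more predetermined regions of interest
    (step 606); each ROI is an (x, y, w, h) tuple in pixel coordinates."""
    samples = [image[y:y + h, x:x + w].ravel() for (x, y, w, h) in rois]
    return float(np.mean(np.concatenate(samples)))

def normalise_to_reference(image, roi_intensity, target_intensity):
    """Scale a working image so that its reference intensity matches that of
    the reference image, compensating illumination drift between captures
    (hypothetical helper, not specified by the description)."""
    return image.astype(float) * (target_intensity / roi_intensity)
```

Such normalisation makes the later pixel-by-pixel comparison with the golden reference robust to small illumination variations between wafers.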
In a subsequent step 608, a first golden reference image, or reference image, is selected. The first reference image selected in step 608 corresponds to, or matches, the first working image. Preferably, the first reference image is selected from a database of reference images generated by a preferred reference image creation process 900 performed in step 412 of the method 400. The preferred reference image creation process 900, shown in fig. 18, is described in detail above.
In step 610, a data value for each pixel of the first working image is calculated. In step 612, the calculated data value of each pixel of the first working image is checked against a predetermined threshold, together with predetermined increments or other factors.
In step 614, the first working image is then matched with, or compared against, the first reference image selected in step 608. The matching or comparison of the first working image with the first reference image facilitates the detection and identification of defects on the wafer 12. Preferably, the CPU is programmed for effecting automated matching between the first working image and the first reference image. The programmable controller preferably executes a series of computing instructions or algorithms for matching the first working image with the first reference image to thereby detect or identify defects on the wafer 12.
In step 616, the 2D image processing process 600 determines the presence of one or more defects. If more than one defect is found or identified in step 616, the algorithm ranks the defects from largest to smallest based on one or all of area, length, width, variance, compactness, fill, edge strength and the like. Further, the algorithm selects only defects meeting user-specified criteria for the calculation of defective regions of interest (DROIs). If a defect (or defects) is found or identified in step 616, DROIs on the wafer 12 are then calculated in step 618. Preferably, the DROIs are dynamically calculated by the CPU in step 618. The CPU is preferably programmed (e.g., includes or embodies a series of computing instructions or software) for the calculation of the DROIs.
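Steps 610 through 618 can be sketched as a difference threshold followed by blob grouping. The flood-fill grouping and the area-only ranking below are simplifying assumptions standing in for the multi-criterion ranking (area, length, width, and so on) described above; the patent does not specify its blob-analysis algorithm.

```python
import numpy as np

def detect_defect_regions(working, reference, threshold):
    """Flag pixels whose absolute difference from the reference image
    exceeds the predetermined threshold (steps 610 through 616)."""
    return np.abs(working.astype(float) - reference.astype(float)) > threshold

def defect_rois(mask, min_area=1):
    """Group flagged pixels into rectangular DROIs (step 618), ranked from
    largest to smallest area; a simple 4-connected flood fill stands in for
    the unspecified blob-analysis algorithm."""
    h, w = mask.shape
    visited = np.zeros_like(mask, dtype=bool)
    rois = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                stack, pixels = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    rois.append((min(xs), min(ys),
                                 max(xs) - min(xs) + 1,
                                 max(ys) - min(ys) + 1,
                                 len(pixels)))
    return sorted(rois, key=lambda r: r[4], reverse=True)
```

Each returned tuple is (x, y, width, height, area), sorted so that the largest defect is inspected first.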
In step 620, the corresponding DROIs of a second working image are inspected. More specifically, the second working image is a second image captured by the second image capture device 34 during the 2D wafer scanning process 500. That is, after sub-pixel alignment of the second working image is performed, the DROIs of the second working image (the image corresponding to the first working image) are inspected in step 620. The inspection of the DROIs of the second working image preferably facilitates confirmation of the defects detected in step 616. More preferably, step 620 facilitates classification of the defects detected in step 616.
The system 10 processes only the DROIs of the second working image rather than the entire second working image. Alternatively, if no defect is found in step 616, the process ends (i.e., the steps following step 616 are not performed). This further reduces the amount of resources or processing bandwidth required for processing the second working image. It will be readily appreciated that such an intelligent processing sequence (i.e., the flow of the steps of the process) is dynamically determined or executed based on the results of the preceding steps of the process. This intelligent processing of the 2D image processing process 600 advantageously enhances the operating efficiency of the system 10 (i.e., the number of wafers inspected per hour by the system 10).
In step 622, the detected defects, more specifically their locations or positions and their classifications, are saved. Preferably, the detected defects, their locations or positions and their classifications are saved to a database of the CPU; alternatively, they may be saved to an alternative database or memory storage space.
Steps 602 through 622 may be repeated or looped any number of times for processing the images captured during the 2D wafer scanning process 500. Each image captured during the 2D wafer scanning process 500 is sequentially loaded into the memory work space and processed to facilitate detection of defects that may be present on the wafer 12. Steps 602 through 622, and the repetitions thereof, facilitate at least one of detection, identification and classification of defects that may be present at any of the plurality of image capture positions of the wafer 12 along the wafer scan motion path.
In step 624, the defects detected by the 2D image processing process 600, together with their locations and classifications, are consolidated and saved, preferably into a database of the CPU; alternatively, the defects and their locations and classifications may be consolidated and saved in an alternative database or memory storage space.
The 2D image processing process 600 is preferably an automated process. Preferably, the CPU is programmed with, or embodies, a series of computing instructions or software for automatically performing the 2D image processing process 600. In addition, the 2D image processing process 600 may accept manual input as required for convenience.
The completion of the 2D image processing process 600 in step 416 of the method 400 results in the consolidation and storage of the defects detected using brightfield illumination, DHA illumination and DLA illumination, together with their locations and classifications.
In a subsequent step 418 of the method 400, a first preferred three-dimensional (3D) wafer scanning process 700 is performed. Preferably, the first 3D wafer scanning process 700 enables the capture of 3D profile images of the wafer 12 to thereby facilitate subsequent formation of a 3D profile of the wafer 12. The wafer 12 is displaced along the calculated wafer scan motion path for capturing 3D images at any one or more of the plurality of image capture positions of the wafer 12 along the wafer scan motion path calculated in step 408.
Preferred 3D wafer scanning Process 700
In step 702 of the 3D wafer scanning process 700, the thin line illuminator 52 provides or emits thin line illumination. In step 704, the thin line illumination is directed at the inspection position by the mirror setup 54.
In a subsequent step 706, the thin line illumination is reflected by the wafer 12, or a portion thereof, positioned at the inspection position. In step 708, the thin line illumination reflected by the wafer 12 passes through the 3D profile objective lens 58. The 3D profile objective lens 58 is preferably an infinity-corrected objective lens. Accordingly, the passage of the thin line illumination through the 3D profile objective lens 58 in step 708 collimates the thin line illumination.
In step 710, the collimated thin line illumination then passes through the tube lens 60 before entering the 3D profile camera 56 in step 712. The tube lens 60 preferably focuses the collimated thin line illumination onto an image capture plane of the 3D profile camera 56. In step 714, the thin line illumination focused onto the image capture plane enables the capture of a first 3D image of the wafer 12. The collimation of the thin line illumination between the 3D profile objective lens 58 and the tube lens 60 facilitates the introduction of optical components or accessories therebetween, and facilitates flexible positioning and reconfiguration of the 3D profile camera 56.
As previously described, the thin line illumination is provided by a laser or a broadband fiber optic illumination source. In addition, the thin line illumination is preferably directed at the inspection position at a specified angle relative to the horizontal plane in which the wafer 12 is disposed. The angle at which the thin line illumination is directed at the inspection position may be varied as required by a person skilled in the art using techniques known in the art. A person skilled in the art will also appreciate that the wavelength of the thin line illumination may be selected and varied as required. Preferably, the wavelength, or broadband wavelength range, of the thin line illumination is selected for enhancing at least one of detection, verification and classification of defects. For example, the wavelength of the thin line illumination may be the same as that of one of the brightfield illumination, DHA illumination and DLA illumination.
In step 716, the first 3D image is converted into image signals and transmitted to the CPU. In step 718, the first 3D image is processed by the CPU for at least one of 3D height measurement, coplanarity measurement, and detection and classification of defects.
Preferably, steps 702 through 718 may be repeated any number of times for capturing a corresponding plurality of 3D images, transmitting them to the CPU and processing them. Steps 702 through 718 may be performed at any predetermined number of, or at selected, image capture positions of the wafer 12 along the wafer scan motion path.
Preferably, the first 3D wafer scanning process 700 enhances the accuracy with which the preferred method 400 inspects the semiconductor wafer 12. More specifically, the first 3D wafer scanning process 700 enhances the accuracy of the defect detection performed by the method 400. The 3D wafer scanning process 700 provides detailed 3D metrology information, such as coplanarity and height of three-dimensional structures, for example solder balls, gold bumps and individual dies of the wafer 12, or warpage of the whole wafer 12.
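The 3D height measurement of step 718 rests on triangulation of the thin line: a surface feature of a given height shifts the imaged line laterally on the camera. The sketch below assumes the classic geometry in which the illumination strikes the wafer at a stated angle to the horizontal and the camera views normally; the actual optics of the system 10 are not specified above, so the formula is illustrative only.

```python
import math

def height_from_line_shift(pixel_shift, um_per_pixel, incidence_deg):
    """Convert the lateral shift of the thin illumination line on the image
    capture plane into a surface height, assuming illumination inclined at
    incidence_deg to the wafer plane and normal camera viewing
    (illustrative triangulation geometry, not the patent's exact optics)."""
    lateral_um = pixel_shift * um_per_pixel
    return lateral_um * math.tan(math.radians(incidence_deg))
```

Sweeping the line across the wafer and applying such a conversion at each scan position yields the height map from which coplanarity and warpage are derived.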
Preferably, the results of step 718, and of the repetitions thereof for processing further 3D images, are stored in a database of the CPU. In addition, the results of step 718 and the repetitions thereof may, as required, be stored in an alternative database or memory storage space.
A preferred second three-dimensional (3D) wafer scanning process 750 may also be used in place of the first preferred 3D wafer scanning process 700. The optical path of the preferred second 3D wafer scanning process 750 is shown in fig. 25, and a process flow diagram of the correspondingly preferred second 3D wafer scanning process 750 is shown in fig. 26.
In step 752 of the second 3D wafer scanning process 750, thin line illumination is provided or emitted by the thin line illuminator 52. In step 754, the thin line illumination is directed at the inspection position by a reflector assembly 80. The reflector assembly 80 is, for example, a known prism set, or an assembly comprising two mirrors or prisms.
In step 756, the thin line illumination is reflected by the wafer 12. The thin line illumination reflected by the wafer 12 may be reflected in different directions depending on the surface profile of the wafer 12. For example, variations in the structure and geometry of the wafer 12 may cause the thin line illumination to be reflected by the wafer 12 in different directions (or to be dispersed or scattered).
The reflector assembly 80 receives the thin line illumination reflected by the wafer 12. More specifically, the reflector assembly 80 is configured for collecting thin line illumination reflected in multiple directions; preferably, the reflector assembly 80 comprises a first pair of mirrors or prisms 82 and a second pair of mirrors or prisms 84. In step 758, the reflected thin line illumination is transmitted along two optical paths, a first optical path effected or directed by the first pair of mirrors or prisms 82 and a second optical path effected or directed by the second pair of mirrors or prisms 84. A person skilled in the art will appreciate that the reflector assembly 80 may be configured as required for directing the collected reflected thin line illumination along a different number of optical paths.
In step 760, the thin line illumination transmitted along each of the first and second optical paths passes through the 3D profile objective lens 58, whereby the two thin line illuminations are collimated. The first pair of mirrors or prisms 82 and the second pair of mirrors or prisms 84 are preferably symmetrically disposed.
In step 762, the two collimated thin line illuminations pass through the tube lens 60. In step 764, the two thin line illuminations then enter the 3D profile camera 56. The tube lens 60 facilitates the focusing of the two thin line illuminations onto the image capture plane of the 3D profile camera 56. In step 766, a multi-view 3D profile image of the wafer 12 is captured with the two thin line illuminations collectively focused onto the image capture plane of the 3D profile camera 56.
In step 768, the multi-view 3D profile image of the wafer 12 is converted into image signals and transmitted to the CPU. In step 770, the multi-view 3D image is processed by the CPU for at least one of 3D height measurement, coplanarity measurement, and detection and classification of defects. Steps 752 through 770 may be repeated any number of times for capturing a corresponding number of multi-view 3D images, transmitting them to the CPU and processing them.
The performance of the second 3D wafer scanning process 750 enables the system 10 to capture two views of the 3D profile of the wafer 12 with a single 3D image capture device 56. In other words, the second 3D wafer scanning process 750 enables the capture of images having multiple views of the wafer 12. Each captured multi-view 3D image registers illumination reflected by the wafer 12 in a different direction. The capture of multi-view 3D images of the wafer 12 (i.e., 3D images having multiple views of the wafer 12) enhances the accuracy of the 3D profiling, or inspection, of the wafer 12. In addition, illumination reflected in different directions by the wafer 12 may be redirected, using the two symmetrically disposed pairs of mirrors or prisms 82 and 84, for collection by the 3D image capture device 56. A person skilled in the art will appreciate that the reflector assembly 80 may be configured for directing illumination reflected by the wafer 12 in multiple directions (e.g., two, three, four or five directions) for collective capture by the single 3D image capture device 56.
To capture two views of the same profile of the wafer 12, existing equipment employs multiple image capture devices, which is expensive, bulky and complex. Moreover, where the profile of the wafer 12 is discontinuous, the reflected illumination does not consistently return along the predetermined optical paths to the multiple image capture devices. That is, dispersion of the illumination due to variations in the structure and geometry of the surface of the wafer 12 often results in inaccurate capture of single-view images of the wafer 12.
To mitigate the effects of such variations in the reflection of illumination from the wafer, the system 10 of the present invention collects illumination reflected by the wafer 12 in different directions with the single 3D image capture device 56. More specifically, the system 10 utilizes the reflector assembly 80 for receiving and directing illumination reflected by the wafer 12 in different directions for subsequent collective capture by the 3D image capture device 56. This helps to enhance the accuracy of 3D profile measurement and inspection of the wafer 12. The use of a single camera, more specifically the single 3D image capture device 56, also enhances the cost and space efficiency of the system 10. Still further, the use of a single objective lens and a single tube lens (in this case, the objective lens 58 and the tube lens 60) for capturing multiple views of the wafer 12 eases alignment and enhances alignment accuracy.
After the completion of either the first 3D wafer scanning process 700 or the second 3D wafer scanning process 750, the defects detected on the wafer 12, together with their locations and classifications, obtained by the performance of steps 416 and 418, are preferably consolidated. The consolidation of the defects and their locations and classifications facilitates the calculation of a review scan motion path, as in step 420. Preferably, the review scan motion path is calculated based on the locations of the defects detected on the wafer 12 along the wafer scan motion path. In addition, defect image capture positions along the review scan motion path are calculated or determined in step 420. The defect image capture positions calculated in step 420 preferably correspond to the positions on the wafer 12 at which defects were detected in steps 416 and 418 (e.g., the DROIs of the wafer 12).
In step 422 of the preferred method 400, a preferred review process 800 is performed for reviewing the defects detected in steps 416 and 418. Preferably, the review process 800 may be performed in at least one of a first mode 800a, a second mode 800b and a third mode 800c. A process flow diagram of the preferred review process 800 is shown in fig. 27.
The preferred review Process 800
As previously mentioned, the review process 800 preferably comprises three review modes, namely the first mode 800a, the second mode 800b and the third mode 800c. In step 802, one of the review modes (i.e., the first mode 800a, the second mode 800b or the third mode 800c) is selected.
First mode 800a of review Process 800
In step 804 of the first mode 800a of the review process 800, the first and second images of all the defects detected during the 2D image processing process 600, as in step 416 of the method 400, are consolidated and saved.
In step 806, the consolidated and saved first and second images of the defects detected on the semiconductor wafer 12 are uploaded or transferred to an external memory or server for an offline review.
In step 808, the wafer 12 (i.e., the wafer 12 currently on the wafer table 16) is unloaded and a second wafer is loaded by the robotic arm 18 from the wafer stack 20 onto the wafer table 16. In step 810, each of steps 804 through 808 is repeated for the second wafer.
Steps 804 through 810 may then be repeated a number of times, depending on the number of wafers in the wafer stack 20. The repetition of steps 804 through 810 consolidates and saves the first and second images captured for each wafer of the wafer stack 20, and uploads the first and second images to an external memory or server for an offline review. A person skilled in the art will appreciate that the first mode 800a of the review process 800 allows steps 804 through 810 to be performed automatically, without user intervention and without affecting throughput. The first mode 800a of the review process 800 allows production to continue while the user performs an offline review of the saved images. In addition, the first mode 800a of the review process 800 enhances the utilization and throughput of the system 10.
Second mode 800b of the review Process 800
In step 820 of the second mode 800b of the review process 800, a number of review images are captured at each of the defect image capture positions calculated in step 420. More specifically, a review brightfield image and a review darkfield image are captured at each defect image capture position calculated in step 420 by the review image capture device 60 shown in fig. 14. That is, a review brightfield image illuminated by the review brightfield illuminator 62 and a review darkfield image illuminated by the review darkfield illuminator 64 are captured of each defect found or detected by the 2D image processing process 600 in step 416. Each of the review images captured by the review image capture device 60 is preferably a color image.
A person skilled in the art will understand that the intensities of the brightfield illumination and the darkfield illumination used for capturing the review brightfield images and the review darkfield images, respectively, may be determined and varied as required. For example, the intensity of the illumination used for capturing the review images may be selected based on the type of wafer defects the user of the system 10 wishes to review, or based on the material of the wafer 12. The review images may also be captured under multiple combinations, and multiple intensity levels, of brightfield illumination and darkfield illumination as set by the user.
In step 822, the review images captured at each of the defect image capture positions calculated in step 420 are consolidated and saved. The consolidated and saved review images captured at each defect image capture position are then uploaded to an external memory or server for an offline review in step 824.
In step 826, the wafer 12 (i.e., the wafer 12 currently on the wafer table 16) is unloaded and a second wafer is loaded by the robotic arm 18 from the wafer stack 20 onto the wafer table 16. In step 828, each of steps 402 through 422 is repeated for the second wafer. The consolidated and saved first and second images of the defects detected on the second wafer are uploaded to an external memory or server for an offline review.
In the second mode 800b of the review process 800, steps 820 through 828 may be repeated a number of times, depending on the number of wafers in the wafer stack 20. The repetition of steps 820 through 828 consolidates and saves the review brightfield images and review darkfield images captured for each wafer 12 of the wafer stack 20, and uploads them to an external memory or server for an offline review.
The second mode 800b of the review process 800 allows production to continue while the user performs an offline review of the saved images. The second mode 800b of the review process 800 allows a number of images of each defect to be captured under multiple combinations of illumination for an offline review, without affecting the utilization and throughput of the machine.
Third mode 800c of the review Process 800
The third mode 800c of the review process 800 preferably utilizes manual input, more preferably input or commands of the user. In step 840, a first review brightfield image and a first review darkfield image are captured at a first defect image capture position. In step 842, the user manually inspects or reviews the captured first review brightfield image and first review darkfield image. Preferably, the first review brightfield image and the first review darkfield image are displayed on a display screen or monitor for facilitating visual inspection by the user. The user may inspect the defect under different combinations of illumination from the brightfield illuminator and the darkfield illuminator.
In step 844, the user may accept the defect, reject the defect, or reclassify the defect, at the first defect image capture position. Steps 840 through 844 are then repeated for each of the defect image capture positions calculated in step 420. The confirmed defects and their classifications are then consolidated and saved, as in step 846. The consolidated and saved confirmed defects and their classifications are then uploaded or transferred to an external memory or server in step 848. In the third mode 800c of the review process 800, the wafer 12 (i.e., the wafer 12 currently on the wafer table 16) is unloaded only after step 846 is completed. Accordingly, a person skilled in the art will appreciate that the third mode 800c of the review process 800 requires the user to be online, or to provide input, for inspecting and reviewing each wafer.
In step 848 of the review process 800, the wafer 12 (i.e., the wafer 12 currently on the wafer table 16) is unloaded and a second wafer is loaded by the robotic arm 18 from the wafer stack 20 onto the wafer table 16. Steps 840 through 848 may be repeated a number of times, depending on the number of wafers to be inspected (or the number of wafers in the wafer stack 20).
As will be apparent to a person skilled in the art from the foregoing disclosure, the first mode 800a and the second mode 800b of the review process 800 effect the uploading of the consolidated and saved captured images of as-yet unverified defects to an external memory or server. The first mode 800a and the second mode 800b represent automated review processes. The user may access the external memory or server for reviewing the captured images offline, as and when required. The first mode 800a and the second mode 800b enable continuous inspection of each wafer 12 of the wafer stack 20, with continuous image capture, consolidation, uploading and storage.
It should be understood by a person skilled in the art that three review modes, namely the first mode 800a, the second mode 800b and the third mode 800c, are described by way of example. Other review processes, or different permutations and combinations of the first mode 800a, the second mode 800b and the third mode 800c, may be applied by a person skilled in the art. In addition, each step of the first mode 800a, the second mode 800b and the third mode 800c may be modified or varied using techniques known in the art without departing from the scope of the present invention.
After the review process 800 is performed, the verified defects and their locations and classifications are sorted and stored in step 426. The verified defects and their locations and classifications may optionally be sorted and stored in an external database or memory space. The wafer map is also uploaded in step 426.
As previously described, each of the acquired brightfield images, DHA images and DLA images is compared to the corresponding golden reference or reference image for identifying or detecting defects on the wafer 12. The present invention provides a reference image generation process 900 (shown in fig. 18) that facilitates generating or producing the reference images. Those skilled in the art will appreciate that the reference image generation process 900 may also be used as a training process.
As previously described, the 2D bright field images, the 2D DHA images, and the 2D DLA images acquired during the 2D wafer scanning process 500 are preferably matched with their corresponding reference images generated by the reference image generation process 900.
The 2D image processing process 600 has been described as a preferred comparison process; however, for greater clarity, a summary of the matching between the working image and the reference image is provided below. First, sub-pixel alignment of the working image is performed using known reference features including, but not limited to, templates, traces, bumps, pads, and other unique patterns. Second, the reference intensity of the working image of the wafer 12 acquired at the preset image capture position is calculated. That is, a reference intensity for each pixel on each working image of the wafer 12 is determined. Statistical parameters of the reference intensity for each pixel on each working image of the wafer 12 are then calculated. An appropriate reference image is then selected for comparison or matching with the working image. The appropriate reference image is preferably selected from a plurality of reference images generated by the reference image generation process 900, the selection being based on the calculated statistical parameters.
The CPU is preferably programmed for the selection and extraction of an appropriate reference image for comparison or matching with the working image. More specifically, the CPU is preferably programmed to select the reference image for comparison with the working image based on the calculated statistical parameters. Preferably, the calculation and storage, by the reference image generation process 900, of the normal or geometric mean, standard deviation, and maximum and minimum intensities of each reference image (collectively referred to as statistical parameters) improves the speed and accuracy of selecting or extracting the appropriate reference image for comparison with the working image.
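The selection step just described can be sketched in a few lines of Python. This is a minimal illustration only: the nearest-mean rule and the field names (`name`, `mean`) are assumptions, since the text states only that the reference image is chosen on the basis of precomputed statistical parameters such as the mean, standard deviation, and maximum and minimum intensities.

```python
import statistics

def select_reference(working_pixels, references):
    # Mean intensity of the working image, computed over its pixels.
    work_mean = statistics.fmean(working_pixels)
    # Pick the reference whose precomputed mean is closest to it
    # (hypothetical rule; std-dev or min/max could be weighed in too).
    return min(references, key=lambda ref: abs(ref["mean"] - work_mean))

# Statistical parameters precomputed and stored by a process such as 900.
refs = [
    {"name": "ref_low", "mean": 80.0},
    {"name": "ref_mid", "mean": 128.0},
    {"name": "ref_high", "mean": 200.0},
]
chosen = select_reference([120, 130, 135, 125], refs)  # working mean 127.5
```

Precomputing and storing the per-reference statistics, as the text suggests, is what keeps this selection step cheap at inspection time.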
The corresponding data values for each pixel of the working image are then calculated. The data values include, for example, a normal or geometric mean, a standard deviation, and maximum and minimum intensities for each pixel of the working image. Each data value of a pixel of the working image is then referenced or checked against the corresponding data value of the corresponding pixel of the selected reference image.
The comparison of data values between pixels of the working image and pixels of the reference image enables the identification or detection of defects. Preferably, the user sets a predetermined threshold value. The difference in data values between the pixels of the working image and the pixels of the reference image is compared with the preset threshold value in a multiplicative, additive, or constant manner. If the difference between the data values of a pixel of the working image and the corresponding pixel of the reference image is greater than the preset threshold value, a defect is flagged.
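The pixel-wise comparison can be illustrated as follows. This is a sketch under stated assumptions: 8-bit grayscale frames, with the hypothetical parameters `add_thresh` and `mul_thresh` standing in for the user-set preset thresholds applied additively and multiplicatively.

```python
import numpy as np

def flag_defects(working, reference, add_thresh=30, mul_thresh=1.5):
    """Flag pixels whose intensity deviates from the reference by more
    than an additive offset or a multiplicative factor (parameter names
    are hypothetical; the text leaves the exact rule unspecified)."""
    work = working.astype(np.int32)
    ref = reference.astype(np.int32)
    # Additive rule: absolute intensity difference exceeds a fixed offset.
    additive = np.abs(work - ref) > add_thresh
    # Multiplicative rule: intensity ratio exceeds a fixed factor.
    ratio = np.maximum(work, 1) / np.maximum(ref, 1)
    multiplicative = (ratio > mul_thresh) | (ratio < 1.0 / mul_thresh)
    return additive | multiplicative

working = np.array([[100, 200], [50, 50]], dtype=np.uint8)
reference = np.array([[100, 100], [50, 200]], dtype=np.uint8)
mask = flag_defects(working, reference)  # boolean defect map
```

Loosening or tightening `add_thresh` and `mul_thresh` corresponds to the stringency adjustment discussed in the surrounding text.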
The predetermined threshold value may be changed as desired; preferably, the predetermined threshold value is changed to adjust the stringency of the method 400. In addition, the preset threshold value is preferably changed as needed according to the type of defect to be detected, the material of the wafer 12 under inspection, or the illumination conditions. In addition, the preset threshold value may be varied according to the needs of a customer or, more generally, of semiconductor manufacturers.
One preferred system 10 and one preferred method 400 for semiconductor wafer inspection are described above. Those skilled in the art will appreciate from the foregoing description that modifications may be made to the system 10 and method 400 without departing from the intended scope of the invention. For example, the steps associated with method 400, and the steps associated with processes 500, 600, 700, 750, 800, and 900 may be modified without departing from the scope of the claimed invention.
It is an object of the system 10 and method 400 of the present invention to enable accurate and cost-effective inspection of semiconductor components, such as wafers. The ability of the system 10 and method 400 to automate inspection of wafers while the wafers are in motion enhances the efficiency of wafer inspection. This is because no time is wasted in slowing down and stopping each wafer at the inspection position for image capture, or in accelerating the wafer away from the inspection position after image capture, as is required in existing wafer inspection systems. Knowledge of the image offset between multiple image captures facilitates processing of the captured images to inspect for possible defects therein. The offset for a particular set of images of the same wafer allows the software to accurately determine coordinates on the wafer, and hence the position of the wafer, across the frames. The offset is preferably determined by simultaneously reading encoder values for the X- and Y-axis displacements, and is used to calculate the coordinates of the defect or defects. In addition, comparing two images captured with two different imaging techniques at each inspection location facilitates more accurate wafer inspection.
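The encoder-based offset and coordinate computation described above might look like the following sketch. The encoder resolution, the pixel scale, and the function names are all illustrative assumptions, not values taken from the specification.

```python
# Hypothetical encoder resolution; real values depend on the XY table hardware.
COUNTS_PER_UM = 100  # encoder counts per micrometre of stage travel

def image_offset_um(enc_xy_first, enc_xy_second):
    """Offset between two captures of the same region, computed from the
    X/Y encoder values latched at each exposure (coarse compensation)."""
    dx = (enc_xy_second[0] - enc_xy_first[0]) / COUNTS_PER_UM
    dy = (enc_xy_second[1] - enc_xy_first[1]) / COUNTS_PER_UM
    return dx, dy

def defect_wafer_coords(pixel_xy, enc_xy, um_per_pixel=2.0):
    """Map a defect's pixel position in a captured frame to wafer
    coordinates using the encoder values latched for that frame."""
    x = enc_xy[0] / COUNTS_PER_UM + pixel_xy[0] * um_per_pixel
    y = enc_xy[1] / COUNTS_PER_UM + pixel_xy[1] * um_per_pixel
    return x, y

dx, dy = image_offset_um((0, 0), (1500, 0))  # stage moved 15 um in X
```

A finer, sub-pixel alignment step (as in claims 15 and 26) would then refine this coarse, encoder-derived compensation.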
Those skilled in the art will appreciate that the synchronization of the acquired images may be varied as desired. More specifically, synchronization may be adjusted to enhance the ability of the programmable controller to compensate for image offsets between acquired images. The system 10 and method 400 of the present invention facilitate accurate synchronization between the illumination provided and the corresponding exposure of the image capture device used for image capture, minimizing degradation of inspection quality.
The illumination used by the system 10 may comprise the full visible spectrum of light to enhance image quality. The illumination intensities, and combinations thereof, provided by the system 10 for image capture can be readily selected and varied as desired according to a number of factors including, but not limited to, the type of defect to be detected, the material of the wafer to be inspected, and the required stringency of inspection. The system 10 and method 400 provided by the present invention can also perform height measurements of 3D elements on the wafer 12, as well as analysis of the 3D profile image, while the wafer is in motion.
The system 10 of the present invention has an optical arrangement that does not require frequent spatial reconfiguration to accommodate changes in semiconductor wafer structure or characteristics. In addition, the system 10 uses tube lenses 36, 38, 60, 72 to facilitate reconfiguration and design of the system 10. The use of the tube lenses 36, 38, 60, 72 facilitates the introduction of optical elements and accessories into the system, more particularly between the objective lens 40 or objective lens arrangement and the tube lenses 36, 38, 60, 72.
The system 10 of the present invention includes vibration isolators 24 (collectively referred to as a stabilizer mechanism) for damping unwanted vibrations transmitted to the system 10. The vibration isolators 24 help to improve the quality of the images captured by the first image capture device 32, the second image capture device 34, the 3D profile camera 56, and the review image capture device 62, thereby improving the accuracy of defect detection. In addition, the XY table 22 of the system 10 enables precise displacement and alignment of the wafer 12 relative to the inspection position.
As discussed in the background, existing reference image creation or generation processes require manual selection of "good" wafers, resulting in relative inaccuracy and inconsistency of the generated reference images. Consequently, the quality of wafer inspection is adversely affected. The system 10 and method 400 of the present invention improve inspection quality by generating reference images without the need for manual (i.e., subjective) selection of a "good" wafer. The reference image generation process 900 allows different intensity threshold values to be applied at different locations on the wafer to account for non-linear illumination variations across the wafer 12. Thus, the method 400 facilitates a reduction in the detection of spurious or unwanted defects and ultimately improves the quality of wafer inspection.
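A per-location threshold map of the kind process 900 enables can be sketched as follows. The radial fall-off model and the parameters `base` and `edge_boost` are assumptions for illustration, not the patent's actual illumination compensation.

```python
import numpy as np

def positional_threshold_map(shape, base=30.0, edge_boost=15.0):
    """Build a per-pixel additive threshold map that is looser near the
    frame edges, where illumination commonly falls off (hypothetical
    model; the text states only that different thresholds apply at
    different wafer locations to absorb non-uniform illumination)."""
    h, w = shape
    ys = np.abs(np.linspace(-1.0, 1.0, h))[:, None]
    xs = np.abs(np.linspace(-1.0, 1.0, w))[None, :]
    radial = np.maximum(ys, xs)  # 0 at the centre, 1 at the border
    return base + edge_boost * radial

thr = positional_threshold_map((5, 5))  # tight at centre, loose at edges
```

Such a map would simply replace the scalar preset threshold in the pixel-wise comparison, so dim frame corners are not flagged as spurious defects.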
The present invention enables automatic defect monitoring by using an analytical model or software to compare a reference image with a captured image of a wafer of unknown quality. The present invention preferably enables automatic defect monitoring by using digital analysis on digital images, such as a working image and a reference image.
The present invention enables an automatic review mode without significantly affecting the production or inspection of wafers, and facilitates improved utilization of the system or machine. By contrast, existing devices or inspection systems typically provide only a manual review mode, requiring an operator to manually assess each defect while also taking into account a number of factors and parameters, such as the different illumination intensities to be used for viewing each defect.
In the foregoing manner, embodiments of the present invention describe a preferred system and a preferred method for inspecting semiconductor wafers and components thereof. While the preferred systems and methods address at least one of the problems faced by existing semiconductor inspection systems and methods identified in the background, those skilled in the art will appreciate that the present invention is not limited to the specific forms, solutions or arrangements of parts described in the foregoing embodiments. It will be apparent to those skilled in the art that numerous modifications and/or variations can be made to the present invention without departing from the spirit or scope of the invention.

Claims (26)

1. A method of inspecting a semiconductor wafer to capture images of the semiconductor wafer as the semiconductor wafer is displaced along a scanning motion path which successively locates each of a plurality of predetermined regions of the semiconductor wafer within an inspection area, the method comprising:
locating a first region of the plurality of predetermined regions of the semiconductor wafer within the inspection area as the semiconductor wafer is displaced along the scanning motion path;
performing a semiconductor wafer scanning process when a first region of the semiconductor wafer remains within the inspection area during displacement of the semiconductor wafer along the scanning motion path, the semiconductor wafer scanning process comprising:
during a first time interval:
(i) initiating exposure of a first image capture device comprising a first image sensor, (ii) capturing a first image of the first region of the semiconductor wafer at a first image capture location within the inspection area using the first image capture device under a first contrast illumination provided by a first flash or strobe, (iii) generating first image data using the first image capture device, and (iv) terminating exposure of the first image capture device after capturing the first image to end the first time interval;
during a second time interval beginning immediately after the end of the first time interval:
(v) initiating exposure of a second image capture device comprising a second image sensor, (vi) capturing a second image of the same first region of the semiconductor wafer at a second image capture location within the inspection area using the second image capture device under a second contrast illumination provided by a second flash or strobe, the first and second images being offset in that the semiconductor wafer is spatially displaced a predetermined distance between the location at which the first image was captured and the location at which the second image was captured, (vii) generating second image data using the second image capture device, and (viii) terminating exposure of the second image capture device after capturing the second image to end the second time interval; and
associating the first image and the second image by determining an image offset between the first image and the second image.
2. The method of inspecting a wafer of claim 1 wherein each of the plurality of predetermined regions of the semiconductor wafer includes at least one semiconductor wafer chip or a portion of a semiconductor wafer, wherein during the first time interval the first contrast illumination is reflected from the first region of the semiconductor wafer and then passes through an image objective, wherein during the second time interval the second contrast illumination is reflected from the first region of the semiconductor wafer and then passes through the image objective, and wherein the image objective is positioned between the first region of the semiconductor wafer and each of the first and second image capture devices.
3. The method of inspecting a wafer of claim 1 or 2 further comprising comparing defect locations on the first image with defect locations on the second image to provide a defect inspection result after correlating the first and second images.
4. The method of inspecting a wafer of claim 1 or 2 wherein the first contrast illumination is brightfield illumination and the first image is a brightfield image.
5. The method of inspecting a wafer of claim 4 wherein the second contrast illumination is dark field illumination and the second image is a dark field image.
6. The method of inspecting a wafer of claim 3 wherein the first contrast illumination and the second contrast illumination are both broadband illumination.
7. The method of inspecting a wafer of claim 3 wherein the first contrast illumination and the second contrast illumination are bright field illumination, dark field illumination or a combination of bright field illumination and dark field illumination and the first contrast illumination and the second contrast illumination are directed at an inspection location.
8. The method of inspecting a wafer of claim 7 wherein the first and second contrast illuminations are bright field illumination, dark field high angle illumination, dark field low angle illumination or any combination thereof and the first and second contrast illuminations are directed to an inspection position.
9. The method of inspecting a wafer of claim 8 wherein said bright field illumination, said dark field high angle illumination and said dark field low angle illumination are broadband illumination having equal wavelength spectra.
10. The method of inspecting a wafer of claim 8 wherein the bright field illumination is emitted by a flash lamp.
11. The method of inspecting a wafer of claim 8 wherein the bright field illumination is white light illumination.
12. The method of inspecting a wafer of claim 3, wherein if the location of a defect on the first image coincides with the location of a defect on the second image, the defect detection result is an accurate result.
13. The method of inspecting a wafer of claim 3 wherein if there is no coincident defect location on the first image and the second image, the defect is detected as a false detection.
14. The method of inspecting a wafer of claim 1, wherein determining an image offset between the first image and the second image comprises:
retrieving XY encoder values representing first and second semiconductor wafer positions, respectively, the first and second semiconductor wafer positions corresponding to the capture of the first image and the capture of the second image; and
calculating a coarse compensation between the first image and the second image based on the retrieved XY encoder values.
15. The method of inspecting a wafer of claim 14 wherein determining an image offset between the first image and the second image further comprises calculating a final compensation between the first image and the second image by sub-pixel alignment of the first image and the second image.
16. The method of inspecting a wafer of claim 1, further comprising:
positioning a second region when the semiconductor wafer is displaced along the scanning motion path, the second region being located in a plurality of predetermined regions of the semiconductor wafer within the inspection area;
while a second region of the semiconductor wafer remains within the inspection area during displacement of the semiconductor wafer along the scanning motion path, repeating the semiconductor wafer scanning process with respect to the second region of the semiconductor wafer to acquire:
a first image of the second region of the semiconductor wafer, the first image being captured by the first image capture device under the first contrast illumination while the semiconductor wafer is located at a first image capture location within the inspection area; and
a second image of the second region of the semiconductor wafer, the second image being captured by the second image capture device under the second contrast illumination while the semiconductor wafer is located at a second image capture location within the inspection area,
characterized in that the time difference between (a) the first region of the semiconductor wafer being positioned within the inspection area for capturing the first image of the first region of the semiconductor wafer, and (b) the second region of the semiconductor wafer being positioned within the inspection area for capturing the first image of the second region of the semiconductor wafer, is equal to (i) the time required for exposing the first image capture device and capturing the first image of the first region of the semiconductor wafer, plus (ii) the time required for converting the first image of the first region of the semiconductor wafer into a first image signal and for transmitting the first image signal to a database or storage system.
17. A system for inspecting a semiconductor wafer to capture images of the semiconductor wafer as the semiconductor wafer is displaced along a scanning motion path which successively locates each of a plurality of predetermined regions of the semiconductor wafer within an inspection area, the system comprising:
a first image capture module comprising a first image sensor for (a) capturing a first image of a first region of the semiconductor wafer under a flash or strobe of a first contrast illumination during a first exposure interval while the first region of the semiconductor wafer remains within the inspection area during displacement of the semiconductor wafer along the scanning motion path, and (b) generating first image data corresponding to the first image;
a second image capture module comprising a second image sensor for (c) capturing, while said first region of said semiconductor wafer remains within said inspection area during displacement of said semiconductor wafer along said scanning motion path, during a second exposure interval separate from the first exposure interval and beginning immediately after the end of the first exposure interval, a second image of the first region of the semiconductor wafer under a flash or strobe of a second contrast illumination immediately after capture of the first image of the first region of the semiconductor wafer, and (d) generating second image data corresponding to said second image, said first image and said second image being offset in that said semiconductor wafer is spatially displaced a predetermined distance between the location at which said first image is captured and the location at which said second image is captured; and
a defect location comparison module coupled to the first and second image capture modules, the defect location comparison module correlating the first image and the second image to the spatial displacement of the semiconductor wafer by determining the image offset, comparing a defect location found on the first image with another defect location found on the second image, and providing a defect detection result therefrom.
18. The system for inspecting a wafer of claim 17 wherein each of the plurality of predetermined regions of the semiconductor wafer includes at least one semiconductor wafer chip or a portion of a semiconductor wafer, and wherein the system for inspecting a wafer further includes an image objective lens positioned between the first region of the semiconductor wafer and each of the first and second image capture modules through which the first and second contrast illuminations reflected from the first region of the semiconductor wafer travel to the first and second image capture modules, respectively.
19. The system for inspecting a wafer of claim 17 or 18 wherein the first contrast illumination and the second contrast illumination are both broadband illumination.
20. The system for inspecting a wafer of claim 17 or 18 further comprising an illumination device for selectively directing a supply of bright field illumination, dark field illumination, and combinations thereof, as each of the first contrast illumination and the second contrast illumination at the inspection location.
21. The system for inspecting a wafer of claim 20 wherein said illumination apparatus includes an illumination configurator that selectively controls the provision of bright field illumination, high angle dark field illumination, low angle dark field illumination, and any combination thereof, as each of said first and second contrast illuminations.
22. The system for inspecting a wafer of claim 17 or 18, further comprising an output module coupled to the defect location comparison module, the output module sorting the semiconductor wafer according to the defect detection result provided by the defect location comparison module.
23. The system for inspecting a wafer of claim 22 wherein when the location of a defect found on said first image is coincident with the location of a defect found on said second image, then the defect detection result is an accurate result and wherein otherwise said defect detection result is an erroneous result.
24. The system for inspecting wafers of claim 23 wherein said output module includes a first output node and a second output node and wherein a semiconductor wafer is deposited on said first output node when said inspection result is an accurate result and on said second output node when said inspection result is an erroneous result.
25. The system for inspecting a wafer of claim 17 or 18 wherein the defect location comparison module is configured to determine the image offset to correlate the first and second images to the displacement of the semiconductor wafer by: retrieving XY encoder values corresponding to the first image capture location and the second image capture location; and calculating a coarse compensation between the first image and the second image based on the retrieved XY encoder values.
26. The system for inspecting a wafer of claim 25 wherein the defect location comparison module is further configured to determine the image shift to correlate the first image and the second image with the displacement of the semiconductor wafer by calculating a final compensation, the final compensation between the first image and the second image being calculated by sub-pixel alignment of the first image and the second image.
HK15101274.8A 2009-01-13 2015-02-05 System and method for inspecting a wafer HK1201982B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200900229-6A SG163442A1 (en) 2009-01-13 2009-01-13 System and method for inspecting a wafer
SG200900229-6 2009-01-13

Publications (2)

Publication Number Publication Date
HK1201982A1 HK1201982A1 (en) 2015-09-11
HK1201982B true HK1201982B (en) 2018-03-23
