
HK1189941B - System and method for inspecting a wafer - Google Patents


Info

Publication number
HK1189941B
HK1189941B (granted from application HK14103020.2A)
Authority
HK
Hong Kong
Prior art keywords
image
illumination
wafer
images
thin line
Prior art date
Application number
HK14103020.2A
Other languages
Chinese (zh)
Other versions
HK1189941A (en)
Inventor
阿杰亚拉里.阿曼努拉
林靖
葛汉成
黄国荣
Original Assignee
联达科技设备私人有限公司
Priority date
Filing date
Publication date
Application filed by 联达科技设备私人有限公司 filed Critical 联达科技设备私人有限公司
Publication of HK1189941A publication Critical patent/HK1189941A/en
Publication of HK1189941B publication Critical patent/HK1189941B/en


Abstract

A method and a system for inspecting a wafer. The system comprises an optical inspection head, a wafer table, a wafer stack module, an XY table and vibration isolators. The optical inspection head comprises a number of illuminators, image capture devices, objective lenses and other optical components. The system and method enable the capture of brightfield images, darkfield images, 3D profile images and review images. Captured images are converted into image signals and transmitted to a programmable controller for processing. Inspection is performed while the wafer is in motion. Captured images are compared with reference images to detect defects on the wafer. The present invention also provides an exemplary reference creation process for creating reference images and an exemplary image inspection process. The reference image creation process is an automated process.

Description

System and method for inspecting wafers
Technical Field
The present invention relates to a wafer inspection process. More particularly, the present invention relates to an automated system and method for inspecting semiconductor components.
Background
It is increasingly important in the semiconductor industry to ensure that semiconductor components, such as semiconductor wafers and chips, maintain consistently high quality during their production. Semiconductor wafer manufacturing techniques have been improved to incorporate an ever increasing number of features onto a smaller surface area of a semiconductor wafer. Accordingly, the lithographic processes employed in semiconductor wafer fabrication have become more sophisticated, allowing ever more features to be incorporated onto smaller surface areas of the semiconductor wafer (i.e., higher performance of the semiconductor wafer). As a result, the size of potential defects on semiconductor wafers is typically in the micron to submicron range.
It is apparent that manufacturers of semiconductor wafers are increasingly demanding improvements in semiconductor wafer quality control and inspection procedures to ensure that consistently high quality semiconductor wafers are produced. Semiconductor wafers are routinely inspected for defects such as surface particles, imperfections, and other irregularities. These defects may affect the final performance of the semiconductor wafer. Therefore, it is crucial to identify and exclude defective semiconductor wafers during production.
Various improvements have been made in semiconductor inspection systems and processes. For example, higher resolution imaging systems, computers with faster computational speeds, and mechanical processing systems with higher precision have been put into use. Further, semiconductor wafer inspection systems, methods and techniques have historically applied at least one of bright field illumination, dark field illumination and three dimensional filtering techniques.
In bright field imaging, particles on the semiconductor wafer scatter light away from the collection aperture of the image capture device, reducing the energy returned to the image capture device. When a particle is smaller than the optical point spread function of the lens or the digitized pixel, the bright field energy from the area surrounding the particle contributes a significant amount of energy relative to the particle, making the particle difficult to detect. In addition, the very slight energy attenuation caused by small particles is often masked by reflectivity changes in the surrounding area, increasing the rate of errors when inspecting for defects. To overcome these phenomena, semiconductor inspection systems are equipped with high-end cameras of greater resolution. Nevertheless, bright field imaging generally provides better pixel contrast, which is advantageous for determining defect size and for detecting dark-spot defects.
Dark field illumination and its advantages are well known in the art, and dark field imaging has been applied in some existing semiconductor wafer inspection systems. Dark field imaging generally relies on the angle at which light is incident on the inspected object. At low angles relative to the horizontal plane of the object (e.g., 3 to 30 degrees), dark field imaging typically produces a dark image except at imperfections such as surface particles, defects, and other irregularities. A particular use of dark field imaging is to highlight defects smaller than the resolving power of the lens used for bright field imaging. At higher angles relative to the horizontal (e.g., 30 to 85 degrees), dark field imaging typically produces images with better contrast than bright field images. Such high angle dark field imaging is particularly useful for enhancing the contrast of surface irregularities on mirror finishes or transparent objects. In addition, high angle dark field imaging enhances the imaging of inclined surfaces.
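The low angle/high angle distinction maps directly onto the angle ranges quoted above. As a minimal illustration (in Python; the function name and the treatment of out-of-range angles are our own choices, not part of any cited system):

```python
def darkfield_type(angle_deg):
    """Classify a dark field illumination angle, measured in degrees
    from the horizontal plane of the inspected object, using the
    ranges quoted above (3-30 degrees low angle, 30-85 high angle)."""
    if 3 <= angle_deg < 30:
        return "low angle"
    if 30 <= angle_deg <= 85:
        return "high angle"
    return "outside typical dark field range"
```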
The light reflectivity of a semiconductor wafer typically has a significant effect on the quality of the images obtained by both bright field and dark field imaging. Both the micro- and macro-structure of a semiconductor wafer can affect its light reflectivity. Generally, the amount of light reflected by a semiconductor wafer is affected by the direction or angle of the incident light, the viewing angle, and the light reflectivity of the wafer surface, which in turn depends on the wavelength of the incident light and the material composition of the semiconductor wafer.
It is often difficult to control the light reflectivity of a semiconductor wafer presented for inspection, because the semiconductor wafer may be composed of multiple layers of material. Each material layer can transmit light of a given wavelength differently, for example at a different speed. Furthermore, the layers may differ in light transmissivity and even in light reflectivity. Accordingly, it will be apparent to those skilled in the art that using a single-wavelength or narrow-band light source will generally adversely affect the quality of the acquired image. The frequent adjustments required with a single-wavelength or narrow-band source call for multiple spatial filters or wavelength tuners, which are often inconvenient. To alleviate these problems, it is important to use a broadband light source (i.e., a light source having a wide range of wavelengths), such as one with a wavelength range between 300 nm and 1000 nm.
Broadband light sources are critical for achieving high quality images and for handling wafers with a wide range of surface reflectivities. In addition, the defect inspection capability of a wafer inspection system can generally be enhanced by using light sources at multiple angles, such as bright field and dark field illumination. Existing wafer inspection systems on the market do not utilize light sources at multiple angles together with the full broadband wavelength range.
Currently available wafer inspection systems or devices typically use one of the following methods to achieve multiple responses during wafer inspection:
(1) Multiple Image Capture Devices with multiple illuminations (MICD)
The MICD method uses multiple image capture devices and multiple illuminations. It is based on the principle of segmenting the wavelength spectrum into narrow bands and assigning each segment to a separate illumination. The MICD method is applied at system design time, with each image capture device paired with a corresponding illumination (i.e., illumination source) that cooperates with a corresponding optical accessory, such as a spatial frequency filter or a beam splitter with a special coating. For example, bright field illumination may be restricted to a laser with a wavelength of 650 nm to 700 nm. Experience shows that the MICD method has several disadvantages, such as poor image quality and design rigidity. The design rigidity arises because changing the wavelength of a single illumination typically requires reconfiguring the optical structure of the entire wafer inspection system. Furthermore, with the MICD method a single image capture device generally cannot capture illuminations of varying wavelengths without compromising the quality of the captured image.
(2) Single Image Capture Device with multiple illuminations (SICD)
The SICD method uses a single image capture device to capture multiple illuminations with segmented or broadband wavelengths. However, it cannot obtain multiple illumination responses simultaneously while the wafer is in motion; in other words, the SICD method allows only a single illumination response while the wafer is moving. To realize multiple illumination responses, the SICD method requires image capture while the wafer is held stationary, thereby reducing the throughput of the wafer inspection system.
Semiconductor wafer inspection systems employing simultaneous, independent, dynamic image capture using broadband bright field and dark field (or, more generally, multiple) illuminations together with multiple image capture devices are currently unavailable, owing to a relative lack of understanding of how to practically implement such systems and exploit their advantages. Existing semiconductor wafer inspection systems employ either MICD or SICD, as previously described. Devices employing MICD cannot use a broad frequency band and suffer from poor image quality and inflexible system setup. Devices using SICD, on the other hand, suffer from reduced system throughput and an inability to obtain dynamically synchronized multiple illumination responses.
A typical prior art optical inspection system for semiconductor wafers, disclosed in U.S. patent No. 5822055 (KLA1), uses both bright field illumination and dark field illumination. One embodiment of the optical inspection system disclosed in KLA1 employs the MICD method described above. It uses multiple cameras to capture respective bright field and dark field images of a semiconductor wafer. The captured bright field and dark field images are then processed individually or together to inspect for defects on the semiconductor wafer. The optical inspection system of KLA1 applies the respective bright field and dark field illuminations while capturing the bright field and dark field images. KLA1 achieves simultaneous image capture by segmenting the illumination wavelength spectrum and employing narrow band illumination sources and spatial frequency filters. In the KLA1 optical system, one camera is set up to receive dark field images using a narrow band laser and a spatial frequency filter, while another camera is set up to receive the remaining wavelength spectrum as bright field illumination through a specially coated beam splitter. The main disadvantage of the optical inspection system disclosed by KLA1 is that, because of the separation of the wavelength spectrum, it is not suitable for imaging different semiconductor wafers whose surface reflectivities vary widely. The cameras are closely coupled to their respective illuminations to enhance the inspection of certain wafer types. For example, wafers with a carbon coating on the front side show poor reflection properties at certain illumination angles, such as when only bright field illumination is used; inspecting certain defects on such wafers requires a combination of bright field and high angle dark field illumination.
Accordingly, the optical inspection system of KLA1 requires a large number of light or illumination sources and filters for performing multiple inspections (alternatively, multiple scans, which reduce system throughput) to capture multiple bright field and dark field images.
Other typical existing optical inspection systems that utilize both bright field and dark field imaging modes are disclosed in U.S. patent No. 6826298 (AUGTECH1) and U.S. patent No. 6937753 (AUGTECH2). The optical inspection systems of AUGTECH1 and AUGTECH2 perform dark field imaging using multiple lasers for low angle dark field imaging and a fiber optic ring lamp for high angle dark field imaging. In addition, these systems use a single camera sensor and therefore belong to the SICD method described earlier. Accordingly, inspection of semiconductor wafers in AUGTECH1 and AUGTECH2 is accomplished by bright field imaging or dark field imaging, or by a combination of the two in which each is performed only after the other has completed. The inspection systems of AUGTECH1 and AUGTECH2 are not capable of simultaneous, dynamic, or independent bright field and dark field imaging while the wafer is in motion. Each semiconductor wafer therefore requires multiple scans to complete its inspection, resulting in reduced production throughput and increased resource usage.
In addition, some existing optical inspection systems use golden references, or reference images, for comparison with newly captured semiconductor wafer images. Obtaining a reference image typically requires capturing a number of images of known or manually selected "good" semiconductor wafers and then applying statistical formulas or techniques. This approach has the disadvantage that the manual selection of "good" semiconductor wafers is inaccurate or inconsistent. The use of such reference images by optical inspection systems often results in semiconductor wafers being falsely judged defective because of inaccuracies or inconsistencies in the reference images. As the circuit geometry of semiconductor wafers grows more complex, reliance on the manual selection of "good" semiconductor wafers for obtaining reference images is increasingly at odds with the ever higher quality standards of the semiconductor inspection industry.
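The comparison of a captured image against a reference image can be sketched as a simple per-pixel difference threshold (an illustrative Python/NumPy fragment; the gray-level threshold of 20 is an assumed value, not one taken from any cited system):

```python
import numpy as np

def detect_defects(image, reference, threshold=20):
    """Return a boolean defect map: True where the captured image
    deviates from the golden reference by more than `threshold`
    gray levels. Assumes the two images are already registered."""
    diff = np.abs(image.astype(np.int32) - reference.astype(np.int32))
    return diff > threshold
```

In practice the captured image must first be registered against the reference, and the threshold chosen per illumination mode, since bright field and dark field images have very different contrast statistics.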
Obtaining a golden reference image involves a number of statistical techniques and calculations. Most of these statistical techniques are very common and each has its own advantages. In existing equipment, the mean (or average) together with the standard deviation is typically used to calculate each golden reference pixel. This approach works well when the good pixels are known in advance; otherwise, any defective or noisy pixels will interfere with and distort the final mean of the reference pixels. Another approach uses the median, which reduces the interference from noisy pixels but does not fundamentally eliminate their effect. All existing devices attempt to reduce errors by applying different statistical techniques, such as the mean or the median, but none provides a specific mechanism for eliminating the errors, namely, removing the pixels that corrupt the final reference pixel value.
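A robust variant of the statistics discussed above, which explicitly removes outlier pixels before averaging rather than merely damping them, can be sketched as follows (an illustrative Python/NumPy fragment; the use of the median absolute deviation and the 3.0 rejection factor are our own choices, not values from any cited patent):

```python
import numpy as np

def golden_reference(images, reject_factor=3.0):
    """Build a golden reference image from a stack of aligned training
    images of shape (n_images, height, width). Pixels that deviate
    strongly from the per-location median are treated as defects or
    noise and excluded before the final average is taken."""
    stack = np.asarray(images, dtype=np.float64)
    med = np.median(stack, axis=0)                      # robust centre
    mad = np.median(np.abs(stack - med), axis=0)        # robust spread
    thresh = reject_factor * np.maximum(1.4826 * mad, 1e-6)
    inliers = np.abs(stack - med) <= thresh             # outlier mask
    counts = inliers.sum(axis=0)
    # Average only the surviving pixels; fall back to the median where
    # every sample at a location was rejected.
    mean_in = (stack * inliers).sum(axis=0) / np.maximum(counts, 1)
    return np.where(counts > 0, mean_in, med)
```

Unlike a plain mean, a defective pixel in one training image is discarded outright and cannot pull the reference value toward itself.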
U.S. patent No. 6324298 (AUGTECH3) discloses a training method for use in semiconductor wafer inspection to generate a golden reference, or reference image. The method disclosed in AUGTECH3 requires "known good quality" or "defect free" wafers, the selection of which is performed manually by the user. Statistical formulas or techniques are then applied to obtain the reference image. As such, accurate and consistent selection of "good quality" wafers is critical to accurate and consistent semiconductor inspection quality. In addition, AUGTECH3 uses the mean and standard deviation to calculate each pixel of the reference image, so the presence of any defective pixels will result in inaccurate reference pixels. Such defective pixels, caused by foreign matter or other imperfections, confound the statistical calculations and lead to inaccuracies in the reference pixels. As will be apparent to those skilled in the art, the method of AUGTECH3 is prone to inaccuracies, inconsistencies, and errors in semiconductor wafer inspection.
In addition, the optical inspection system disclosed in AUGTECH3 uses a flash or strobe lamp to illuminate the semiconductor wafer. One skilled in the art will recognize that inconsistencies between different flashes or strobes arise from a number of factors, including but not limited to temperature differences, non-uniformity of the electronics, and varying flash intensities. Such differences and inconsistencies are present even for "good" semiconductor wafers. If the system does not account for these strobe-induced differences, they will degrade the quality of the golden reference image. Furthermore, illumination intensity and uniformity vary across the surface of the semiconductor wafer due to various factors including, but not limited to, wafer flatness, position on the surface, and light reflectivity. Because variations in flash intensity and strobe characteristics are not accounted for, any reference images generated by the above methods may be unreliable and inaccurate when used to compare images captured at different locations on a semiconductor wafer.
Variations in product specifications, such as semiconductor wafer size, complexity, surface reflectivity, and quality inspection standards, are common in the semiconductor industry. Accordingly, there is a need for a semiconductor wafer inspection system and method that can accommodate such variations in product specifications. However, existing semiconductor wafer inspection systems and methods are generally not capable of handling such product specification changes satisfactorily, especially in view of the ever higher quality standards of the semiconductor industry.
For example, typical existing semiconductor wafer inspection systems use conventional integrated optical assemblies whose components, such as cameras, illuminators, filters, polarizers, mirrors, and lenses, occupy fixed spatial positions. Introducing or removing an optical component typically requires rearrangement and redesign of the entire optical assembly. Accordingly, such semiconductor wafer inspection systems have inflexible designs or configurations and require relatively long changeover times for modification. In addition, the distance between the objective lens of a conventional optical assembly and the semiconductor wafer placed for inspection is generally too short to easily introduce fiber optic illumination at the various angles needed for dark field illumination.
There are many other existing semiconductor wafer inspection systems and methods. However, given the technical knowledge and know-how available today, present semiconductor wafer inspection systems cannot use bright field and dark field imaging simultaneously to inspect a wafer while it is in motion, while still retaining flexibility of design and configuration. There remains a need for a semiconductor wafer inspection system and method that enables resource efficient, flexible, accurate, and fast inspection of semiconductor wafers, particularly in view of the ever increasing complexity of semiconductor wafer circuitry and the ever higher quality standards of the semiconductor industry.
Disclosure of Invention
There is a lack of semiconductor wafer inspection systems that can independently apply bright field and dark field imaging simultaneously to perform inspection while the semiconductor wafer is in motion. In addition, the internal components of a semiconductor wafer inspection system, such as illuminators, cameras, objective lenses, filters, and mirrors, need to be flexible and adjustable in their spatial configuration. In view of increasingly complex semiconductor wafer circuitry and the ever higher quality standards of the semiconductor industry, the accuracy and consistency of semiconductor wafer inspection is of increasing importance. Obtaining a golden reference, or reference image, for comparison with captured images of a semiconductor wafer currently requires manual selection of a "good" semiconductor wafer. Such manual selection leads to inaccuracies and inconsistencies in the obtained reference images, which in turn manifest themselves in subsequent inspection of the semiconductor wafer. Accordingly, there is a need for an improved training method or process for obtaining a reference image to be compared with a captured image of a semiconductor wafer. The present invention seeks to solve at least one of the above problems.
Embodiments of the present invention provide an inspection system and method for inspecting semiconductor components, including but not limited to semiconductor wafers, chips, LED integrated circuit chips, and solar silicon wafers. The inspection system is designed to perform two-dimensional (2D) and three-dimensional (3D) wafer inspection. The inspection system is further designed to perform defect inspection.
2D wafer inspection is facilitated by a 2D optical assembly that includes at least two image capture devices. The 2D wafer inspection utilizes at least two different contrast illuminations for capturing images under the respective contrast illuminations. 2D wafer inspection is performed while the wafer is in motion and is completed in a single pass. 3D wafer inspection is facilitated by a 3D optical assembly that includes at least one image capture device and at least one thin line illuminator. The thin line illuminator, which is a laser or a broadband illumination source or both, is directed at the semiconductor wafer while the wafer is in motion in order to capture a 3D image of it. Defect inspection performed by the inspection system is facilitated by a defect inspection optical assembly.
In accordance with a first embodiment of the present invention, an inspection system is disclosed that includes an illumination mechanism for providing a first broadband illumination and a second broadband illumination, a first image capture device for receiving at least one of the first broadband illumination and the second broadband illumination reflected from a wafer, and a second image capture device for receiving at least one of the first broadband illumination and the second broadband illumination reflected from the wafer. The first image capture device and the second image capture device are configured to receive at least one of the first broadband illumination and the second broadband illumination in sequence, to capture respective first and second images of the wafer. The wafer is spatially displaced between the capture of the first image and the capture of the second image.
In accordance with a second embodiment of the present invention, a system for inspecting semiconductor components is disclosed that includes an illumination mechanism for providing a broadband first contrast illumination and a broadband second contrast illumination, and a plurality of image capture devices, each capable of receiving each of the first contrast illumination and the second contrast illumination reflected from the wafer. The plurality of image capture devices are configured to sequentially receive one of the broadband first contrast illumination and the broadband second contrast illumination, to capture a first contrast image and a second contrast image, respectively, of the wafer. The first contrast image and the second contrast image are captured while the wafer is moving.
According to a third embodiment of the present invention, an inspection system is disclosed that includes a first image capture device for receiving at least one of a broadband first contrast illumination and a broadband second contrast illumination, and a second image capture device for receiving at least one of the broadband first contrast illumination and the broadband second contrast illumination. Reception of the broadband first contrast illumination and the broadband second contrast illumination by the first and second image capture devices results in the capture of a first contrast image and a second contrast image of the wafer, respectively.
In accordance with a fourth embodiment of the present invention, an inspection system is disclosed that includes an illumination mechanism for providing a first broadband illumination, a first image capture device for receiving the first broadband illumination reflected from a wafer to capture a first image of the wafer, and a second image capture device for receiving the first broadband illumination reflected from the wafer to capture a second image of the wafer. The wafer is spatially displaced between the capture of the first image and the capture of the second image. The spatial displacement is then calculated from the encoder values associated with each of the first and second images.
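The displacement calculation of the fourth embodiment can be sketched as follows (an illustrative Python fragment; the encoder resolution of 50 counts per micron and the tuple representation of the latched values are assumptions for illustration only):

```python
# Encoder counts are latched by the XY displacement table at the
# instant each image is triggered; the difference between the two
# latched values gives the wafer's displacement between captures.
COUNTS_PER_MICRON = 50  # assumed encoder resolution

def displacement_um(enc_first, enc_second, counts_per_um=COUNTS_PER_MICRON):
    """Return (dx, dy) in microns between the encoder values
    associated with the first and second image captures."""
    dx = (enc_second[0] - enc_first[0]) / counts_per_um
    dy = (enc_second[1] - enc_first[1]) / counts_per_um
    return dx, dy
```

The resulting (dx, dy) offset can then be used to register the second image against the first before the two images are compared.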
Drawings
Preferred embodiments of the present invention will hereinafter be described in conjunction with the appended drawings, wherein
FIG. 1 shows a partial plan view of a preferred system for inspecting wafers in accordance with a preferred embodiment of the present invention;
FIG. 2 shows a partial isometric view of the system of FIG. 1;
FIG. 3 shows a partially exploded isometric view of the optical inspection head of the system of FIG. 1, viewed along direction "A" of FIG. 2;
FIG. 4 shows a partially exploded isometric view of the automated wafer table of the system of FIG. 1, viewed along direction "B" of FIG. 2;
FIG. 5 shows a partially exploded isometric view of the automated wafer loader/unloader of the system of FIG. 1, viewed along direction "C" of FIG. 2;
FIG. 6 shows a partially exploded isometric view of the wafer stack module of the system of FIG. 1, viewed along direction "D" of FIG. 2;
FIG. 7 shows a partial isometric view of the optical inspection head of the system of FIG. 1;
FIG. 8 shows a partial front view of the optical inspection head of the system of FIG. 1;
FIG. 9 shows the ray paths of the system of FIG. 1 between the brightfield illuminator, the low angle darkfield illuminator, the high angle darkfield illuminator, the first image capture device, and the second image capture device;
FIG. 10 shows a preferred first ray path of the brightfield illumination provided by the brightfield illuminator of FIG. 9;
FIG. 11 shows a preferred second ray path of the high angle darkfield illumination provided by the high angle darkfield illuminator of FIG. 9;
FIG. 12 shows a preferred third ray path of the low angle darkfield illumination provided by the low angle darkfield illuminator of FIG. 9;
FIG. 13 shows the illumination ray path between the thin line illuminator and the 3D image capture device or camera of the system of FIG. 1;
FIG. 14 shows the illumination ray paths between the review brightfield illuminator, the review darkfield illuminator, and the review image capture device of the system of FIG. 1;
FIG. 15 shows a preferred fourth ray path of the brightfield illumination between the review brightfield illuminator and the review image capture device of FIG. 14;
FIG. 16 shows a preferred fifth ray path of the darkfield illumination between the review darkfield illuminator and the review image capture device of FIG. 14;
FIG. 17 is a flow chart of a preferred method for inspecting a wafer provided by the present invention;
FIG. 18 is a flowchart of a preferred reference image generation process for generating a reference image for comparison with an image acquired during execution of the method of FIG. 17;
FIG. 19 is a process flow diagram of a preferred two-dimensional wafer scanning process, with timing offsets, performed during the method of FIG. 17;
FIG. 20 shows a table of lighting configurations selected by the lighting configurator of the system of FIG. 1;
FIG. 21 shows a pulse waveform diagram for the capture of a first image by the first image capture device and a second image by the second image capture device;
FIG. 22a shows a first image captured by the first image capture device of FIG. 1;
FIG. 22b shows a second image captured by the second image capture device of FIG. 1;
FIG. 22c shows the combination of the first image of FIG. 22a and the second image of FIG. 22b, demonstrating the image offset caused by capturing the first image and the second image while the wafer is moving;
FIG. 23 is a process flow diagram of a preferred two-dimensional image processing process for performing the steps of the method of FIG. 17;
FIG. 24 is a process flow diagram of a preferred three-dimensional image processing process for performing the steps of the method of FIG. 17;
FIG. 25 shows a preferred illumination ray path between the thin line illuminator and the 3D image capture device or camera of the system of FIG. 1;
FIG. 26 is a process flow diagram of a second preferred three-dimensional wafer scanning process for performing the steps of the method of FIG. 17;
FIG. 27 is a process flow diagram of a preferred review process for performing the steps of the method of FIG. 17.
Detailed Description
Inspection of semiconductor components, such as semiconductor wafers and dies, is an increasingly important step in semiconductor processing and manufacturing. As the complexity of circuits on semiconductor wafers increases and as quality standards for semiconductor wafers become ever more stringent, there is an increasing need for improved inspection systems and inspection methods for semiconductor wafers.
Current semiconductor wafer inspection systems and methods cannot produce both bright field images and dark field images for dynamic inspection of semiconductor wafers, and do not provide flexibility of configuration and design. Furthermore, the components of a semiconductor wafer inspection system, such as its illuminators, cameras, objective lenses, filters, and mirrors, need flexible and adjustable relative spatial configurations. Owing to the increasingly complex circuitry on semiconductor wafers and the ever higher quality standards set in the semiconductor industry, accuracy and consistency in semiconductor wafer inspection are becoming increasingly important. Golden reference images are generated for comparison with captured images of the semiconductor wafer, which currently requires manual selection of a "good" semiconductor wafer. Such manual selection can result in inaccuracies and inconsistencies in the generated reference images, thereby affecting the results of the semiconductor wafer inspection. Accordingly, there is a need for an improved training method or process for generating a reference image for subsequent comparison with a captured image of a semiconductor wafer.
Embodiments of the present invention provide exemplary systems and methods for inspecting semiconductor devices to address at least one of the above identified problems.
For purposes of brevity and clarity, the following description of specific embodiments of the present invention is limited to systems and methods for semiconductor wafer inspection. Those skilled in the art will appreciate that this is not intended to exclude application of the invention in other areas that share similar operational, functional or performance characteristics with the embodiments described. For example, the systems and methods provided by embodiments of the present invention can also be used for inspection of other semiconductor components, including but not limited to semiconductor dies, LED chips and solar wafers.
A preferred system 10 for inspecting semiconductor wafers 12, according to a first embodiment of the present invention, is shown in fig. 1 and 2; the system 10 may also be used for inspecting other semiconductor devices or components. The system 10 includes an optical inspection head 14 (shown in fig. 3), a wafer transport table or wafer chuck 16 (shown in fig. 4), an automated wafer handler 18 (shown in fig. 5), a wafer stacking module 20 or film frame cassette loader (shown in fig. 6), an XY displacement table 22, and at least four vibration isolators 24 (shown in fig. 1 and 2).
The optical inspection head 14, shown in fig. 7 and 8, comprises a plurality of illuminators and image capture devices. In particular, the optical inspection head 14 includes a brightfield illuminator 26, a low angle darkfield illuminator 28 and a high angle darkfield illuminator 30. Those skilled in the art will appreciate that additional darkfield illuminators may be incorporated into the system 10, and further that the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 may be integrated into a single darkfield illuminator that can be flexibly positioned as desired.
Bright field illumination, or bright field light, is provided or emitted by the bright field illuminator 26, also referred to as a bright field illumination source or bright field emitter. The bright field illuminator 26 is, for example, a flash lamp or a white light emitting diode. Preferably, the bright field illuminator 26 provides broadband bright field illumination substantially comprising wavelengths between 300nm and 1000nm. It will be understood by those skilled in the art that the bright field illumination may have other wavelengths and optical characteristics as desired.
In particular, the brightfield illuminator 26 includes a first optical fiber (not shown) through which the brightfield illumination passes before being emitted from the brightfield illuminator 26. The first optical fiber preferably acts as a waveguide for the transmission of the brightfield illumination; more particularly, the first optical fiber directs the brightfield illumination emitted from the brightfield illuminator 26.
The low angle darkfield illuminator 28 and the high angle darkfield illuminator 30, also referred to as darkfield illumination sources, emit or provide darkfield illumination. Darkfield illumination sources are carefully aligned so that a minimal amount of directly transmitted (unscattered) light enters their respective image capture devices; a captured dark field image thus typically contains only light that has been scattered by the specimen or object. Dark field images typically exhibit enhanced contrast relative to bright field images. Bright field illumination and dark field illumination are examples of contrast illumination.
Each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 is, for example, a flash lamp or a white light emitting diode. Preferably, each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 provides darkfield illumination having optical characteristics similar to those of the brightfield illumination. More specifically, each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 provides broadband illumination comprising wavelengths between 300nm and 1000nm, inclusive. Alternatively, the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 provide darkfield illumination of different wavelengths or other optical characteristics.
The low angle darkfield illuminator 28 is set at a low angle, relative to the high angle darkfield illuminator 30, to the horizontal plane of the semiconductor wafer 12 positioned on the wafer table 16 (or the horizontal plane of the wafer table 16). The low angle darkfield illuminator 28 is preferably positioned at an angle of between 3 and 30 degrees to the horizontal plane of the wafer table 16 on which the semiconductor wafer 12 is positioned. The high angle darkfield illuminator 30 is preferably positioned at an angle of between 30 and 83 degrees to that horizontal plane. These angles are preferably adjustable by adjusting the position of each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30.
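The angle ranges above can be expressed as a small validation check. The sketch below is purely illustrative — the patent describes no control software, and the function and constant names are hypothetical; only the two angle ranges (3–30 and 30–83 degrees from the wafer table plane) come from the description.

```python
# Hypothetical sketch: checking darkfield illuminator mounting angles
# against the ranges described in the text. Names are illustrative.

LOW_ANGLE_RANGE = (3.0, 30.0)    # low angle darkfield, degrees from table plane
HIGH_ANGLE_RANGE = (30.0, 83.0)  # high angle darkfield, degrees from table plane

def angle_ok(angle_deg, angle_range):
    """Return True if the mounting angle lies within the allowed range."""
    lo, hi = angle_range
    return lo <= angle_deg <= hi

# Example: a low angle illuminator mounted at 15 degrees is acceptable;
# 45 degrees would instead belong to the high angle illuminator's range.
assert angle_ok(15.0, LOW_ANGLE_RANGE)
assert angle_ok(45.0, HIGH_ANGLE_RANGE)
assert not angle_ok(45.0, LOW_ANGLE_RANGE)
```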
The low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 preferably comprise a second and a third optical fiber (not shown), respectively, through which the darkfield illumination is transmitted. The second and third optical fibers act as waveguides that direct the transmission of the darkfield illumination along the optical path of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30, respectively. More particularly, the second optical fiber helps direct the darkfield illumination emitted from the low angle darkfield illuminator 28, and the third optical fiber helps direct the darkfield illumination emitted from the high angle darkfield illuminator 30. The illumination provided by each of the brightfield illuminator 26, the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 is controllable and may be supplied continuously or intermittently.
The full wavelength spectrum of the bright field illumination and the dark field illumination preferably enhances the accuracy of inspection and defect detection of the semiconductor wafer 12. Broadband illumination facilitates identifying the type of a semiconductor wafer defect through variations in surface reflection. In addition, the similar broadband wavelengths of the bright field illumination and the dark field illumination enable inspection of the semiconductor wafer 12 to be performed without being constrained by the reflective properties of the semiconductor wafer 12. This means that defects on the semiconductor wafer 12 are not obscured by the differing sensitivity, reflectance or polarization response of the semiconductor wafer 12 at different illumination wavelengths.
Preferably, the intensities of the brightfield illumination and the darkfield illumination provided by the brightfield illuminator 26 and the darkfield illuminators 28, 30, respectively, can be selected and varied as desired according to the characteristics of the semiconductor wafer 12, such as the material of the semiconductor wafer 12. In addition, the intensity of each of the bright field and dark field illumination may be selected and varied to enhance the quality of the images captured of the semiconductor wafer 12, thereby enhancing inspection of the semiconductor wafer 12.
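The per-material selection of illumination intensities described above could be realized as a simple lookup, sketched below. All materials and intensity values are placeholders invented for illustration; the patent specifies no such table.

```python
# Hypothetical illustration of per-wafer-material illumination settings.
# Materials and all relative-intensity values (0.0-1.0) are placeholders.
INTENSITY_PROFILES = {
    # material: (brightfield, low angle darkfield, high angle darkfield)
    "silicon": (0.60, 0.80, 0.70),
    "gallium_arsenide": (0.45, 0.90, 0.85),
}

def select_intensities(material, default=(0.5, 0.5, 0.5)):
    """Return the illumination intensity triple for a wafer material."""
    return INTENSITY_PROFILES.get(material, default)

bf, dla, dha = select_intensities("silicon")
```

A default triple is returned for unlisted materials so that inspection can still proceed with neutral settings.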
As shown in fig. 7-9, the system 10 further includes a first image capture device 32 (i.e., a first camera) and a second image capture device 34 (i.e., a second camera). Each of the first image capture device 32 and the second image capture device 34 can receive the brightfield illumination provided by the brightfield illuminator 26 and the darkfield illumination provided by the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30. The bright field and dark field illumination received by the first image capture device 32 is preferably focused onto a first image capture plane for capture of the corresponding image. The bright field and dark field illumination received by the second image capture device 34 is preferably focused onto a second image capture plane for capture of the corresponding image.
The first image capture device 32 and the second image capture device 34 are preferably monochrome or color cameras. When equipped with single chip or three chip color sensors, they can capture color images of the semiconductor wafer 12, enhancing at least one of the accuracy and the speed of defect detection. For example, the ability to capture color images of the semiconductor wafer 12 helps reduce false defect detection on the semiconductor wafer 12 and correspondingly reduces false rejects.
The optical inspection head 14 further includes a first tube lens 36 for the first image capture device 32 and a second tube lens 38 for the second image capture device 34. The first tube lens 36 and the second tube lens 38 preferably have common optical characteristics and functions; they are labeled first and second for clarity only. The optical inspection head 14 further comprises a plurality of objective lenses 40, for example four objective lenses 40. The objective lenses 40 are commonly mounted on a rotatable fixture 42 (shown in fig. 3) that is rotated to position a selected objective lens 40 above the inspection position (not shown), i.e. the location of the semiconductor wafer 12 under inspection. The objective lenses 40 may be referred to collectively as an objective lens assembly.
Each objective lens 40 provides a different magnification, and the objective lenses 40 are parfocal (i.e. they share a common focal plane). Each objective lens 40 preferably has a different predetermined magnification factor, for example 5x, 10x, 20x and 50x. Preferably, each objective lens 40 is infinity-corrected. However, it will be appreciated by those skilled in the art that each objective lens 40 may be altered or redesigned to achieve different magnifications and performance.
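The magnification factors above determine how much of the wafer each camera pixel covers: a pixel of pitch p imaged through magnification M samples p / M at the object plane. The sketch below works this out for the four stated magnifications; the camera pixel pitch is an assumed figure, not a value from the patent.

```python
# Sketch: object-plane sampling for each objective magnification.
# The 5 um camera pixel pitch is an assumption for illustration only.

PIXEL_SIZE_UM = 5.0          # assumed camera pixel pitch, micrometres
MAGNIFICATIONS = (5, 10, 20, 50)

def object_pixel_um(magnification, pixel_um=PIXEL_SIZE_UM):
    """Size of the wafer area covered by one camera pixel, in micrometres."""
    return pixel_um / magnification

sampling = {m: object_pixel_um(m) for m in MAGNIFICATIONS}
# e.g. under these assumptions the 50x objective samples 0.1 um per pixel,
# while the 5x objective samples 1.0 um per pixel.
```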
Each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 preferably includes a focusing means or mechanism for directing or focusing the darkfield illumination toward the semiconductor wafer 12 positioned at the inspection position. The angle between the low angle darkfield illuminator 28 and the horizontal plane of the semiconductor wafer 12, and the angle between the high angle darkfield illuminator 30 and that plane, are preferably set and adjustable to enhance the accuracy of defect detection. Preferably, each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 has a fixed spatial position with reference to the inspection position. Alternatively, the position of each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 is variable with reference to the inspection position during normal operation of the system 10.
As described above, both the bright field illumination and the dark field illumination are focused at the inspection position to illuminate the semiconductor wafer 12, or a portion thereof.
As shown in fig. 6, the system 10 includes a wafer stack 20, or film frame cassette loader. The wafer stack 20 preferably includes a plurality of slots for holding a plurality of semiconductor wafers. Each semiconductor wafer is loaded or transferred sequentially onto the wafer table 16 (shown in fig. 4) by the automated wafer handler 18 (shown in fig. 5). Preferably, a vacuum is drawn or formed through the wafer table 16 to secure the semiconductor wafer 12 in position on the wafer table 16. The wafer table 16 preferably includes a predetermined plurality of apertures or gaps through which the vacuum is applied, so as to hold the film frame cassette and frame (neither shown) disposed on the wafer table 16 securely and flat. The wafer table 16 is also preferably capable of handling wafers having diameters between 6 and 12 inches, inclusive.
The wafer table 16 is coupled to the XY displacement table 22 (shown in fig. 1 and 2), which moves the wafer table 16 in the X and Y directions. Displacement of the wafer table 16 correspondingly displaces the semiconductor wafer 12 placed thereon; more specifically, displacement of the wafer table 16, and thus of the semiconductor wafer 12 placed thereon, is controlled so as to position the semiconductor wafer 12 at the inspection position. The XY displacement table 22 is preferably an air gap linear positioner. The XY displacement table 22, or air gap linear positioner, facilitates high precision displacement of the wafer table 16 in the X and Y directions while minimizing the transfer of vibration from other components of the system 10 to the wafer table 16, ensuring smooth and accurate positioning of the semiconductor wafer 12, or components thereon, at the inspection position. The XY displacement table 22 and the wafer table 16 are mounted together on the vibration isolators 24 (shown in fig. 2), which absorb shock and vibration and help ensure the flatness of the assembly and of other modules mounted thereon. It will be appreciated by those skilled in the art that alternative mechanisms may be used to couple to, or control the displacement of, the wafer table 16 and to facilitate high precision positioning of the semiconductor wafer 12 at the inspection position.
Inspection of the semiconductor wafer 12, to detect defects that may be present thereon, is performed while the semiconductor wafer 12 is in motion; that is, the capture of images of the semiconductor wafer 12, such as bright field and dark field images, preferably occurs while the semiconductor wafer 12 is being translated past the inspection position. Alternatively, the semiconductor wafer 12 may be brought to a stop to obtain high resolution images, which the user may achieve through software control of the wafer table 16.
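Capturing images on the fly, as described above, requires that the exposure (or flash) duration be short enough that the moving wafer blurs by less than about one object-plane pixel. The sketch below works out this constraint; the stage speed and pixel size are assumed figures for illustration, not values from the patent.

```python
# Sketch: maximum exposure (or flash) duration for on-the-fly capture so
# that motion blur stays below one object-plane pixel. The stage speed
# and object-plane pixel size used below are assumptions.

def max_exposure_s(stage_speed_mm_s, object_pixel_um):
    """Longest exposure giving <= 1 pixel of blur at the given stage speed."""
    blur_budget_mm = object_pixel_um / 1000.0   # one pixel, in millimetres
    return blur_budget_mm / stage_speed_mm_s

# e.g. at an assumed 100 mm/s stage speed with a 1 um object-plane pixel,
# the exposure must stay at or below 10 microseconds.
t = max_exposure_s(100.0, 1.0)
```

This is one reason flash lamps, with their very short pulse durations, suit on-the-fly capture.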
As previously described, the system 10 also includes the first tube lens 36 and the second tube lens 38. Preferably, the first tube lens 36 is disposed between the objective lenses 40 and the first image capture device 32, and illumination passes through the first tube lens 36 before entering the first image capture device 32. In addition, the second tube lens 38 is disposed between the objective lenses 40 and the second image capture device 34, and illumination passes through the second tube lens 38 before being reflected by a mirror or prism into the second image capture device 34.
Each objective lens 40 is infinity-corrected. Thus, after passing through an objective lens 40, the illumination or light is collimated; that is, the illumination is collimated while traveling between the objective lens 40 and each of the first tube lens 36 and the second tube lens 38. This collimation of the illumination between the objective lens 40 and each of the first tube lens 36 and the second tube lens 38 increases the flexibility with which the first image capture device 32 and the second image capture device 34 can each be positioned. The use of the tube lenses 36, 38 eliminates the need to refocus each of the first image capture device 32 and the second image capture device 34 when a different objective lens 40 is used (e.g. when a different magnification factor is desired). Furthermore, the collimation of the illumination allows additional optical components and accessories to be introduced and positioned in the system 10 in the field, in particular between the objective lens 40 and each of the first tube lens 36 and the second tube lens 38, without reconfiguration of the system 10. In addition, this arrangement provides a greater working distance between the objective lens 40 and the semiconductor wafer 12 than prior art devices, and the longer working distance between the objective lens 40 and the wafer enables effective use of the dark field illumination.
It should be understood by those skilled in the art that the system 10 of the present invention allows flexibility in the design and in situ reconfiguration of its components. In particular, the system 10 facilitates the introduction of optical elements or assemblies into, and their removal from, the system 10.
The first tube lens 36 focuses the collimated illumination onto the first image capture plane. Likewise, the second tube lens 38 focuses the collimated illumination onto the second image capture plane. Although tube lenses are used in the system 10 of the present invention, it will be appreciated by those skilled in the art that other optical or mechanical means may be used for focusing the collimated illumination, and more particularly for focusing the bright field illumination and the dark field illumination onto the first image capture plane and the second image capture plane, respectively.
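In an infinity-corrected arrangement like the one above, the overall magnification is set jointly by the objective and the tube lens: M = f_tube / f_objective. The sketch below illustrates this standard relation; the focal lengths are assumed values, not figures from the patent.

```python
# Sketch of the standard infinity-corrected magnification relation:
# magnification = tube lens focal length / objective focal length.
# The focal lengths below are assumed for illustration.

def system_magnification(f_tube_mm, f_objective_mm):
    """Overall magnification of an infinity-corrected objective/tube-lens pair."""
    return f_tube_mm / f_objective_mm

# e.g. an assumed 200 mm tube lens paired with a 20 mm objective gives 10x.
m = system_magnification(200.0, 20.0)
```

This relation is why swapping objectives on the rotatable fixture changes magnification without any refocusing of the cameras.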
The first image capture device 32 and the second image capture device 34 are preferably both disposed along adjacent parallel axes. This spatial arrangement of the first image capture device 32 and the second image capture device 34 reduces the amount of space they occupy, resulting in a smaller overall footprint (i.e. greater space efficiency) of the system 10.
In addition, the system 10 further includes a plurality of beam splitters and mirrors or reflective surfaces. The beam splitters and mirrors or reflective surfaces are preferably positioned to direct the brightfield illumination from the brightfield illuminator 26 and the darkfield illumination from each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30.
In addition, the system 10 further includes a central processing unit (CPU) having a memory or database (also referred to as a post-processor) (not shown). The CPU is preferably electrically connected or coupled to other components of the system 10, such as the first image capture device 32 and the second image capture device 34. Images captured by the first image capture device 32 and the second image capture device 34 are preferably converted into image signals and transmitted to the CPU.
The CPU is programmed to process information, more specifically the images transmitted to it as image signals, so as to detect defects on the semiconductor wafer 12. Preferably, inspection of the semiconductor wafer 12 is performed automatically by the system 10 under the control of the CPU. Alternatively, at least one manual input is accepted to facilitate defect detection on the semiconductor wafer 12.
The CPU is preferably programmable to store information in, and transmit information to, the database. In addition, the CPU may be programmed to classify detected defects. The CPU is also preferably programmed to store its processing results, more particularly the processed images and detected defects, in the database. The capture of images, the processing of captured images, and the detection of defects on the semiconductor wafer 12 are described in detail below.
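The abstract describes comparing captured images with reference images to detect defects. A minimal sketch of that idea, assuming pre-aligned greyscale images represented as plain 2D lists, is shown below; real inspection software would involve alignment, calibration, and far more elaborate processing, and the function name and threshold are hypothetical.

```python
# Minimal sketch: compare a captured image pixel-by-pixel with a
# reference ("golden") image and flag pixels whose grey-level difference
# exceeds a threshold as defect candidates. Images are plain 2D lists;
# alignment and calibration are assumed to have been done already.

def find_defects(captured, reference, threshold=30):
    """Return (row, col) positions where |captured - reference| > threshold."""
    defects = []
    for r, (cap_row, ref_row) in enumerate(zip(captured, reference)):
        for c, (cap, ref) in enumerate(zip(cap_row, ref_row)):
            if abs(cap - ref) > threshold:
                defects.append((r, c))
    return defects

reference = [[100, 100], [100, 100]]
captured  = [[100, 180], [100, 100]]   # one anomalously bright pixel
assert find_defects(captured, reference) == [(0, 1)]
```

The flagged positions could then be grouped into regions and classified, corresponding to the defect classification the CPU performs.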
It will be appreciated by those skilled in the art from the foregoing description that the brightfield illuminator 26 emits or provides brightfield illumination, and that each of the low angle darkfield illuminator 28 and the high angle darkfield illuminator 30 emits or provides darkfield illumination (hereinafter referred to as darkfield low angle or DLA illumination and darkfield high angle or DHA illumination, respectively), each following a different ray path or optical path.
A flow chart of a preferred first ray path 100 with brightfield illumination is shown in fig. 10.
In step 102 of the first ray path 100, bright field illumination or light is provided by the bright field illuminator 26. As previously mentioned, the brightfield illumination is preferably emitted from the first optical fiber of the brightfield illuminator 26 and is preferably directed through a condenser 44 upon being emitted from the brightfield illuminator 26. The condenser 44 concentrates the bright field illumination.
In step 104, a first reflective surface or first mirror reflects the bright field illumination and directs it toward a first beam splitter 48.
In step 106, the first beam splitter 48 reflects at least a portion of the bright field illumination. Preferably, the first beam splitter 48 has a 30:70 reflection/transmission (R/T) ratio. However, it will be appreciated by those skilled in the art that the R/T ratio of the first beam splitter 48 can be adjusted as needed to control the intensity or amount of bright field illumination reflected or transmitted.
The bright field illumination reflected by the first beam splitter 48 is directed toward the inspection position; more specifically, toward the objective lens 40 positioned directly above the inspection position. In step 108, the bright field illumination is focused by the objective lens 40 at the inspection position, or onto the semiconductor wafer 12 disposed at the inspection position.
The bright field illumination provided by the bright field illuminator 26 and focused at the inspection position illuminates the semiconductor wafer 12, more particularly the portion of the semiconductor wafer 12 positioned at the inspection position. In step 110, the bright field illumination is reflected by the semiconductor wafer 12 positioned at the inspection position.
In step 112, the bright field illumination reflected by the semiconductor wafer 12 passes through the objective lens 40. As previously described, the objective lens 40 is infinity-corrected; thus, the bright field illumination is collimated as it passes through the objective lens 40. The degree of magnification of the bright field image depends on the magnification factor of the objective lens 40.
The bright field illumination passes through the objective lens 40 and is directed toward the first beam splitter 48. In step 114, a portion of the bright field illumination incident on the first beam splitter 48 is transmitted through it; the proportion transmitted depends on the R/T ratio of the first beam splitter 48. The bright field illumination transmitted through the first beam splitter 48 is directed to a second beam splitter 50.
The second beam splitter 50 of the system 10 is preferably a cube beam splitter having a predetermined R/T ratio, preferably 50:50; the R/T ratio can be varied as desired. A cube beam splitter is preferred because it splits the received illumination into two optical paths, and those skilled in the art will appreciate that the configuration and shape of the cube beam splitter 50 provide good performance and alignment for this purpose. The proportion of illumination reflected or transmitted by the second beam splitter 50 depends on its R/T ratio. In step 116, the bright field illumination is incident on the second beam splitter 50 and is transmitted or reflected therefrom.
The bright field illumination transmitted through the second beam splitter 50 is directed to the first image capture device 32. In step 118, the brightfield illumination passes through the first tube lens 36 and then, in step 120, enters the first image capture device 32. The first tube lens 36 focuses the collimated brightfield illumination onto the first image capture plane of the first image capture device 32, enabling the first image capture device 32 to capture a brightfield image.
The brightfield image formed at the first image capture plane is preferably converted into an image signal that is then transmitted, or downloaded, to the CPU; this is also referred to as data transfer. At least one of processing of the brightfield image by the CPU and storage of the brightfield image in the CPU is then performed.
The bright field illumination reflected by the second beam splitter 50 is directed toward the second image capture device 34; in step 122 it passes through the second tube lens 38 and in step 124 it enters the second image capture device 34. The second tube lens 38 focuses the collimated brightfield illumination onto the second image capture plane, enabling the second image capture device 34 to capture a brightfield image.
The brightfield image formed at the second image capture plane is preferably converted into an image signal that is then transmitted, or downloaded, to the CPU; this is also referred to as data transfer. At least one of processing of the brightfield image by the CPU and storage of the brightfield image in the CPU is then performed.
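The fraction of brightfield light reaching each camera along the ray path just described follows directly from the two R/T ratios: 30% is reflected toward the wafer at the first beam splitter, 70% of the returning light is transmitted through it, and the 50:50 second beam splitter then sends half to each camera. The worked calculation below ignores wafer reflectance and lens losses, and is illustrative only.

```python
# Illustrative throughput calculation for the brightfield path described
# above. Wafer reflectance and lens/mirror losses are ignored.

R1, T1 = 0.30, 0.70   # first beam splitter (preferred 30:70 R/T ratio)
R2, T2 = 0.50, 0.50   # second, cube beam splitter (preferred 50:50 R/T ratio)

# Reflected toward the wafer (step 106), transmitted on the return pass
# (step 114), then split between the two cameras (step 116).
to_first_camera = R1 * T1 * T2    # transmitted by the second beam splitter
to_second_camera = R1 * T1 * R2   # reflected by the second beam splitter

# Under these assumptions each camera receives about 10.5% of the
# original brightfield illumination.
```

Changing either R/T ratio, as the description permits, trades illumination efficiency between the two cameras.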
A flow chart of a preferred second ray path 200 of the darkfield high angle (DHA) illumination is shown in fig. 11.
In step 202 of the second ray path 200, DHA illumination is provided by the high angle darkfield illuminator 30. As previously mentioned, the third optical fiber preferably helps direct the DHA illumination provided by the high angle darkfield illuminator 30. Preferably, the DHA illumination is directed at the inspection position without passing through optical elements or accessories such as the objective lens 40.
In step 204, the DHA illumination directed at the inspection position is reflected by the semiconductor wafer 12, or a portion thereof, disposed at the inspection position. In step 206, the DHA illumination reflected from the semiconductor wafer 12 passes through the objective lens 40. Because the objective lens 40 is infinity-corrected, the DHA illumination is collimated as it passes through the objective lens 40.
The DHA illumination passes through the objective lens 40 and is directed toward the first beam splitter 48. In step 208, a portion of the DHA illumination incident on the first beam splitter 48 is transmitted through it; the proportion transmitted depends on the R/T ratio of the first beam splitter 48.
The DHA illumination transmitted through the first beam splitter 48 is directed toward the second beam splitter 50. In step 210, the DHA illumination is incident on the second beam splitter 50; whether it is transmitted or reflected depends on the R/T ratio of the second beam splitter 50.
The DHA illumination transmitted through the second beam splitter 50 passes through the first tube lens 36 in step 212 and then enters the first image capture device 32 in step 214. The first tube lens 36 focuses the collimated DHA illumination onto the first image capture plane of the first image capture device 32, facilitating the capture of a dark field image, more specifically a darkfield high angle (DHA) image, by the first image capture device 32.
In addition, a portion of the DHA illumination is reflected by the second beam splitter 50. The DHA illumination reflected by the second beam splitter 50 passes through the second tube lens 38 in step 216 and then enters the second image capture device 34 in step 218. The second tube lens 38 focuses the collimated DHA illumination onto the second image capture plane of the second image capture device 34, facilitating the capture of a dark field image, more specifically a darkfield high angle (DHA) image, by the second image capture device 34.
A flow chart of a preferred third ray path 250 of the darkfield low angle (DLA) illumination is shown in fig. 12.
In step 252 of the third ray path 250, DLA illumination is provided by the low angle darkfield illuminator 28. The second optical fiber helps guide the DLA illumination provided by the low angle darkfield illuminator 28. Preferably, the DLA illumination is directed at the inspection position without passing through optical elements or accessories such as the objective lens 40.
In step 254, the DLA illumination directed at the inspection position is reflected by the semiconductor wafer 12, or a portion thereof, disposed at the inspection position. In step 256, the DLA illumination reflected by the semiconductor wafer 12 passes through the objective lens 40. Because the objective lens 40 is infinity-corrected, the DLA illumination is collimated as it passes through the objective lens 40.
The DLA illumination passes through the objective lens 40 and is directed toward the first beam splitter 48. In step 258, a portion of the DLA illumination incident on the first beam splitter 48 is transmitted through it; the proportion transmitted depends on the R/T ratio of the first beam splitter 48.
The DLA illumination transmitted through the first beam splitter 48 is directed toward the second beam splitter 50. In step 260, the DLA illumination is incident on the second beam splitter 50; whether it is transmitted or reflected depends on the R/T ratio of the second beam splitter 50.
The DLA illumination transmitted through the second beam splitter 50 passes through the first tube lens 36 in step 262 and then enters the first image capture device 32 in step 264. The first tube lens 36 focuses the collimated DLA illumination onto the first image capture plane of the first image capture device 32, facilitating the capture of a dark field image, more specifically a darkfield low angle (DLA) image, by the first image capture device 32.
In addition, a portion of the DLA illumination is reflected by the second beam splitter 50. The DLA illumination reflected by the second beam splitter 50 passes through the second tube lens 38 in step 266 and then enters the second image capture device 34 in step 268. The second tube lens 38 focuses the collimated DLA illumination onto the second image capture plane of the second image capture device 34, facilitating the capture of a dark field image, more specifically a darkfield low angle (DLA) image, by the second image capture device 34.
It will be appreciated by those skilled in the art from the foregoing description that the DHA illumination and DLA illumination preferably follow a similar ray path after reflection off the semiconductor wafer 12. However, the second ray path 200 for DHA illumination and the third ray path 250 for DLA illumination may be individually modified using techniques known in the art. In addition, the angles at which the DHA illumination and the DLA illumination are projected onto the semiconductor wafer 12 disposed at the inspection position may be adjusted as needed to enhance the accuracy of defect inspection. For example, the angles at which the DHA illumination and the DLA illumination are projected onto the semiconductor wafer 12 disposed at the inspection position may be adjusted according to the type of semiconductor wafer 12 disposed at the inspection position or the needs of a user of the system 10.
The DHA images and the DLA images captured by the first image capture device 32 and the second image capture device 34 are preferably converted into image signals, which are then transmitted or downloaded to the CPU. The transmission of the image signals to the CPU is also referred to as data transfer. At least one of processing and storage of the DHA images and the DLA images is then performed by the CPU.
As described above, the first image capture device 32 and the second image capture device 34 have respective predetermined spatial positions. The use of the objective lens 40 in conjunction with the first and second tube lenses 36, 38 facilitates the flexible spatial positioning of the first and second image capture devices 32, 34. It will be further appreciated by those skilled in the art that other optical elements or accessories, such as mirrors, may also be used to direct the brightfield illumination, DHA illumination, and DLA illumination, and may further facilitate the spatial positioning of the first and second image capture devices 32, 34. More preferably, the spatial positions of the first image capture device 32 and the second image capture device 34 are set with reference to the inspection position. The spatial positions of the first image capture device 32 and the second image capture device 34 are preferably set to improve at least one of the accuracy and the efficiency with which the system inspects the wafer. For example, fixing the spatial positions of the first image capture device 32 and the second image capture device 34 relative to the inspection position preferably reduces calibration losses and calibration feedback losses associated with moving image capture devices or cameras.
The optical inspection head 14 of the system 10 preferably further includes a third illuminator (hereinafter referred to as the thin line illuminator 52). The thin line illuminator 52 may also be referred to as a thin line illumination emitter. The thin line illuminator 52 emits or provides thin line illumination. The thin line illuminator 52 is preferably a laser source providing thin line laser illumination. Alternatively, the thin line illuminator 52 may be a broadband illuminator providing broadband thin line illumination. The thin line illumination is preferably directed to the inspection position, more particularly onto the semiconductor wafer 12 disposed at the inspection position, at a predetermined angle that may be varied as desired. A mirror arrangement 54 is preferably coupled to, or disposed opposite, the thin line illuminator 52 to direct the thin line illumination toward the inspection position.
The optical inspection head 14 of the system 10 preferably further includes a third image capture device (hereinafter referred to as a three-dimensional (3D) image camera 56). Preferably, the 3D image camera 56 receives the thin line illumination reflected by the semiconductor wafer 12. Preferably, the thin line illumination entering the 3D image camera 56 is focused onto a 3D image capture plane (not shown) for capturing 3D images of the semiconductor wafer 12. The 3D optics, comprising the thin line illuminator 52 and the 3D image camera 56, are shown in fig. 13.
The optical inspection head 14 further includes an objective lens for the 3D image camera (hereinafter referred to as the 3D image objective lens 58). The thin line illumination reflected by the semiconductor wafer 12 enters the 3D image camera 56 through the 3D image objective lens 58. Preferably, the 3D image objective lens 58 is infinity-corrected; accordingly, the thin line illumination passing through the 3D image objective lens 58 is collimated by it. The optical inspection head 14 further includes a tube lens 60 for use with the 3D image objective lens 58 and the 3D image camera 56. The tube lens 60 focuses the collimated thin line illumination onto the 3D image capture plane. The use of the tube lens 60 and the 3D image objective lens 58 in conjunction with the 3D image camera 56 facilitates flexible positioning and reconfiguration of the 3D image camera 56. In addition, it facilitates the introduction of other optical elements or accessories between the 3D image objective lens 58 and the tube lens 60.
The thin line illuminator 52 and the 3D image camera 56 are preferably used together to facilitate 3D image scanning and inspection of the semiconductor wafer 12. Preferably, the thin line illuminator 52 and the 3D image camera 56 are both coupled to the CPU, which facilitates their coordinated or synchronized operation. More preferably, the system 10 performs automated 3D image scanning and inspection of the semiconductor wafer 12, preferably under the control of the CPU.
In addition, the optical inspection head 14 includes a review image capture device 62, for example a color camera. The review image capture device 62 preferably captures color images; alternatively, it may capture monochrome images. The review image capture device 62 preferably captures review images of the semiconductor wafer 12 for classifying and reviewing defects detected on the semiconductor wafer 12.
The optical inspection head 14 further includes a review brightfield illuminator 62 and a review darkfield illuminator 64 for providing brightfield illumination and darkfield illumination, respectively. The review image capture device 60 receives the brightfield illumination and the darkfield illumination, provided by the review brightfield illuminator 62 and the review darkfield illuminator 64 respectively and reflected by the semiconductor wafer 12, for capturing review images of the semiconductor wafer 12. In addition, the review image capture device 60 may capture illumination provided by an optional illuminator, such as the one described above, for capturing review images of the semiconductor wafer 12. The review image capture device 60 preferably captures high resolution images of the semiconductor wafer 12.
Fig. 14 depicts the review brightfield illuminator 62, the review darkfield illuminator 64, the review image capture device 60 and the illumination ray paths therebetween, and fig. 15 depicts a preferred fourth ray path 300 followed by the brightfield illumination provided by the review brightfield illuminator 62.
In step 302 of the fourth ray path 300, brightfield illumination is provided by the review brightfield illuminator 62 and directed toward a first reflective surface 66. In step 304, the brightfield illumination is reflected by the first reflective surface 66 and directed toward a beam splitter 68. In a subsequent step 306, the brightfield illumination projected onto the beam splitter 68 is reflected therefrom and directed toward the inspection position. The fraction of the brightfield illumination reflected by the beam splitter 68 depends on the R/T (reflectance-to-transmittance) ratio of the beam splitter 68.
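The effect of the R/T ratio can be illustrated numerically. The following is a minimal sketch assuming an idealized lossless beam splitter; the function name and the 30R/70T example are illustrative, as the description only states that the split depends on the R/T ratio.

```python
def split_beam(incident, r, t):
    """Split an incident intensity at an idealized lossless beam
    splitter with reflectance-to-transmittance ratio r:t."""
    total = r + t
    reflected = incident * r / total
    transmitted = incident * t / total
    return reflected, transmitted

# A 30R/70T splitter reflects 30% and transmits 70% of the light.
reflected, transmitted = split_beam(1.0, 30, 70)
```

The same function applies to the transmitted portion of the return path described below, since reflectance and transmittance at the splitter sum to the incident intensity in this idealized model.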
In step 308, the brightfield illumination is reflected by the semiconductor wafer 12, or a portion thereof, disposed at the inspection position. In step 310, the reflected brightfield illumination passes through a review objective 70, which is preferably infinity-corrected; accordingly, the brightfield illumination passing through the review objective 70 is collimated by it.
In step 312, a portion of the brightfield illumination projected onto the beam splitter 68 is transmitted therethrough. The fraction of the brightfield illumination transmitted through the beam splitter 68 depends on the R/T ratio of the beam splitter 68. The brightfield illumination, as depicted in step 314, passes through a review tube lens 72 and then enters the review image capture device 60, as depicted in step 316. The review tube lens 72 focuses the collimated brightfield illumination onto an image capture plane of the review image capture device 60. The brightfield illumination focused on the image capture plane facilitates the capture of a review brightfield image by the review image capture device 60 in step 318.
The collimation of the brightfield illumination between the review objective 70 and the review tube lens 72 preferably facilitates the introduction of optical elements and accessories therebetween. In addition, it preferably facilitates flexible positioning and reconfiguration of the review image capture device 60 as required.
A flow chart of a preferred fifth ray path 350 followed by the darkfield illumination provided by the review darkfield illuminator 64 is shown in fig. 16.
In step 352 of the fifth ray path 350, darkfield illumination is provided by the review darkfield illuminator 64. The darkfield illumination provided by the review darkfield illuminator 64 is preferably focused and directed toward the inspection position at a predetermined angle, preferably a high angle, relative to the horizontal plane of the semiconductor wafer 12. This angle can be adjusted as needed using techniques well known to those skilled in the art.
In step 354, the darkfield illumination is reflected by the semiconductor wafer 12, or a portion thereof, disposed at the inspection position. The reflected darkfield illumination then passes through the review objective 70 in step 356 and is collimated by the review objective 70.
In step 358, a portion of the collimated darkfield illumination projected onto the beam splitter 68 is transmitted therethrough. The fraction of the darkfield illumination transmitted through the beam splitter 68 depends on the R/T ratio of the beam splitter 68. The darkfield illumination passes through the review tube lens 72 in step 360 and then enters the review image capture device 60, as described in step 362. The review tube lens 72 focuses the collimated darkfield illumination onto the image capture plane of the review image capture device 60. The darkfield illumination focused on the image capture plane facilitates the capture of a review darkfield image by the review image capture device 60 in step 364. The collimation of each of the brightfield illumination and the darkfield illumination between the review objective 70 and the review tube lens 72 enhances the ease of design and reconfiguration of the system 10 and the introduction of optical components and accessories therebetween. In addition, it preferably facilitates flexible positioning and reconfiguration of the review image capture device 60 as required, thereby facilitating the capture of brightfield and darkfield images while the semiconductor wafer 12 is in motion.
The captured review brightfield images and review darkfield images are preferably converted into image signals and transmitted by the review image capture device 60 to the programmable controller for processing and storage in a database or storage memory.
The review image capture device 60 may have a fixed spatial position relative to the inspection position. The fixed spatial position of the review image capture device 60 preferably reduces calibration losses and calibration feedback losses associated with moving image capture devices or cameras, thereby enhancing the quality of the captured review brightfield images and review darkfield images.
The system 10 further includes vibration isolators 24, collectively referred to as a stabilization mechanism. The system 10 is preferably mounted on the vibration isolators 24, or stabilization mechanism, during normal operation. Preferably, the system 10 includes four vibration isolators 24, each positioned at a different corner of the system 10. The vibration isolators 24 help to support and stabilize the system 10. Each vibration isolator 24 is preferably a compressible structure or canister that absorbs ground vibrations, thereby acting as a cushion that prevents ground vibrations from being transmitted to the system 10. By preventing unwanted vibrations or physical movement of the system 10, the vibration isolators 24 help to enhance the quality of the images captured by each of the first image capture device 32, the second image capture device 34, the 3D image camera 56 and the review image capture device 60, thereby improving the quality of inspection of the semiconductor wafer 12.
In accordance with an embodiment of the present invention, a preferred method 400 for inspecting a semiconductor wafer 12 is provided. Fig. 17 depicts a method flow diagram for implementing the method 400. The method 400 for inspection of a semiconductor wafer 12 performs at least one of detection, classification, and review of defects on the semiconductor wafer 12.
The method 400 for inspecting the semiconductor wafer 12 uses reference images (also referred to as golden references), against which captured images of the semiconductor wafer 12 are compared, for at least one of detection, classification, and review of defects on the semiconductor wafer 12. For clarity, before describing the method 400, a reference image generation process 900 is provided. A flowchart of the reference image generation process 900 is depicted in fig. 18.
Implementing the reference image generation process 900
In step 902 of the reference image generation process 900, a predetermined number of reference regions on the semiconductor wafer 12 is loaded. The list of reference regions is preferably generated by computer software; alternatively, it may be defined manually. The list may be stored in a database of the CPU or, alternatively, in an external database or memory space.
Each preset reference region is located on the semiconductor wafer 12, whose quality is unknown. The use of multiple reference regions helps to compensate for possible surface variations at different locations of the semiconductor wafer 12, or variations between multiple wafers. Such surface variations include, but are not limited to, varying degrees of flatness and varying illumination intensities. It will be understood by those skilled in the art that the predetermined number of reference regions preferably represents the entire surface area of the semiconductor wafer 12. Alternatively, the predetermined number of reference regions may represent multiple preset positions on multiple dies.
In step 904, a first reference region is selected. In the subsequent step 906, a predetermined number ("n") of images is captured at the first capture position of the selected reference region. More specifically, n images are captured at each preset position of the selected reference region. The number and locations of the preset positions of the selected reference region can be varied as needed, conveniently by software programming or by manual input.
Capturing the n images may be performed by at least one of the first image capture device 32, the second image capture device 34, and the review image capture device 62, as desired. In addition, different image capture devices may be used to obtain the n images. The illumination used to obtain the n images can also be varied as desired, for example one of, or a combination of, brightfield illumination, DHA illumination and DLA illumination, and the color and intensity of the illumination used to capture the n images can be selected and varied as needed.
Capturing multiple images at each position preferably ensures that variations in illumination, optical setup and imaging conditions during image capture are accounted for when generating the reference image. This reduces unwanted effects on defect detection and classification caused by variations between lighting conditions. In addition, a number of images of the selected reference region may be captured under each specific lighting condition. Preferably, capturing multiple images under each specific lighting condition compensates for, and normalizes, illumination variations such as those caused by flash lamps.
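The compensation described above can be sketched as a pixel-wise average over the images captured under one lighting condition; a simplified illustration assuming images are nested lists of gray levels (the data layout is an assumption, not from the description):

```python
def average_images(stack):
    """Pixel-wise average of n aligned grayscale images, smoothing
    out frame-to-frame illumination variation (e.g. flash lamps)."""
    n = len(stack)
    height, width = len(stack[0]), len(stack[0][0])
    return [[sum(img[y][x] for img in stack) / n for x in range(width)]
            for y in range(height)]
```

Averaging over n frames reduces uncorrelated frame-to-frame intensity noise, which is one reason multiple images per lighting condition are captured.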
The n images are preferably stored in a database of the CPU; alternatively, the n images may be stored in an external database or memory space as desired. The n images captured in step 906 are aligned and preprocessed in step 908. Preferably, the n images captured in step 906 are registered to subpixel accuracy. The subpixel registration of the n images preferably uses known methods, including but not limited to binary, grayscale or geometric pattern matching of features such as traces, bumps or pads on one or more wafers.
In step 910, a reference intensity is calculated for each of the n images, more particularly for each image captured at a preset position of the selected reference region. Preferably, the calculation of the reference intensity of each of the n images helps to compensate for, and normalize, color variations at different locations and regions on the semiconductor wafer 12 (or across wafers). More preferably, it helps to account for, or compensate for, other surface variations at different locations and regions on the semiconductor wafer 12 (or across wafers).
The result of step 910 is n reference intensities, each corresponding to one of the n images. In step 912, statistical information is calculated for each pixel of each of the n images. The statistical information includes, but is not limited to, the mean, range, standard deviation, maximum intensity and minimum intensity of each pixel across the n images.
More specifically, the mean is the geometric mean of the reference intensities of each pixel across the n images. The geometric mean is an average that represents the central tendency of a set of data of n numbers: it is derived by multiplying the n numbers together and taking the n-th root of the product. The equation for the geometric mean is as follows:

μg = (A1 × A2 × ... × An)^(1/n)
The geometric mean differs from the arithmetic mean in that it prevents extreme values in the data set from unduly influencing the computed average intensity of each pixel across the n images.
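The geometric mean of n pixel intensities can be computed as follows; a minimal sketch, evaluated in log space for numerical stability (the log-space form is an implementation choice, not from the description):

```python
import math

def geometric_mean(values):
    """Geometric mean of n positive intensities: the n-th root of
    their product, computed in log space to avoid overflow."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Unlike the arithmetic mean of 2 and 8 (5.0), the geometric mean is 4.0,
# so a single extreme intensity has less influence on the result.
```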
In addition, the range of absolute intensities (hereinafter referred to as Ri) of each pixel across the n images is calculated. The Ri corresponding to each pixel is the difference between the maximum and minimum absolute intensities of that pixel across the n images.
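The per-pixel maximum, minimum and range Ri over the image stack can be sketched as follows (the nested-list image layout is an assumption):

```python
def pixel_range_stats(stack):
    """Per-pixel maximum, minimum and absolute-intensity range Ri
    across a stack of n aligned grayscale images."""
    height, width = len(stack[0]), len(stack[0][0])
    max_img = [[max(img[y][x] for img in stack) for x in range(width)]
               for y in range(height)]
    min_img = [[min(img[y][x] for img in stack) for x in range(width)]
               for y in range(height)]
    range_img = [[max_img[y][x] - min_img[y][x] for x in range(width)]
                 for y in range(height)]
    return max_img, min_img, range_img
```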
As previously described, the standard deviation of the intensity of each pixel across the n images of the first reference region captured in step 906 may also be calculated. More specifically, the standard deviation is preferably a geometric standard deviation, which describes how a set of data whose preferred average is the geometric mean is spread. The formula for the geometric standard deviation is as follows:

σg = exp( sqrt( Σi=1..n (ln(Ai / μg))² / n ) )

where μg is the geometric mean of the data set {A1, A2, ..., An}.
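The geometric standard deviation above can be sketched as follows; note that it is a dimensionless multiplicative factor (at least 1.0), unlike the additive arithmetic standard deviation:

```python
import math

def geometric_std(values):
    """Geometric standard deviation of positive intensities about
    their geometric mean mu_g; 1.0 means no spread at all."""
    n = len(values)
    mu_g = math.exp(sum(math.log(v) for v in values) / n)
    spread = sum(math.log(v / mu_g) ** 2 for v in values) / n
    return math.exp(math.sqrt(spread))
```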
In step 914, the captured n images are temporarily saved together with their corresponding information, such as their locations on the semiconductor wafer 12 or within the first reference region. The statistical information calculated in step 912 is preferably also temporarily saved, as described in step 914. Preferably, the data is stored in a database of the CPU. Alternatively, the data may be stored in another database or memory space as needed.
In step 916, it is determined whether more images of the selected reference region are needed. Step 916 is preferably performed automatically under software control, relying on the information obtained in steps 910 and 912. Alternatively, step 916 may be performed manually or controlled by techniques known in the art.
If it is determined in step 916 that more images of the selected reference region are needed, steps 904 through 916 are repeated, as many times as desired. When it is determined in step 916 that no more images of the first reference region are needed, step 918 determines whether steps 904 through 916 need to be repeated for the next of the predetermined number of reference regions (for the purposes of the present description, the second reference region). Step 918 is preferably performed using information obtained in at least one of steps 910, 912 and 916, and may be performed manually or controlled by techniques known in the art.
If it is determined in step 918 that images of the next reference region, for example the second reference region, need to be captured, a signal is generated for repeating steps 904 through 916. Steps 904 through 916 may be repeated as many times as necessary, preferably under software control or automatically.
When it is determined in step 918 that steps 904 through 916 do not need to be repeated, i.e., that images of a further reference region of the predetermined number of reference regions are not needed, a golden reference image (hereinafter referred to as the reference image) is then calculated in step 920.
The calculation of the reference image is preferably controlled by software and carried out by a series of programmed commands. The following steps are implemented to calculate the reference image. It will be appreciated by those skilled in the art that other steps or techniques may be complementary to the following steps in the performance of the reference image calculation process.
In step 922, pixels having a reference intensity greater than a predetermined limit are identified. In addition, pixels having an intensity range greater than a predetermined range are identified in step 922. The predetermined limit and range of step 922 may be selected and determined by software or manually. In step 924, pixels having a standard deviation of pixel intensity greater than a preset value are identified. The preset value of step 924 may likewise be selected by software or determined manually. If pixels with reference intensities outside the preset limit or range are identified in steps 922 and 924, the previously saved images, such as the images saved in step 914, are reloaded in step 926 for repeating one or more of steps 904 through 924.
Steps 922 through 926 facilitate the identification of images that contain pixels of particular intensities. More specifically, steps 922 through 926 enable the identification of images containing pixels whose reference intensities fall outside the predetermined limits and ranges, i.e., the identification of "undesirable" images. Eliminating such "undesirable" pixels from the reference image calculation helps to prevent them from affecting the final reference pixel values of the reference image.
Discarding the "undesirable" images facilitates the exclusion of defective data or images, thereby preventing defective data from influencing the generated reference image. In step 928, the images whose pixels are within the preset limits and ranges are consolidated.
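The filtering of steps 922 through 928 can be sketched as follows; the threshold names and the per-pixel (mean, range, std) tuples are illustrative assumptions, not from the description:

```python
def consolidate_images(images, max_mean, max_range, max_std):
    """Keep only images whose per-pixel statistics all fall within
    the preset limits; images containing out-of-range pixels are
    the 'undesirable' images and are discarded."""
    kept = []
    for image in images:  # each image: list of (mean, range, std) tuples
        if all(m <= max_mean and r <= max_range and s <= max_std
               for (m, r, s) in image):
            kept.append(image)
    return kept
```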
Preferably, the reference image generation process 900 generates the following image data:
(a) The normalized average intensity of each pixel of each consolidated image.
(b) The standard deviation of the intensity of each pixel of each consolidated image.
(c) The maximum and minimum intensity of each pixel of each consolidated image.
(d) The average reference intensity of each of the predetermined number of reference regions determined in step 902.
The images consolidated in step 928 represent the reference images. The reference images, together with their corresponding image data, are preferably saved in a database of the CPU. Additionally, the reference images and their corresponding image data may optionally be stored in an alternative database or memory space. It will be appreciated by those skilled in the art that steps 922 through 926 help to reduce the amount and size of memory space required to store the reference images and their corresponding data, thereby enabling the method 400 to be performed faster or more accurately.
The average intensity of each pixel is preferably normalized to 255 for display and visualization of the reference image. It will be appreciated by those skilled in the art that the average intensity of each pixel may be normalized to a selectable value for displaying and visualizing the reference image.
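The normalization described above can be sketched as a simple rescaling so that the brightest mean intensity maps to 255 (or another selectable value):

```python
def normalize_for_display(pixels, target=255.0):
    """Scale mean pixel intensities so that the brightest pixel
    maps to `target` for display of the reference image."""
    peak = max(max(row) for row in pixels)
    if peak == 0:
        return [[0.0 for _ in row] for row in pixels]
    return [[v * target / peak for v in row] for row in pixels]
```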
Steps 904 through 928 may be repeated a set number of times to cause at least one of the first image capture device 32, the second image capture device 34, and the review camera to capture a corresponding plurality of images. Additionally, steps 904 through 928 may be repeated to capture images with different illumination or illumination conditions, such as brightfield illumination, DHA illumination, DLA illumination, and thin line illumination, as desired. The repetition of steps 904 through 928 produces reference images for multiple illuminations or multiple illumination conditions and employs multiple image capture devices as needed.
As previously mentioned, generating reference images from multiple reference regions of the semiconductor wafer 12 and under multiple lighting conditions helps to ensure that variations in the quality of captured images accompanying variations in lighting conditions are accounted for and compensated. For example, capturing reference images at different reference regions of the semiconductor wafer 12 (i.e., different locations on the semiconductor wafer 12) preferably ensures that color variations between different regions of the semiconductor wafer 12 are accounted for and compensated.
Steps 904 through 928 are preferably performed and controlled by the CPU; preferably, at least one of steps 904 through 928 is performed or controlled by software programming. Additionally, at least one of steps 904 through 928 may be manually assisted, if desired. The reference images generated by the reference image generation process 900 are used for comparison with subsequently captured images of the semiconductor wafer 12, thereby enabling at least one of defect detection, classification, and review to be performed on the semiconductor wafer 12.
As previously mentioned, the present invention provides a method 400 for inspecting a semiconductor wafer 12 to perform at least one of defect detection, classification, and review on the semiconductor wafer.
In step 402 of the method 400, a semiconductor wafer 12 is loaded by the system 10 onto the wafer table 16 for inspection. The semiconductor wafer 12 is preferably removed from the wafer stack 20 and transferred to the wafer table 16 by the robotic wafer handler 18. Suction or vacuum is applied to the wafer table 16 to secure the semiconductor wafer 12 on the wafer table 16.
The semiconductor wafer 12 preferably contains a wafer identification number or bar code. The wafer identification number or bar code is recorded or marked on the surface of the semiconductor wafer 12, specifically on the edge of the surface of the semiconductor wafer 12. The wafer identification number or bar code is used to identify the semiconductor wafer 12 and ensure that the semiconductor wafer 12 is properly positioned on the wafer table 16.
In step 404, a wafer map of the semiconductor wafer 12 loaded on the wafer table 16 is obtained, for example downloaded from the programmable controller. Alternatively, the wafer map may be retrieved from an external database or processor. Further, those skilled in the art will be aware of methods or techniques for preparing and generating wafer maps for semiconductor wafers 12 loaded on a movable support table.
In step 406, one or more reference locations on the wafer map are captured or determined, and at least one of the wafer X, Y offsets and the theta (rotation) compensation is calculated using techniques well known to those skilled in the art.
In a subsequent step 408, a semiconductor wafer scan motion path and a plurality of image capture positions are calculated and determined. The wafer map obtained in step 404 preferably facilitates the calculation of the wafer scan motion path and the plurality of image capture positions. Preferably, the wafer scan motion path is calculated based on a number of known parameters, including but not limited to rotation compensation, wafer size, wafer die size, inspection area, wafer scan speed, and encoder positions. Each of the plurality of image capture positions reflects, or corresponds to, a position on the semiconductor wafer 12 at which an image is to be captured. Preferably, each of the plurality of image capture positions may be varied using techniques well known to those skilled in the art.
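The calculation of capture positions along a scan motion path can be sketched as a serpentine grid over the wafer's bounding rectangle; a simplified illustration that ignores rotation compensation, die size, scan speed and encoder positions, all of which the description folds into the calculation:

```python
def capture_positions(width, height, fov_w, fov_h):
    """Serpentine (boustrophedon) grid of image capture positions
    covering a width x height area with a fov_w x fov_h field of view."""
    positions = []
    for row, y in enumerate(range(0, height, fov_h)):
        xs = list(range(0, width, fov_w))
        if row % 2 == 1:  # reverse alternate rows to minimize travel
            xs.reverse()
        positions.extend((x, y) for x in xs)
    return positions
```

Reversing alternate rows keeps the stage moving continuously, which suits the system's capture of images while the wafer is in motion.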
Preferably, the system 10 performs steps 404 through 408 automatically, more particularly via the programmable controller of the system 10. Further, any of steps 404 through 408 may alternatively be performed or assisted by an optional processor.
In step 410, the programmable controller of the system 10 determines the availability of a golden reference (hereinafter referred to as the reference image). If no reference image is available, a reference image may be obtained in step 412 by the reference image generation process 900 described above.
Preferably, a reference image is first obtained or generated before performing a two-dimensional (2D) wafer scanning process 500, as described in step 414. A process flow diagram of the preferred two-dimensional (2D) wafer scanning process 500 is shown in fig. 19.
A preferred two-dimensional (2D) wafer scanning process 500
The two-dimensional wafer scanning process 500 acquires bright field images and dark field images through the first image acquisition device 32 and the second image acquisition device 34.
In step 502 of the two-dimensional wafer scanning process 500, the first image capture device 32 is exposed. In step 504, a first illumination is provided. For example, the first illumination may be brightfield illumination provided by the brightfield illuminator 26, DHA illumination provided by the high angle darkfield illuminator 30, or DLA illumination provided by the low angle darkfield illuminator 28. The selection of the first illumination provided in step 504 preferably depends on an illumination configurator (not shown). Preferably, the illumination configurator is a component of the system 10 and is electrically coupled to the illuminators (28, 30, 52, 64 and 66) of the system 10; alternatively, the illumination configurator is a component of the CPU.
The illumination provided to the image capture devices 32 and 34 may be any combination of the brightfield illuminator 26, the DHA illuminator 30, and the DLA illuminator 28. Several possible combinations of the first illumination provided to the image capture device 32 and the second illumination provided to the image capture device 34 are shown in the table of fig. 20. If the first image capture device 32 and the second image capture device 34 use exactly the same illumination, such a configuration yields the highest throughput of all possible configurations.
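The configuration table described above can be modeled as a simple lookup. The sketch below is illustrative only: the configuration numbers and illumination pairings are assumptions standing in for the actual table of fig. 20, which is not reproduced here.

```python
# Hypothetical illumination configuration table in the spirit of fig. 20:
# each configuration pairs a first illumination (for image capture device 32)
# with a second illumination (for image capture device 34). The numbering and
# pairings below are illustrative assumptions, not the patent's actual table.
ILLUMINATION_CONFIGS = {
    1: ("brightfield", "DHA"),          # configuration used in the description
    2: ("brightfield", "DLA"),
    3: ("DHA", "DLA"),
    4: ("brightfield", "brightfield"),  # identical illumination: highest throughput
}

def select_illumination(config_id):
    """Return (first_illumination, second_illumination) for a configuration."""
    return ILLUMINATION_CONFIGS[config_id]
```

With this model, the illumination configurator's role reduces to selecting one row of the table before the scan begins.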
For the purposes of the present description, configuration 1 is selected by the illumination configurator as shown in the table of fig. 20; accordingly, the first illumination is brightfield illumination provided by the brightfield illuminator 26.
Preferably, steps 502 and 504 are performed simultaneously. Execution of steps 502 and 504 causes the first image capture device 32 to capture a first image, as shown in fig. 22a. In step 506, the first image captured by the first image capture device 32 is converted into an image signal, preferably transmitted to the CPU through a data transfer process, and stored in a database or storage memory system.
In step 508, the second image capture device 34 is exposed. In step 510, a second illumination is provided. As with the first illumination, the selection of the second illumination preferably depends on the illumination configurator. For the purposes of the present description, configuration 1 is selected as shown in the table of fig. 20; accordingly, the second illumination is DHA illumination provided by the high angle darkfield illuminator 30. However, those skilled in the art will appreciate that the first illumination and the second illumination may alternatively be other illuminations as desired, for example the differently configured illuminations within the table shown in fig. 20.
Preferably, steps 508 and 510 are performed simultaneously. Preferably, step 506 is performed concurrently with steps 508 and 510. Steps 508 and 510 cause the second image capture device 34 to capture a second image, as shown in fig. 22b. In step 512, the second image captured by the second image capture device 34 is converted into an image signal, preferably transmitted to the CPU through a data transfer process, and stored in a database or storage memory system.
FIG. 21 is a simplified diagram illustrating the exposure of the first image capture device 32 with the first illumination, the exposure of the second image capture device 34 with the second illumination, and the data transfer. Steps 502 through 512 may be repeated any number of times to capture a plurality of first and second images of the semiconductor wafer 12. More specifically, steps 502 through 512 are preferably repeated to capture an image with the first illumination and an image with the second illumination at each of the plurality of image capture locations of the semiconductor wafer 12 along the wafer scan motion path calculated in step 408.
As described above, each of the first and second images is converted into an image signal, transmitted to the programmable controller, and stored in a database or storage memory system. Each of steps 502 through 512 is performed while the semiconductor wafer 12 is in motion; that is, the first and second images are captured as the semiconductor wafer 12 moves along the wafer scan motion path. Accordingly, those skilled in the art will appreciate that between steps 502 and 504 (preferably occurring simultaneously) and steps 508 and 510 (preferably also occurring simultaneously), the semiconductor wafer 12 is displaced along the wafer scan motion path by a predetermined distance that depends on a number of factors, including, but not limited to, the speed of displacement of the semiconductor wafer 12 along the wafer scan motion path and the time required for any of steps 502 through 512. The predetermined distance may be controlled or varied as desired, for example by the CPU. The control and variation of the predetermined distance may be effected by software or, conveniently, by manual input.
Accordingly, the first image has a predetermined image offset relative to the second image when the two images are superimposed or compared. Fig. 22c shows a combined view of the first and second images, illustrating the image offset that results from capturing the first and second images while the semiconductor wafer 12 is in motion. The predetermined image offset depends on several factors, including, but not limited to, the speed of displacement of the semiconductor wafer 12 along the wafer scan motion path and the time required for any of steps 502 through 512. The control and variation of the predetermined image offset may be effected by software or, conveniently, by manual input.
In step 514, XY encoder values are obtained, preferably at each of steps 504 and 510. Preferably, the XY encoder values represent the position (X-Y displacement) of the semiconductor wafer 12 along the wafer scan motion path. The XY encoder values are used in step 516 to calculate a coarse image offset between the first image and the second image (i.e., the offset of the second image relative to the first image). A fine image offset is then calculated by sub-pixel alignment using a pattern matching technique. The final offset is obtained by applying a preset mathematical formula to the coarse and fine image offsets. The preset mathematical formula may be adjusted as needed using techniques well known to those skilled in the art.
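The coarse-plus-fine offset calculation of steps 514 and 516 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent only states that a "preset mathematical formula" combines the two offsets, so the simple sum used here, and all parameter names, are assumptions.

```python
# Hypothetical sketch of the offset calculation in steps 514-516: a coarse
# offset derived from the XY encoder values sampled at each exposure, plus a
# fine sub-pixel residual found by pattern matching around that estimate.

def coarse_offset(encoder_xy_first, encoder_xy_second, pixels_per_count):
    """Coarse image offset (pixels) between the first and second exposures,
    derived from the XY encoder positions recorded in steps 504 and 510."""
    dx = (encoder_xy_second[0] - encoder_xy_first[0]) * pixels_per_count
    dy = (encoder_xy_second[1] - encoder_xy_first[1]) * pixels_per_count
    return (dx, dy)

def final_offset(coarse, fine_residual):
    """Final offset: the coarse encoder-derived estimate plus the sub-pixel
    residual from pattern matching (an assumed stand-in for the patent's
    unspecified 'preset mathematical formula')."""
    return (coarse[0] + fine_residual[0], coarse[1] + fine_residual[1])
```

For example, a 10-count X displacement at 0.5 pixels per count gives a coarse offset of 5 pixels, which pattern matching might refine to 5.25 pixels.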
The two-dimensional wafer scanning process 500 performed in step 414 of the method 400 results in the capture of a plurality of images of the semiconductor wafer 12 at the calculated image capture locations along the wafer scan motion path.
In step 416 of the method 400, a preferred two-dimensional image processing process 600 is performed for at least one of identifying, detecting, classifying, consolidating, and storing defects on the semiconductor wafer 12. FIG. 23 depicts a process flow diagram of the preferred two-dimensional image processing process 600.
Preferred two-dimensional image processing procedure 600
The two-dimensional image processing process 600 facilitates the processing of the images captured during the two-dimensional wafer scanning process 500. In addition, the two-dimensional image processing process 600 facilitates at least one of the identification, detection, classification, consolidation, and storage of defects on the semiconductor wafer 12.
In step 602 of the two-dimensional image processing process 600, a first working image is selected and loaded into a memory workspace. The first working image is selected from the plurality of first and second images captured and saved during the two-dimensional wafer scanning process 500. For the purposes of the present description, the first working image represents a first image captured by the first image capture device 32 during the two-dimensional wafer scanning process 500.
In step 604, sub-pixel alignment of the first working image is performed. The sub-pixel alignment is performed using one or more pattern matching techniques, for example one of binary, grayscale, or geometric image matching. Once the image is aligned, a reference intensity is calculated from one or more preset regions of interest of the image, as shown in step 606. Steps 604 and 606 may be collectively referred to as preprocessing of the first working image. It will be readily appreciated that the preprocessing is not limited to the above steps and may include other steps as desired.
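One common pattern-matching approach to the alignment of step 604 is FFT-based phase correlation; the patent does not mandate this particular method, so the sketch below is an assumption-labeled illustration at integer-pixel precision (a sub-pixel stage would interpolate around the correlation peak).

```python
import numpy as np

def align_offset(image, reference):
    """Estimate the (row, col) shift to apply to `image` (e.g. via np.roll)
    so that it aligns with `reference`, using phase correlation. This is one
    possible pattern-matching technique, not the patent's stated method."""
    # Cross-power spectrum, normalized to keep only phase information.
    f = np.fft.fft2(reference) * np.conj(np.fft.fft2(image))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12))
    # The correlation peak location encodes the translation.
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shifts = np.array(peak, dtype=float)
    # Wrap peaks in the upper half of the spectrum to negative shifts.
    wrap = shifts > shape // 2
    shifts[wrap] -= shape[wrap]
    return tuple(shifts)
```

For a pure circular shift the correlation surface is a sharp delta, which makes the peak easy to localize even before sub-pixel refinement.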
In a subsequent step 608, a first golden reference or reference image is selected. The first reference image selected in step 608 corresponds to, or matches, the first working image. Preferably, the first reference image is selected from a database of reference images generated by the preferred reference image creation process 900 in step 412 of the method 400. The preferred reference image creation process 900, shown in fig. 18, is described in detail elsewhere herein.
In step 610, a data value is calculated for each pixel of the first working image. In step 612, the calculated data value of each pixel of the first working image is compared against preset threshold and offset values or other factors.
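The per-pixel comparison of steps 610 and 612 can be sketched as a simple difference-threshold test against the reference image. The threshold and offset values here are illustrative assumptions; the patent leaves the exact factors configurable.

```python
import numpy as np

def defect_mask(working, reference, threshold=30, offset=0):
    """Flag pixels whose value deviates from the corresponding reference
    pixel by more than a preset threshold (plus an optional offset).
    Values are illustrative; the patent's factors are user-configurable."""
    diff = np.abs(working.astype(np.int32) - reference.astype(np.int32))
    return diff > (threshold + offset)
```

The resulting boolean mask marks candidate defect pixels, which downstream steps can group into discrete defects for classification.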
In step 614, the first working image is then matched against, or evaluated with, the first reference image selected in step 608. The matching or evaluation of the first working image with the first reference image facilitates the detection and identification of defects on the semiconductor wafer 12. Preferably, the CPU is programmed for efficient automated matching of the first working image with the first reference image. The programmable controller preferably executes a series of computing instructions or algorithms for matching the first working image with the first reference image to thereby detect or identify defects on the semiconductor wafer 12.
The presence of one or more defects is determined in step 616 of the two-dimensional image processing process 600. If more than one defect is found or identified in step 616, the algorithm sorts the defects from largest to smallest based on one or more of area, length, width, variance, compactness, fill, edge strength, and other criteria. Further, the algorithm selects only the defects meeting user-specified criteria for the calculation of defect regions of interest (DROIs). If a defect (or defects) is found or identified in step 616, the DROIs of the semiconductor wafer 12 are calculated in step 618. Preferably, the DROIs are dynamically calculated by the CPU in step 618. The CPU is preferably programmed (e.g., includes or embodies a series of computing instructions or software) for the calculation of the DROIs.
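The sorting, filtering, and DROI calculation described above can be sketched as below. The property keys, the area-only filter, and the fixed bounding-box margin are illustrative assumptions; the patent allows any combination of the listed criteria.

```python
# Hypothetical sketch of step 616's defect sorting/filtering and step 618's
# DROI calculation. Defects are dicts of measured properties; the keys,
# thresholds, and margin used here are assumptions for illustration.

def sort_and_filter_defects(defects, min_area=4.0, limit=None):
    """Sort candidate defects from largest to smallest and keep only those
    meeting a user-specified criterion (here: minimum area)."""
    kept = [d for d in defects if d["area"] >= min_area]
    kept.sort(key=lambda d: d["area"], reverse=True)
    return kept[:limit] if limit else kept

def droi(defect, margin=5):
    """Defect region of interest: the defect's bounding box (x0, y0, x1, y1)
    grown by a margin, for re-examination in the second working image."""
    x0, y0, x1, y1 = defect["bbox"]
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)
```

Restricting later processing to these DROIs, rather than the full image, is what saves processing bandwidth in the subsequent steps.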
In step 620, the corresponding DROIs of a second working image are inspected. More specifically, the second working image is a second image captured by the second image capture device 34 during the two-dimensional wafer scanning process 500. That is, after sub-pixel alignment of the second working image is performed, the DROIs of the second working image (which corresponds to the first working image) are inspected in step 620. The inspection of the DROIs of the second working image preferably facilitates confirmation of the defects detected in step 616. More preferably, step 620 facilitates classification of the defects detected in step 616.
The system 10 processes only the DROIs of the second working image rather than the entire image. If no defect is found in step 616, the method instead continues by skipping steps 618 and 620. This further reduces the amount of resources or processing bandwidth required for the second working image. It can be readily appreciated that such an intelligent processing sequence is dynamically determined based on the results of the preceding steps, which is beneficial for improving the throughput of the system 10, i.e., the number of wafers inspected per hour.
In step 622, the detected defects, and more particularly their locations or positions and their classifications, are saved. Preferably, they are saved to a database of the CPU; alternatively, they may be saved to an optional database or memory storage space.
Steps 602 through 622 may be repeated or cycled for each image captured during the two-dimensional wafer scanning process 500; each such image is subsequently loaded into a memory workspace and processed to facilitate the detection of defects that may be present on the semiconductor wafer 12. Steps 602 through 622, and their repetition, facilitate at least one of the detection, identification, and classification of defects that may be present at any of the plurality of image capture locations on the semiconductor wafer 12 along the wafer scan motion path.
In step 624, the defects detected by the two-dimensional image processing process 600, together with their locations and classifications, are consolidated and saved, preferably to a database of the CPU; alternatively, they may be consolidated and saved in another database or memory storage space.
The two-dimensional image processing process 600 is preferably an automated process. Preferably, the CPU is programmed with a series of instructions or a computer program for automatically performing the two-dimensional image processing process 600. In addition, the two-dimensional image processing process 600 may incorporate manual input as desired for convenience.
Completion of the two-dimensional image processing process 600 in step 416 of the method 400 results in the consolidation and storage of the defects detected using brightfield illumination, DHA illumination, and DLA illumination, together with their locations and classifications.
In a subsequent step 418 of the method 400, a first preferred three-dimensional (3D) wafer scanning process 700 is performed. Preferably, the first 3D wafer scanning process 700 captures 3D profile images of the semiconductor wafer 12 to facilitate the subsequent formation of a 3D profile of the semiconductor wafer 12. The semiconductor wafer 12 is moved along the calculated wafer scan motion path to capture one or more 3D images at any of the plurality of image capture locations on the semiconductor wafer 12 along the wafer scan motion path calculated in step 408.
Preferred 3D wafer scanning Process 700
In step 702 of the 3D wafer scanning process 700, the thin line illuminator 52 provides or emits thin line illumination, which is directed to the inspection position via the mirror arrangement 54 in step 704.
In a subsequent step 706, the thin line illumination is reflected by the semiconductor wafer 12, or the portion thereof positioned at the inspection position. In step 708, the thin line illumination reflected from the semiconductor wafer 12 is transmitted through the infinity-corrected 3D profile objective lens 58; the transmission of the thin line illumination through the infinity-corrected 3D profile objective lens 58 collimates the thin line illumination.
In step 710, the collimated thin line illumination then passes through the tube lens 60 and, as described in step 712, enters the 3D profile camera 56. The tube lens 60 preferably focuses the collimated thin line illumination onto the image capture plane of the 3D profile camera 56. In step 714, the thin line illumination focused on the image capture plane enables the capture of a first 3D profile image of the semiconductor wafer 12. The collimation of the thin line illumination between the 3D profile objective lens 58 and the tube lens 60 facilitates the introduction of optical components or accessories between them, and facilitates flexible positioning and reconfiguration of the 3D profile camera 56.
As previously described, the thin line illumination is provided by a laser or broadband fiber optic illumination source. In addition, the thin line illumination is preferably directed at the inspection position at a specific angle to the horizontal plane in which the semiconductor wafer 12 is disposed. The angle at which the thin line illumination is directed at the inspection position may be varied as desired using techniques well known to those skilled in the art. It will also be appreciated by those skilled in the art that the wavelength of the thin line illumination may be selected and varied as desired. Preferably, a broadband wavelength is selected for the thin line illumination to enhance at least one of defect detection, verification, and classification.
In step 716, the first 3D profile image is converted into an image signal and transmitted to the CPU. In step 718, the first 3D profile image is processed by the CPU for at least one of 3D height measurement, coplanarity measurement, and defect detection and classification.
Preferably, steps 702 through 718 may be repeated a plurality of times to capture a corresponding plurality of 3D profile images and transmit them to the CPU. Steps 702 through 718 may be repeated at selected image capture locations along the wafer scan motion path, or across the entire wafer.
Preferably, the first 3D wafer scanning process 700 increases the accuracy with which the preferred method 300 inspects a semiconductor wafer. More specifically, the first 3D wafer scanning process 700 improves the accuracy of defect detection performed by the method 300. Such inspection provides detailed 3D metrology information, such as coplanarity and the heights of three-dimensional structures such as solder balls, gold bumps, and individual dies, as well as warpage of the entire wafer.
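Height measurement from a thin illumination line is conventionally done by triangulation: a raised surface displaces the line laterally on the sensor by an amount proportional to the height. The geometry assumed below (camera viewing normally, illumination arriving at a known angle from the vertical) and all parameter names are illustrative assumptions; the patent leaves the angle and optics configurable.

```python
import math

def height_from_line_shift(pixel_shift, pixel_size_um, incidence_deg):
    """Triangulation height (um) from the lateral shift of the thin
    illumination line on the sensor. Assumes the camera views the wafer
    normally and the thin line illumination arrives at `incidence_deg`
    from the vertical, so a surface raised by h displaces the line by
    h * tan(incidence_deg). Geometry is an assumption for illustration."""
    shift_um = pixel_shift * pixel_size_um
    return shift_um / math.tan(math.radians(incidence_deg))
```

For instance, with 5 um pixels and 45-degree incidence, a 10-pixel line shift corresponds to a 50 um feature height; repeating this per column of the line image yields a height profile, and per scan position a full 3D map.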
Preferably, the results of step 718, and of its repetitions, together with the processed 3D profile images, are stored in a database of the CPU. Alternatively, they may be stored in an optional database or memory storage space as needed.
A second preferred three-dimensional (3D) wafer scanning process 750 may also be used in place of the first preferred 3D wafer scanning process 700. The optical path of the second preferred 3D wafer scanning process 750 is shown in fig. 25, and a corresponding process flow diagram is shown in fig. 26.
In step 752 of the second 3D wafer scanning process 750, the thin line illuminator 52 provides thin line illumination. In step 754, the thin line illumination is directed to the inspection position by a reflector assembly 80. The reflector assembly 80 is, for example, a well-known prism assembly or a device comprising two pairs of mirrors or prisms.
In step 756, the thin line illumination is reflected by the semiconductor wafer 12. The thin line illumination may be reflected in different directions depending on the surface profile of the semiconductor wafer 12. For example, variations in the structure and geometry of the semiconductor wafer 12 can cause the thin line illumination to be reflected by the semiconductor wafer 12 in different directions (i.e., dispersed).
Depending on the surface profile of the semiconductor wafer 12, the thin line illumination reflected from the semiconductor wafer 12 may be dispersed in different directions. Such dispersion makes it difficult to obtain an accurate measurement of the surface profile of the semiconductor wafer 12; in other words, it makes it difficult to capture accurate 3D profile images of the semiconductor wafer 12. This is because the dispersion of the thin line illumination in multiple directions can result in an improper reduction or increase of the thin line illumination entering the 3D profile camera 56, resulting in the capture of images that are too dark or too bright, respectively. It is difficult to make accurate measurements from images that are too dark or too bright; accordingly, it is difficult to obtain an accurate surface profile of the semiconductor wafer 12 from such images.
The reflector assembly 80 receives the thin line illumination reflected from the semiconductor wafer 12; more particularly, the reflector assembly 80 is configured to collect the thin line illumination reflected in a plurality of directions. Preferably, the reflector assembly 80 comprises a first pair of mirrors or prisms 82 and a second pair of mirrors or prisms 84. In step 758, the reflected thin line illumination is transmitted along two optical paths: a first ray path directed by the first pair of mirrors or prisms 82, and a second ray path directed by the second pair of mirrors or prisms 84. Those skilled in the art will appreciate that the reflector assembly 80 may be configured as desired to direct the collected reflected thin line illumination along a different number of optical paths.
In step 760, the thin line illumination along each of the first and second ray paths is transmitted through the 3D profile objective lens 58, and the two thin line illuminations passing through the 3D profile objective lens 58 are collimated. The first pair of mirrors or prisms 82 and the second pair of mirrors or prisms 84 are preferably symmetrically disposed.
In step 762, the two collimated thin line illuminations pass through the tube lens 60. In step 764, the two thin line illuminations then enter the 3D profile camera 56. The tube lens 60 facilitates the focusing of the two thin line illuminations onto the image capture plane of the 3D profile camera 56. In step 766, two 3D profile images of the semiconductor wafer 12 may be captured from the two thin line illuminations focused on the image capture plane of the 3D profile camera 56.
Without the reflector assembly 80, the dispersion of the thin line illumination reflected from the semiconductor wafer 12 in multiple directions can result in an improper reduction or increase of the thin line illumination entering the 3D profile camera 56, resulting in captured images that are too dark or too bright, respectively. Such images are typically discarded, since images that are too dark or too bright can yield inaccurate 3D profile measurements of the semiconductor wafer 12, i.e., inaccurate measurements of its surface profile.
The system 10 performing the second 3D wafer scanning process 750 can capture two 3D profile views of the semiconductor wafer 12 with a single 3D image capture device 56. The two 3D profile views enable measurement or inspection of the wafer with improved accuracy. In addition, illumination reflected in different directions from the semiconductor wafer 12 may be redirected for collection by the 3D image capture device 56 using the two symmetrically disposed pairs of mirrors or prisms 82, 84. Those skilled in the art will appreciate that the reflector assembly 80 may be configured to direct illumination reflected from the semiconductor wafer 12 in multiple directions (e.g., two, three, four, or five directions) for collection by the 3D image capture device 56 from a single illumination event.
To obtain two views of the same profile of a wafer, existing equipment employs multiple image capture devices, which are expensive, bulky, and complex. Due to discontinuities in the wafer profile, the reflected light does not consistently return along the preset optical paths into the multiple image capture devices. That is, variations in the structure and geometry of the surface of the semiconductor wafer 12 cause illumination dispersion, which often results in inaccurate single-view image capture of the semiconductor wafer 12.
To overcome variations in the intensity of light reflected from the semiconductor wafer 12, illumination reflected from the semiconductor wafer 12 in different directions is collected by the 3D image capture device 56 of the system 10. This helps to improve the accuracy of 3D profile measurement and inspection of the semiconductor wafer 12. The use of a single camera, more particularly the single 3D image capture device 56, also improves the cost and space efficiency of the system 10. Still further, the use of a single objective lens and a single tube lens (in this case, the objective lens 58 and the tube lens 60) for capturing multiple views of the semiconductor wafer 12 facilitates alignment and improves its accuracy.
After completion of the first preferred 3D wafer scanning process 700 or the second preferred 3D wafer scanning process 750, all defects detected on the semiconductor wafer 12, together with their locations and classifications resulting from the performance of steps 416 and 418, are preferably consolidated. The consolidation of the defects and their locations and classifications facilitates the calculation of a review scan motion path, as described in step 420. Preferably, the review scan motion path is calculated based on the locations of the defects detected on the semiconductor wafer 12 along the wafer scan motion path. In addition, defect image capture locations along the review scan motion path are calculated or determined in step 420. The defect image capture locations preferably correspond to the locations on the semiconductor wafer 12 at which defects were found in steps 416 and 418 (e.g., the DROIs of the semiconductor wafer 12).
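One simple way to turn the consolidated defect locations into a review scan motion path is a greedy nearest-neighbour ordering, which keeps stage travel short between defect image capture locations. The patent does not specify a path-planning method, so this sketch and its parameter names are assumptions.

```python
import math

def review_scan_path(defect_locations, start=(0.0, 0.0)):
    """Order defect image capture locations (x, y) into a review scan motion
    path by repeatedly visiting the nearest remaining defect. A greedy
    heuristic assumed for illustration; the patent leaves the method open."""
    remaining = list(defect_locations)
    path, current = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda p: math.hypot(p[0] - current[0], p[1] - current[1]))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path
```

The resulting ordered list can then be traversed by the XY table in step 422, with a review image captured at each location.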
In step 422 of the preferred method 400, a preferred review process 800 is performed; the review process 800 enables review of the defects detected in steps 416 and 418. Preferably, the review process 800 comprises at least a first mode 800a, a second mode 800b, and a third mode 800c. A process flow diagram of the preferred review process 800 is shown in fig. 27.
The preferred review Process 800
As previously mentioned, the review process 800 preferably includes three review modes, namely a first mode 800a, a second mode 800b, and a third mode 800c. One review mode (i.e., one of the first mode 800a, the second mode 800b, and the third mode 800c) is selected in step 802.
First mode 800a of review Process 800
In step 804 of the first mode 800a of the review process 800, the first and second images of all defects detected in the 2D image processing process 600, as described in step 416 of the method 400, are consolidated and saved.
In step 806, the consolidated and saved first and second images of the defects detected on the semiconductor wafer 12 are uploaded or transferred to an external memory or server for offline review.
In step 808, the semiconductor wafer 12 (i.e., the current semiconductor wafer 12 on the wafer table 16) is unloaded, and a second wafer is loaded by the robotic arm from the wafer stack 20 onto the wafer table 16. Each of steps 804 through 808 is then repeated for the second wafer.
Steps 804 through 810 are then repeated a plurality of times, depending on the number of wafers in the wafer stack 20. The repetition of steps 804 through 810 consolidates and saves the first and second images obtained for each wafer of the wafer stack 20, and the first and second images are uploaded to an external memory or server for offline review. Those skilled in the art will appreciate that the first mode 800a enables steps 804 through 810 to be performed automatically, without user intervention and without affecting throughput. This mode allows production to continue while the user performs an offline review of the saved images, thereby increasing the utilization and throughput of the system 10.
Second mode 800b of the review Process 800
In step 820 of the second mode 800b of the review process 800, a plurality of review images are captured at each defect image capture location calculated in step 420. More specifically, one review brightfield image and one review darkfield image are captured at each defect image capture location calculated in step 420 by the review image capture device 60 shown in fig. 14. That is, a review brightfield image captured with the brightfield illuminator 62 and a review darkfield image captured with the darkfield illuminator 64 are obtained for each defect detected in step 416 by the 2D image processing process 600. Each of the plurality of review images captured by the review image capture device 60 is preferably a color image.
It will be understood by those skilled in the art that the intensities of the brightfield illumination and the darkfield illumination used for capturing the review brightfield images and the review darkfield images, respectively, may be determined and varied as desired. For example, the illumination intensities used to capture the plurality of review images may be selected based on the types of wafer defects that a user of the system 10 wishes to review, or based on the material of the semiconductor wafer 12. The plurality of review images may also be captured with multiple mixes and multiple intensity levels of brightfield and darkfield illumination, as set by the user.
In step 822, the plurality of review images captured at each defect image capture location calculated in step 420 are consolidated and saved. The consolidated and saved review images are then uploaded to an external memory or server for offline review in step 824.
In step 826, the semiconductor wafer 12 (i.e., the current semiconductor wafer 12 on the wafer table 16) is unloaded, and a second semiconductor wafer 12 is loaded by the robot 18 from the wafer stack 20 onto the wafer table 16. In step 828, each of steps 402 through 422 is repeated for the second semiconductor wafer 12. The consolidated and saved first and second images of the defects detected on the second semiconductor wafer 12 are uploaded to an external memory or server for offline review.
In the second mode 800b of the review process 800, steps 820 through 828 may be repeated a plurality of times, depending on the number of semiconductor wafers 12 in the wafer stack 20. The repetition of steps 820 through 828 consolidates and saves the review brightfield images and review darkfield images captured for each wafer of the wafer stack 20, which are uploaded to an external memory or server for offline review.
This mode allows production to continue while the user performs an offline review of the saved images. It allows multiple images of each defect to be captured under multiple mixed illuminations for offline review without affecting machine utilization or throughput.
Third mode 800c of the review Process 800
The third mode 800c of the review process 800 preferably utilizes manual input, more preferably input or commands from the user. In step 840, a first review brightfield image and a first review darkfield image are captured at a first defect image capture location. In step 842, the user manually inspects or reviews the captured first review brightfield image and first review darkfield image. Preferably, the first review brightfield image and the first review darkfield image are displayed on a display screen or monitor for easy visual inspection by the user. The user can inspect the defect with different combinations of illumination from the brightfield illuminator and the darkfield illuminator.
In step 844, the user may accept, reject, or reclassify the defect at the first defect image capture location. Steps 840 through 844 are repeated for each defect image capture location calculated in step 420.
After steps 840 through 844 have been repeated for each defect image capture location, the confirmed defects and their classifications are consolidated and saved, as described in step 846. The consolidated and saved confirmed defects and their classifications are then uploaded or transferred to an external memory or server in step 848. In the third mode 800c of the review process 800, the semiconductor wafer 12 (i.e., the current semiconductor wafer 12 on the wafer table 16) is unloaded only after step 846 is completed. Accordingly, those skilled in the art will appreciate that the third mode 800c of the review process 800 requires the user to be present to review each wafer online.
In step 848 of the review process 800, the semiconductor wafer 12 (i.e., the current semiconductor wafer 12 on the wafer table 16) is unloaded, and a second semiconductor wafer 12 is loaded by the robot 18 from the wafer stack 20 onto the wafer table 16. Steps 840 through 848 are repeated a number of times, depending on the number of semiconductor wafers 12 to be inspected (i.e., the number of semiconductor wafers 12 in the wafer stack 20).
Those skilled in the art will appreciate from the foregoing description that the first and second modes 800a, 800b of the review process effect a relatively indiscriminate uploading of the saved, stored and captured images to an external memory or server. The first mode 800a and the second mode 800b represent automated review processes. The user can access the external memory or server to review the captured images offline as needed. The first mode 800a and the second mode 800b may proceed as a sequential review of each wafer on the wafer stack 20, or as a sequential image capture, sort, and upload or store.
Those skilled in the art will understand that the present invention describes three review modes, namely the first mode 800a, the second mode 800b, and the third mode 800c. Other review processes, or different permutations or combinations of the first mode 800a, the second mode 800b and the third mode 800c, may be applied by those skilled in the art. In addition, those skilled in the art will understand that each step of the first mode 800a, the second mode 800b and the third mode 800c may be modified or varied by techniques known in the art without departing from the scope of the present invention.
After the review process 800 is performed, the verified defects and their locations and classifications are sorted and stored in step 426. The verified defects and their locations and classifications may optionally be sorted and stored in a database, an external database, or another memory space. The wafer map is also uploaded in step 426.
As previously described, each of the captured brightfield images, DHA images and DLA images is compared with a corresponding golden reference or reference image for identifying or detecting defects on the semiconductor wafer 12. The present invention provides a reference image generation process 900 (shown in Fig. 18) that facilitates generating or producing the reference images. Those skilled in the art will appreciate that the reference image generation process 900 may also be used as a training process.
As previously described, the 2D brightfield images, 2D DHA images, and 2D DLA images captured during the 2D wafer scanning process 500 are preferably compared with their corresponding reference images generated by the reference image generation process 900.
The 2D image processing process 600 has been described as a preferred comparison process; however, for greater clarity, a summary of the matching between the working image and the reference image is provided below. First, sub-pixel alignment of the working image is performed using known reference features including, but not limited to, templates, traces, bumps, pads, and other unique patterns. Second, the reference intensity of the semiconductor wafer 12 at the image capture position at which the working image was captured is calculated. An appropriate reference image is then selected for matching with the working image. The appropriate reference image is preferably selected from a plurality of reference images generated by the reference image generation process 900.
The CPU is preferably programmed for selecting and extracting an appropriate reference image for matching with the working image. Preferably, the calculation and storage, by the reference image generation process 900, of the normal or geometric mean, standard deviation, and maximum and minimum intensities of each reference image improves the speed and accuracy of extracting the appropriate reference image against which the working image is to be compared.
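The selection step described above can be sketched in a few lines. This is an illustrative Python fragment, not part of the disclosed system; the function name, the `(image, statistics)` pairing, and the mean-intensity criterion are all assumptions.

```python
import numpy as np

def select_reference(working, references):
    """Pick the stored reference image whose mean intensity is closest
    to the working image's reference intensity.

    `references` is a hypothetical list of (image, statistics) pairs
    produced in advance by a reference image generation step.
    """
    w_mean = float(np.asarray(working).mean())
    return min(references,
               key=lambda ref: abs(float(ref[0].mean()) - w_mean))
```

Precomputing each reference image's mean (as the process 900 does for its stored statistics) would avoid recomputing `ref[0].mean()` on every lookup.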
Corresponding data values for each pixel of the working image are then calculated. The data values include, for example, a normal or geometric mean, a standard deviation, and maximum and minimum intensities for each pixel of the working image. Each data value of the working image is then referenced or checked against the corresponding data value of each pixel of the selected reference image.
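A minimal sketch of computing the per-pixel statistics named above (normal and geometric mean, standard deviation, maximum and minimum intensity) over a stack of aligned images; the NumPy usage and the dictionary layout are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def pixel_statistics(image_stack):
    """Per-pixel statistics over an (N, H, W) stack of aligned
    grayscale images; statistic names follow the description above."""
    stack = np.asarray(image_stack, dtype=np.float64)
    return {
        "mean": stack.mean(axis=0),      # normal (arithmetic) mean
        "geo_mean": np.exp(np.log(stack + 1e-9).mean(axis=0)),  # geometric mean
        "std": stack.std(axis=0),        # standard deviation
        "max": stack.max(axis=0),        # maximum intensity
        "min": stack.min(axis=0),        # minimum intensity
    }
```

The small epsilon guards the logarithm against zero-intensity pixels when forming the geometric mean.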
Comparison of the data values between the pixels of the working image and the pixels of the reference image enables identification or detection of defects. Preferably, the user sets a predetermined threshold. The difference in data values between the pixels of the working image and the pixels of the reference image is compared with the predetermined threshold in one of a multiplicative, additive, or constant manner. If the difference between the data values of the pixels of the working image and the pixels of the reference image is greater than the predetermined threshold, a defect or defects are flagged.
The predetermined threshold may be varied as desired; preferably, it is varied to adjust the stringency of the method 400. In addition, the predetermined threshold is preferably varied as needed according to the type of defect to be detected, the material of the semiconductor wafer 12 under inspection, or the illumination conditions. Furthermore, the predetermined threshold may be varied according to the needs of a customer or, more generally, of semiconductor manufacturers.
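The three comparison manners described above (multiplicative, additive, constant) could be sketched as follows; the parameter names and default values are illustrative assumptions, not thresholds taken from the disclosure.

```python
import numpy as np

def flag_defects(working, ref_mean, ref_std, mode="additive",
                 k=3.0, t_add=25.0, t_const=40.0):
    """Flag pixels whose deviation from the reference exceeds a threshold.

    mode="multiplicative": threshold scales with the per-pixel spread.
    mode="additive":       fixed offset added to the per-pixel spread.
    otherwise (constant):  one fixed threshold everywhere.
    """
    diff = np.abs(np.asarray(working, dtype=np.float64) - ref_mean)
    if mode == "multiplicative":
        threshold = k * ref_std
    elif mode == "additive":
        threshold = ref_std + t_add
    else:
        threshold = t_const
    return diff > threshold  # boolean defect mask
```

Tightening inspection, as the paragraph above describes, amounts to lowering `k`, `t_add`, or `t_const` for the wafer material and illumination in use.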
A preferred system 10 and a preferred method 400 for semiconductor wafer inspection are described above. Those skilled in the art will appreciate from the foregoing description that modifications may be made to the system 10 and the method 400 without departing from the intended scope of the invention. For example, the steps associated with the method 400, and the steps associated with the processes 500, 600, 700, 750, 800 and 900, may be modified without departing from the scope of the claimed invention.
It is an object of the system 10 and the method 400 of the present invention to enable accurate and cost-effective inspection of semiconductor wafers. The ability of the system 10 and the method 400 to automatically inspect semiconductor wafers while the wafers are in motion enhances inspection throughput. This is because no time is wasted decelerating and stopping each semiconductor wafer at an inspection position for image capture, and then accelerating the wafer away from the inspection position after image capture, as is required in conventional semiconductor wafer inspection systems. Known image offsets between multiple image captures facilitate processing of the captured images to inspect for possible defects therein. The offset for a particular set of images of the same semiconductor wafer allows the software to accurately determine coordinates on the semiconductor wafer and hence the position of the semiconductor wafer within each frame. The offset is preferably determined by simultaneously reading encoder values for the X- and Y-axis displacements, and is used to calculate the coordinates of the defect or defects. In addition, the use of two images at each inspection location to fuse two different imaging techniques facilitates more accurate inspection of semiconductor wafers.
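The mapping from encoder readings to defect coordinates can be sketched as a simple offset-plus-scale computation; the units and argument names here are illustrative assumptions, not values from the disclosure.

```python
def defect_wafer_coordinates(px, py, pixel_size_um, enc_x_um, enc_y_um):
    """Map a defect's pixel position (px, py) within a captured frame to
    wafer coordinates, using the X/Y encoder values read at the moment
    of exposure as the frame's offset on the wafer (all values in um)."""
    return (enc_x_um + px * pixel_size_um,
            enc_y_um + py * pixel_size_um)
```

Because the encoders are read simultaneously with the exposure, the same computation remains valid while the wafer is in motion.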
Those skilled in the art will appreciate that the synchronization of the captured images may be varied as desired. More specifically, the synchronization may be adjusted to enhance the ability of the programmable controller to compensate for image offsets between captured images. The system 10 and the method 400 of the present invention facilitate accurate synchronization between the illumination provided and the corresponding exposure of the image capture device used for image capture, thereby minimizing degradation of inspection quality.
The illumination used by the system 10 may span the full visible spectrum of light to enhance image quality. The illumination intensities, and combinations thereof, provided by the system 10 for image capture can be readily selected and varied as desired according to a number of factors including, but not limited to, the type of defect to be detected, the material of the semiconductor wafer to be inspected, and the stringency required. The system 10 and the method 400 provided by the present invention can also perform height measurements of 3D features on a semiconductor wafer, as well as analysis of 3D profile images, while the semiconductor wafer is in motion.
The system 10 of the present invention has optical components (e.g., the optical detection head 14) that do not require frequent spatial reconfiguration to accommodate changes in semiconductor wafer structure or characteristics. In addition, the system 10 uses tube lenses to facilitate reconfiguration and design of the system 10, and more particularly of the optical detection head 14. The use of tube lenses facilitates the introduction of optical components and accessories into the system, more particularly between the objective lens and the tube lens.
The system 10 of the present invention includes vibration isolators 24 (collectively referred to as a stabilizing mechanism) for damping unwanted vibrations transmitted to the system 10. The vibration isolators 24 help to improve the quality of the images captured by the first image capture device 32, the second image capture device 34, the 3D profile camera, and the review image capture device 62, thereby improving the accuracy of defect detection. In addition, the XY displacement table 22 of the system 10 enables precise displacement and alignment of the semiconductor wafer at the relevant inspection positions.
As discussed in the background, existing reference image creation processes require manual selection of a "good" semiconductor wafer, resulting in relative inaccuracy and inconsistency of the generated reference images. The quality of semiconductor wafer inspection is thereby adversely affected. The system 10 and the method 400 of the present invention improve inspection quality by generating reference images without the need for manual (i.e., subjective) selection of a "good" semiconductor wafer. The reference image generation process 900 allows different intensity thresholds to be applied at different locations on the semiconductor wafer to account for non-uniform illumination across the semiconductor wafer. Thus, the method 400 facilitates reducing the detection of spurious or unwanted defects and ultimately improves the quality of semiconductor wafer inspection.
The present invention enables automatic defect detection by comparing, using analytical models, reference images with captured images of semiconductor wafers of unknown quality. The present invention preferably enables automatic defect detection by applying digital analysis to digital images, such as the working images and the reference images.
The present invention enables an automatic review mode (or an offline review mode) that does not interrupt production and improves machine utilization, whereas existing equipment provides only a manual review mode, requiring the operator to decide, for each defect, which of a plurality of different illumination intensities is to be used, and to view each defect individually.
In the foregoing manner, embodiments of the present invention describe a preferred system and a preferred method for inspecting semiconductor wafers and components thereof. While the preferred system and method address at least one of the problems faced by existing semiconductor inspection systems and methods as identified in the background, those skilled in the art will appreciate that the present invention is not limited to the specific forms, details or arrangements of parts described in the foregoing embodiments. It will be apparent to those skilled in the art that numerous modifications and/or variations can be made to the present invention without departing from the spirit or scope of the invention.

Claims (20)

1. A detection system, comprising:
an illuminator for providing an incident thin line illumination to a detection position on the surface of the object to be measured;
a plurality of optical elements including a first reflector assembly for collecting discrete thin line light sources in a plurality of directions reflected off of the surface of the object under test, and further for directing the collected discrete thin line light sources along a plurality of optical paths to provide a plurality of views of the surface of the object under test; and
a single image capture device includes a 3D image camera for receiving discrete thin line light sources directed along multiple optical paths captured as one exposure to provide multiple images of the surface of an object to be measured.
2. The system of claim 1, wherein the thin line light source incident on the surface of the object to be measured is directed by a second reflector assembly comprising at least one reflector.
3. The system of claim 1, wherein the first reflector assembly comprises at least one of a mirror and a prism.
4. The system of claim 1, wherein the first reflector assembly comprises a first reflective structure and a second reflective structure for collecting the discrete thin line light source reflected by the surface of the object to be measured and directing the collected discrete thin line light source along a first optical path and a second optical path, respectively.
5. The system of claim 1, wherein the plurality of optical elements further comprises at least one objective lens for collimating the collected discrete thin line light sources directed along the plurality of optical paths.
6. The system of claim 1, wherein the plurality of optical elements further comprises at least one tube lens for focusing the thin line light sources directed along the plurality of optical paths onto an image plane of the single image capture device.
7. The system of claim 6, wherein the tube lens facilitates introduction of additional optical elements into the system.
8. The system of claim 1, wherein the plurality of images of the surface of the object to be measured provide or correspond to a 3D image of the surface of the object to be measured at the detection position.
9. The system according to claim 1, wherein the single image acquisition device is configured to convert one exposure of a plurality of images of the surface of the object to be measured into an image signal, and the system further comprises a processing unit configured to receive and process the image signal.
10. The system of claim 9, wherein the processing of the image signals facilitates a detection process on the surface of the object to be measured, wherein the detection process includes at least one of 3D height measurement, coplanarity measurement, defect detection and classification.
11. The system of claim 1, wherein the single image capturing device is configured to capture a plurality of images of the surface of the object under test as one exposure during movement of the object under test.
12. A method of detection, comprising:
providing, by an illuminator, incident thin line illumination to a detection position on the surface of the object to be measured;
transmitting discrete thin line light sources in multiple directions reflected by the surface of the object to be measured along multiple optical paths to provide multiple images of the surface of the object to be measured, the transmitting being by a plurality of optical elements including a first reflector assembly; and
receiving, by a single image capture device comprising a 3D image camera, as one exposure, the discrete thin line light sources directed along the plurality of optical paths to provide multiple images of the surface of the object to be measured.
13. The method of claim 12, wherein transmitting the discrete thin line light sources in multiple directions reflected by the surface of the object to be measured along multiple optical paths to provide multiple images of the surface of the object to be measured comprises:
collecting scattered thin line light sources in multiple directions reflected by the surface of the object to be measured through a plurality of optical elements; and
the collected discrete thin line light sources are directed along a plurality of optical paths by a plurality of optical elements to provide a plurality of images of the surface of the object to be measured.
14. The method of claim 12, wherein the first reflector assembly comprises at least one of a mirror and a prism.
15. The method of claim 12, wherein transmitting the discrete thin line light sources in the plurality of directions reflected by the surface of the object to be measured along the plurality of optical paths comprises transmitting the discrete thin line light sources in the plurality of directions reflected by the surface of the object to be measured along a first optical path and a second optical path.
16. The method of claim 15,
the plurality of optical elements includes an objective lens; and
transmitting the discrete thin line light sources in the plurality of directions reflected by the surface of the object to be measured along the first optical path and the second optical path includes collimating, by the objective lens, the discrete thin line light sources directed along the first optical path and the second optical path.
17. The method of claim 15,
the plurality of optical elements comprises a tube lens; and
transmitting the discrete thin line light sources in the plurality of directions reflected by the surface of the object to be measured along the first optical path and the second optical path includes focusing, by the tube lens, the discrete thin line light sources directed along the first optical path and the second optical path.
18. The method of claim 13, further comprising:
one exposure of the plurality of images is converted into an image signal by the single image capture device.
19. The method of claim 18, further comprising:
the image signal is received and processed by a processing unit.
20. The method according to claim 13, wherein the plurality of images of the surface of the object to be measured are captured as one exposure by the single image capture device during movement of the object to be measured.
HK14103020.2A 2009-01-13 2014-03-28 System and method for inspecting a wafer HK1189941B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200900229-6 2009-01-13
SG200901109-9 2009-02-16

Publications (2)

Publication Number Publication Date
HK1189941A HK1189941A (en) 2014-06-20
HK1189941B true HK1189941B (en) 2017-11-03


Similar Documents

Publication Publication Date Title
US10876975B2 (en) System and method for inspecting a wafer
CN101853797B (en) For detecting the system and method for wafer
CN101924053B (en) Systems and methods for inspecting wafers
JP5866704B2 (en) System and method for capturing illumination reflected in multiple directions
HK1189941B (en) System and method for inspecting a wafer
HK1189941A (en) System and method for inspecting a wafer
HK1149367B (en) System and method for inspecting a wafer
HK1146332A (en) System and method for inspecting a wafer
HK1146332B (en) System and method for inspecting a wafer
HK1201982B (en) System and method for inspecting a wafer
HK1149632B (en) System and method for inspecting a wafer
HK40006797B (en) System and method for inspecting a wafer
HK40006797A (en) System and method for inspecting a wafer
SG185301A1 (en) System and method for inspecting a wafer
HK1165905A (en) System and method for capturing illumination reflected in multiple directions