CN1695166A - System and method for acquiring and processing complex images - Google Patents
- Publication number
- CN1695166A (application CN03824976A)
- Authority
- CN
- China
- Prior art keywords
- image
- holographic image
- holographic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
In digital holographic imaging systems, streamed holograms are compared on a pixel-by-pixel basis for defect detection after hologram generation. An automated image matching, registration and comparison method with feedback confidence allows for runtime wafer inspection, scene matching refinement, rotational wafer alignment and the registration and comparison of difference images.
Description
Technical Field
The present invention relates generally to the field of data processing, and more particularly to a system and method for acquiring and processing complex images.
Background
Holograms captured using digital acquisition systems contain information about the material properties and topology of the object being viewed. By capturing sequential holograms of different instances of the same object, dimensional variations between the objects can be measured. Digital processing of the holograms allows direct comparison of the actual image waves of the objects. These image waves contain significantly more information about small details than conventional non-holographic images, since the image phase information is retained in the hologram, whereas it is lost in conventional images. The ultimate goal of systems that compare holographic images is to quantify the differences between objects and to determine whether significant differences exist.
The process of comparing holograms is a difficult task due to the variables involved in the hologram generation process and object processing. In particular, in order to effectively compare corresponding holographic images, two or more holographic images must be acquired and registered or "matched" such that the images are in close correspondence. In addition, after acquiring and registering the holographic images, the images are compared to determine differences between the images. Existing techniques for registering and comparing corresponding images often require significant processing and time. Such time and processing requirements limit the throughput and overall efficiency of digital holographic imaging systems.
Drawings
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIG. 1 is a flow chart illustrating a density-based registration method;
FIG. 2 is a flow chart illustrating a magnitude-based registration method;
FIG. 3 is a flow chart illustrating a registration method for holographic phase images;
FIG. 4 is a flow chart illustrating a registration method for a holographic composite image;
FIG. 5 is a flow chart illustrating a simplified registration system with the confidence value calculations removed;
FIG. 6 is a flow chart illustrating a simplified registration system for holographic composite images;
FIG. 7 is an illustrative diagram of a wafer for determining location refinement;
FIG. 8 is a diagram of a digital holographic imaging system;
FIG. 9 is an image of a hologram taken from a CCD camera;
FIG. 10 is an enlarged portion of FIG. 9 showing edge details;
FIG. 11 is a holographic image transformed using a Fast Fourier Transform (FFT) operation;
FIG. 12 is a holographic image showing sidebands;
FIG. 13 is a quadrant of a holographic FFT centered on the carrier frequency;
FIG. 14 shows the sideband of FIG. 13 after application of a Butterworth low pass filter;
FIG. 15 shows an amplitude image;
FIG. 16 shows a phase image;
FIG. 17 shows a difference image;
FIG. 18 shows a second difference image;
FIG. 19 shows a threshold difference image;
FIG. 20 shows a second threshold difference image;
FIG. 21 shows images of two threshold difference images after a logical AND operation;
FIG. 22 shows a magnitude image with a defect; and
FIG. 23 shows a phase image with defects.
Detailed Description
The preferred embodiments and their advantages are best understood by referring to fig. 1-23 of the drawings, wherein like reference numerals are used for like and corresponding parts.
The present invention relates to digital holographic imaging systems and applications, such as those described in U.S. Patent No. 6,078,392 entitled "Direct-to-Digital Holography and Holovision", U.S. Patent No. 6,525,821 entitled "Improvements to Acquisition and Replay Systems for Direct-to-Digital Holography and Holovision", U.S. Patent Application No. 09/949,266 entitled "System and Method for Correlated Noise Removal in Complex Imaging Systems", and U.S. Patent Application No. 09/949,423 entitled "System and Method for Registering Complex Images", all of which are incorporated herein by reference.
The present invention includes automated image registration and processing techniques that were developed to meet the particular needs of Direct Digital Holography (DDH) defect inspection systems, as described herein. In a DDH system, streaming holograms may be compared pixel-by-pixel for defect detection after hologram generation.
One embodiment of the present invention includes a system and method for automatic image matching and registration using feedback confidence measures. The registration system provides techniques and algorithms for multiple image matching tasks in the DDH system, such as run-time wafer inspection, frame matching refinement, and rotated wafer calibration. In some embodiments, a system for implementing such a registration system may include several main aspects: search strategies, multiple data input capabilities, normalized correlation implemented in the Fourier domain, noise filtering, correlation peak pattern search, confidence definition and calculation, sub-pixel accuracy modeling, and automatic target search mechanisms.
Image registration
The Fourier transform of a signal is a unique representation of the signal, i.e. the information content in the two domains is uniquely determined from each other. Thus, given two images f1(x, y) and f2(x, y) with some degree of overlap, their spatial relationship can also be uniquely represented by the relationship between their Fourier transforms F1(wx, wy) and F2(wx, wy). For example, an affine transformation between two signals in the spatial domain can be uniquely represented in the Fourier domain according to the shift theorem, the scale theorem, and the rotation theorem of the Fourier transform. If there is an affine transformation between f1(x, y) and f2(x, y), their spatial relationship can be expressed as

    (x', y') = (a·x + b·y + x0, c·x + d·y + y0),

wherein the matrix

    A = | a  b |
        | c  d |

represents the rotation, scale and skew differences, and (x0, y0) represents the translation. In a noise-free environment, the two images are related to each other by:

    f1(x, y) = f2(a·x + b·y + x0, c·x + d·y + y0);

and their Fourier transforms are related as follows:

    F1(w) = (1 / |A|) · exp(j·w^T·A^-1·t) · F2(A^-T·w),   w = (wx, wy)^T,   t = (x0, y0)^T,

wherein A^T denotes the transpose of A, and |A| is its determinant. The importance of this derivation is that the equation divides the affine parameters into two groups in Fourier space: the translation and the linear transformation. This tells us that the translation is determined by the Fourier phase difference, while the amplitude is shift-invariant and the amplitudes are related to each other only through the linear component A.
In the simplest case, the translation model, one image is just a shifted version of the other:

    f1(x, y) = f2(x + x0, y + y0).
Their Fourier transforms then have the following relationship:

    F1(wx, wy) = exp(j·(wx·x0 + wy·y0)) · F2(wx, wy),

which, according to the Fourier shift theorem, corresponds to:

    F1(wx, wy)·F2*(wx, wy) / |F1(wx, wy)·F2*(wx, wy)| = exp(j·(wx·x0 + wy·y0)),

wherein F2* denotes the complex conjugate of F2. The left side of the above equation is the cross power spectrum normalized by the maximum power possible for the two signals. It is also called the coherence function: the two signals have the same amplitude spectrum but a linear phase difference corresponding to the spatial shift. The coherence function Γ12(wx, wy) of two images is also defined by their cross-correlation in terms of the power spectral density (PSD) and the cross power spectral density (XPSD):

    Γ12(wx, wy) = xpsd / sqrt(psd1 · psd2),

wherein xpsd is the cross power spectral density of the two images, and psd1 and psd2 are the power spectral densities of f1 and f2, respectively. For a stationary random process, the true PSD is the Fourier transform of the true autocorrelation function, and the Fourier transform of the sample autocorrelation function of an image provides a sample estimate of the PSD. Similarly, the cross power spectral density xpsd can be estimated by multiplying the two-dimensional Fourier transform of f1 with the complex conjugate of the two-dimensional Fourier transform of f2. Thus, the coherence function of the two images can be estimated by

    Γ12(wx, wy) ≈ F1(wx, wy)·F2*(wx, wy) / (|F1(wx, wy)|·|F2(wx, wy)|).
The above coherence function is a function of spatial frequency, and its magnitude indicates the power present in the cross-correlation at that frequency. It is also a frequency-domain representation of the cross-correlation (CC), i.e. the Fourier transform of the cross-correlation, expressed by the correlation theorem of the Fourier transform as follows:

    f1(x, y) * f2(x, y)  ⇔  F2(wx, wy)·F1(-wx, -wy),

wherein * represents spatial correlation. For real signals, the Fourier transform is conjugate symmetric, i.e. F1(-wx, -wy) equals the complex conjugate of F1(wx, wy).

The maximum possible correlation power is estimated by sqrt(psd1 · psd2). The magnitude-squared coherence |Γ12(wx, wy)|² is a real function between 0 and 1 which provides a measure of the correlation between the two images at each frequency. At a given frequency, when the correlation power equals the maximum possible correlation power, the same pattern is observed in both images and the power differs only by a scaling factor; in this case CC = 1. When the two images have different patterns, the power is out of phase in the two power spectral densities, and the cross power spectral density has lower power than the maximum possible. For this reason, the coherence function can be used for image matching, with the coherence value serving as a measure of the correlation between the two images.
According to the theory described above, the matching position of the two images, i.e. the registration point, can be derived by finding the position of the largest CC value in the spatial domain. The inverse Fourier transform of the estimated coherence function is

    CC(x, y) = F^-1{ Γ12(wx, wy) } = δ(x + x0, y + y0),

a Dirac delta function. This is the representation of the CC in the spatial domain, and the location of the delta function is exactly the registration point.

For real signals, and for systems with limited bandwidth (the finite size of the discrete Fourier transform) and periodic extension of the spatial signal, the delta function becomes a unit pulse. Given two signals with some degree of overlap, the signal power in their cross power spectrum is mostly concentrated in the coherent peak at the registration point in the spatial domain, while the noise power is randomly distributed among incoherent peaks. The magnitude of the coherence peak is a direct measure of the coincidence between the two images. More precisely, the power in the coherent peak corresponds to the percentage of overlapping area, while the power in the incoherent peaks corresponds to the percentage of non-overlapping area.
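The delta-peak behaviour described above is easy to verify numerically. The following sketch (an illustration, not part of the patent; NumPy is assumed) shifts an image circularly and recovers the offset from the normalized cross power spectrum:

```python
import numpy as np

def phase_correlate(f1, f2):
    """Estimate the translation between f1 and f2 from the coherence
    function (normalized cross power spectrum); returns (dy, dx) such
    that f1 is f2 circularly shifted by (dy, dx)."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    xps = F1 * np.conj(F2)
    xps /= np.abs(xps) + 1e-12      # normalize: keep only the phase
    cc = np.fft.ifft2(xps)          # ideally a Dirac delta at the offset
    peak = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
    return tuple(int(v) for v in peak)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 12), axis=(0, 1))
print(phase_correlate(shifted, img))    # -> (5, 12)
```

With noise-free data the inverse transform is a single sharp pulse; the sections that follow address what happens when noise distorts this correlation plane.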
Noise, feature space selection and filter effects
Theoretically, the coherence of the feature of interest should be 1 at all frequencies in the frequency domain, and should be a delta pulse at the registration point in the spatial domain. In practice, however, noise distorts the correlation plane. Such noise includes: time-varying noise (A/C noise), such as back-reflection noise, carrier drift, and process-variation-induced changes; fixed-pattern noise (D/C noise), such as illumination non-uniformity, defective pixels, camera scratches, dust in the optical path, focus differences, and stage tilt; and random noise.
In the presence of these noise sources, an image can be regarded as an overlap of three images in an additive and multiplicative manner:

    fn(x, y) = Nm(x, y)·f(x, y) + Na(x, y),

wherein Nm(x, y) is a multiplicative noise source, Na(x, y) is an additive noise source, and fn(x, y) is the signal distorted by noise. In the frequency domain this becomes

    Fn(wx, wy) = Fm(wx, wy) * F(wx, wy) + Fa(wx, wy),

wherein Fm(wx, wy) is the Fourier transform of the multiplicative noise source, Fa(wx, wy) is the Fourier transform of the additive noise source, Fn(wx, wy) is the Fourier transform of the signal distorted by noise, and * denotes convolution (multiplication in the spatial domain corresponds to convolution in the frequency domain).

The observed signal is fn(x, y), with Fourier transform Fn(wx, wy). The purpose of the noise processing is to make the coherent peak converge on the signal alone. There are two main ways to achieve this goal: (1) reconstruct the original signal f(x, y), or its Fourier transform F(wx, wy), from the observed signal; or (2) reduce the noise as much as possible in order to increase the likelihood of convergence on the signal, even if the signal is partially cancelled or attenuated.
The first method of noise cancellation requires modeling each noise source, usually with a different model per source. The second method focuses on noise reduction by any means, even if it also cancels or attenuates the signal, which provides more operational margin. Therefore, the second technique is mainly used for the image matching task. Furthermore, it is beneficial to consider the problem in the spatial domain as well as in the frequency domain. The following observations have been considered in the design of the noise-resistant registration system:
first, all frequencies generally work the same, and therefore, narrowband noise is easier to handle in the frequency domain.
Second, image data obtained under different illumination levels typically exhibit slowly varying differences. Luminance non-uniformities are typically manifested as low frequency variations across the image.
In addition, carrier drift in the frequency domain (i.e. phase tilt in the spatial domain) is low-frequency; stage tilt, slow changes in stage height, and process variation are mostly low-frequency noise; A/C noise is typically low-frequency; defocused dust is also at the lower end of the frequency domain; and back-reflection noise is mostly low-frequency.
Random noise is typically at relatively high frequencies. Both low-frequency and high-frequency noise are detrimental to any similarity measure and to coherence peak convergence.
The high-frequency content is independent of contrast inversion. Frequency-based techniques are relatively scene-independent and usable with multiple sensors because they are insensitive to variations in spectral energy. Using only the frequency-phase information for correlation corresponds to whitening each image; whitening is invariant to linear variations in brightness and makes the correlation measure insensitive to them.
Cross-correlation is optimal in the presence of white noise. Thus, a generalized weighting function may be applied to the phase difference before taking the inverse Fourier transform, and the weighting function may be selected according to the type of noise immunity desired. This yields a family of correlation techniques, including phase correlation and conventional cross-correlation.
To this end, the feature space may employ prominent edges, contours of intrinsic structures, salient features, and the like. Edges characterize object boundaries and are therefore useful for image matching and registration. Several alternative filters for extracting these features follow.
A Butterworth low pass filter is used to construct the band pass filter (BPF) as the difference of two low pass responses:

    weight(r) = 1 / (1 + (r / cutoff2)^(2·order)) − 1 / (1 + (r / cutoff1)^(2·order)),

wherein order is the Butterworth order; r is the distance to DC; cutoff1 and cutoff2 are the cut-off frequencies for the low and high ends, respectively; and weight is the filter coefficient at that point. The BPF can be used to select any narrow band of frequencies.
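Such a filter can be sketched as follows. Since the patent's own formula is not reproduced in this text, the sketch assumes the common difference-of-two-Butterworth-low-pass construction, with the weight array defined around the DC point at the center (for use with fftshifted spectra); the function name and parameters are illustrative:

```python
import numpy as np

def butterworth_bpf(shape, cutoff1, cutoff2, order):
    """Band-pass weights as the difference of two Butterworth low-pass
    responses LPF(c) = 1 / (1 + (r / c)^(2 * order)), where r is the
    distance to DC (assumed at the array center) and cutoff1 < cutoff2."""
    rows, cols = shape
    y = np.arange(rows) - rows // 2
    x = np.arange(cols) - cols // 2
    r = np.hypot(*np.meshgrid(y, x, indexing="ij"))
    r[rows // 2, cols // 2] = 1e-9      # avoid 0/0 at DC (weight there ~ 0)
    lpf = lambda c: 1.0 / (1.0 + (r / c) ** (2 * order))
    return lpf(cutoff2) - lpf(cutoff1)

w = butterworth_bpf((128, 128), cutoff1=4, cutoff2=30, order=2)
# Mid-band frequencies pass (weight near 1); DC and the highest
# frequencies are suppressed (weight near 0).
```

Multiplying an fftshifted spectrum by this weight array selects the narrow band between the two cut-offs.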
Edge enhancement filter in spatial domain
Edge enhancement filters are used to capture the information in edges, contours, and salient features. Edge points can be regarded as pixel positions where the gray level changes abruptly. For a continuous image f(x, y), the derivative along the edge direction is assumed to be a local maximum. Thus, one edge detection technique is to measure the gradient of f along a direction r at angle θ:

    ∂f/∂r = (∂f/∂x)·cos θ + (∂f/∂y)·sin θ.

The maximum of ∂f/∂r is obtained where ∂(∂f/∂r)/∂θ = 0. This yields:

    θmax = arctan( (∂f/∂y) / (∂f/∂x) )   and   (∂f/∂r)max = sqrt( (∂f/∂x)² + (∂f/∂y)² ).

These can be rewritten in digital form as

    g(x, y) = sqrt( gx²(x, y) + gy²(x, y) )   and   θ(x, y) = arctan( gy(x, y) / gx(x, y) ),

wherein gx(x, y) and gy(x, y) are the gradients in the X and Y directions, obtained by convolving the image with gradient operators. To save computation, the amplitude gradient is often employed:

    g(x, y) = |gx(x, y)| + |gy(x, y)|.
Several gradient operators are in common use.
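The patent's table of operators is not reproduced in this text; as an illustration, the Sobel kernels (one widely used pair of gradient operators, assumed here) can be convolved with the image to obtain gx and gy and the amplitude gradient g = |gx| + |gy|:

```python
import numpy as np

# Sobel kernels, one common pair of gradient operators (the patent's
# own operator table is not reproduced in this text).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution (flip kernel, then slide)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def gradient_magnitude(img):
    """g = |gx| + |gy|, the cheaper amplitude gradient from the text."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)

# A vertical step edge: the gradient is large at the edge, zero in
# the flat regions on either side.
step = np.zeros((8, 8)); step[:, 4:] = 1.0
g = gradient_magnitude(step)
```

Thresholding g, as described below, then keeps only the strong edge points.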
The first-derivative operators are most effective when the gray-level transition is fairly abrupt, such as a step function. As the transition region becomes wider, it is more advantageous to apply the second derivative. In addition, first-derivative operators require multiple filter passes, one for each principal direction. This directional dependence can be eliminated by employing a second-derivative operator. In some embodiments, the direction-independent Laplacian filter is preferred, defined as

    ∇²f = ∂²f/∂x² + ∂²f/∂y².

A typical filter H has the form

    H = | -1  -1  -1 |
        | -1   C  -1 |
        | -1  -1  -1 |,

wherein C is a parameter that controls the information content. The value C = 8 creates an edge-only filter, in which sharp edges of the original image appear as pairs of peaks in the filtered image. Values of C greater than 8 combine the edges with the image itself in different proportions, creating an edge-enhanced image.
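The role of C can be sketched in a few lines, assuming the common 3 × 3 form with −1 off-center weights (an assumption on the filter's exact shape): the kernel weights sum to C − 8, so C = 8 gives zero response on flat regions (edges only), while larger C mixes the image back in.

```python
import numpy as np

def edge_kernel(C):
    """3x3 Laplacian-style filter H with -1 off-center weights
    (an assumed common form). C = 8 -> edge-only; C > 8 -> edges
    combined with the image (edge enhancement)."""
    H = -np.ones((3, 3))
    H[1, 1] = C
    return H

# Weights sum to C - 8: zero on flat regions when C = 8,
# identity-plus-edges behaviour when C = 9.
```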
In some cases, it is also desirable to increase the edge thickness in order to increase the correlation peak height. However, this process also widens the correlation peak, thus reducing registration accuracy. It may be useful for low resolution matching in multi-resolution schemes.
In general, the purposes of an edge enhancement filter in the spatial domain are to: (1) control the information content entering the registration process; (2) transform the feature space; (3) capture the edge information of salient features; (4) sharpen the correlation peak of the signal; (5) overcome the problem of density inversion; and (6) produce boundaries wider than those of edge detection or first-derivative operators.
Determining thresholds in the spatial domain
The edge-enhanced image still typically contains noise. However, the noise is much weaker in edge strength than the intrinsic structure, so the edge enhancement is followed by thresholding to eliminate points with small edge strength. In some embodiments, thresholding the filtered image can eliminate most of the A/C noise, D/C noise, and random noise.
The threshold can be selected automatically by calculating the standard deviation σ of the filtered image and using it to determine the level at which the noise is best removed while enough signal is retained for correlation. The threshold is defined as

    threshold = numSigma · σ,

wherein numSigma is a parameter that controls how much information enters the registration system. This parameter is preferably set empirically.
After the threshold is determined, points below the threshold are preferably disabled by clearing them to zero, while the remaining points with strong edge strength pass through the filter into the subsequent correlation operations. Notably, the idea of using edge enhancement to improve the robustness and reliability of area-based registration comes from feature-based techniques. Unlike feature-based techniques, however, the images are not reduced to binary images: by retaining the edge-strength values of the strong edge points, the filtered image remains grayscale data. This has the advantage that the edge-strength values of the different edge points carry position information about the edges, and different position information weights the correlation process differently. The technique therefore preserves registration accuracy.
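The thresholding rule above can be sketched as follows (numSigma and the zero-clearing behaviour follow the text; the implementation details are otherwise assumed):

```python
import numpy as np

def sigma_threshold(edge_img, numSigma):
    """Zero out weak edge points while keeping the grayscale edge
    strengths of the strong points: threshold = numSigma * sigma,
    where sigma is the standard deviation of the filtered image."""
    threshold = numSigma * np.std(edge_img)
    out = edge_img.copy()
    out[np.abs(out) < threshold] = 0.0   # disable weak points
    return out
```

A usage example: weak responses below numSigma·σ are cleared, strong edges pass through with their grayscale values intact.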
Confidence of image matching
The present discussion concerns the correlation surface and its coherence peaks. As used here, a feature is an explicit, i.e. dominant, feature in a picture. There are two kinds of peaks on the correlation surface: coherent peaks and incoherent peaks. All peaks corresponding to features are coherent; all other peaks are incoherent, i.e. correspond to noise.
Some examples of coherence peaks are as follows:
A periodic signal with periods Tx and Ty in X and Y produces multiple periodic coherence peaks with the same periods. These peaks have approximately equal intensity; the highest peak is most likely at the center, and the peaks fade in intensity towards the edges.
Any locally repeated signal also produces multiple coherent peaks. The highest coherence peak is most likely at the registration point, and all other secondary peaks correspond to the repetition of local features.
In many cases, the correlation surface exhibits the behaviour of a sinc function, a response characteristic of the finite size of the discrete Fourier transform in a band-limited system. The main lobe has the highest peak, on which the algorithm should converge, but multiple side lobes with peaks also exist.
Incoherent peaks occur in the presence of noise. Random noise power is randomly distributed among the incoherent peaks. Both A/C and D/C noise shift, distort and spread the coherent peaks. Noise can also split, fork, and blur the coherent peaks.
The magnitude of the coherence peak is a direct measure of the coincidence between the two images. More precisely, the power in the coherent peaks corresponds to the percentage of the main features in the overlapping region, while the power in the incoherent peaks corresponds to the percentage of the noise and the non-overlapping region.
Thus, the following two metrics were developed and are used together to assess the quality of an image match. The first is the height of the first coherence peak. The second is the difference in intensity between the first coherent peak and the second peak (coherent or incoherent), i.e. the correlation coefficient.
Another advantage of using these measures is that they are calculated from the correlation surface that is already available in real time when the registration differences are calculated. Efficiency and real-time speed are critical in most image matching applications, and a real-time confidence feedback signal is critical for a successful automatic target search system, such as wafer rotation calibration, where an automatic multi-FOV (field of view) search is required.
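The two metrics can be sketched directly from an amplitude correlation surface; the exclusion neighborhood used to keep the main lobe from being counted as its own second peak is an assumed detail:

```python
import numpy as np

def match_confidence(corr, exclude=3):
    """Confidence metrics from an (amplitude) correlation surface:
    (height of the first peak, first-minus-second peak difference).
    The second peak is searched outside a small neighborhood of the
    first so the main lobe is not counted twice."""
    c = np.abs(corr)
    p1 = np.unravel_index(np.argmax(c), c.shape)
    peak1 = c[p1]
    masked = c.copy()
    y0 = max(p1[0] - exclude, 0); y1 = p1[0] + exclude + 1
    x0 = max(p1[1] - exclude, 0); x1 = p1[1] + exclude + 1
    masked[y0:y1, x0:x1] = 0.0
    peak2 = masked.max()
    return peak1, peak1 - peak2

# A strong, unambiguous match: high first peak, large separation.
corr = np.zeros((20, 20)); corr[5, 5] = 1.0; corr[15, 15] = 0.5
```

Both values come directly from the surface the registration already computes, so the confidence costs essentially nothing extra at run time.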
Search space and sub-pixel modeling
The task of the search strategy is often trivial in this implementation of registration, since the entire correlation surface is already available for searching after the inverse Fourier transform. The registration point is the maximum peak of the amplitude correlation surface, so one scan of the entire search space for the peak is usually sufficient. This yields the detected integer registration.
To find the sub-pixel offset, sub-pixel modeling proceeds as follows. A two-dimensional parabolic surface can be defined as
    Z = a·x² + b·y² + c·x·y + d·x + e·y + f.

This second-order polynomial is fitted to the 3 × 3 points of the correlation surface around the integer peak, which is taken as (0, 0):

    Z_i = a·x_i² + b·y_i² + c·x_i·y_i + d·x_i + e·y_i + f,   i = 1, ..., 9,

wherein (x_i, y_i) are the coordinates of these 9 points, which reduce to x, y ∈ {-1, 0, 1}. The least-squares solution of the above equations, based on a matrix pseudo-inverse operation, provides an estimate of the coefficients (a, b, c, d, e, f). The sub-pixel position of the registration within this 3 × 3 block is at the peak of the parabola, determined by taking the partial derivatives of the parabolic equation with respect to x and y and setting them to zero:

    ∂Z/∂x = 2·a·x + c·y + d = 0,   ∂Z/∂y = c·x + 2·b·y + e = 0,

which gives

    x_s = (c·e − 2·b·d) / (4·a·b − c²),   y_s = (c·d − 2·a·e) / (4·a·b − c²).

The coordinates of the integer peak plus the sub-pixel offset (x_s, y_s) determine the final registration offset for the entire image.
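The 3 × 3 parabolic fit and the closed-form peak can be sketched as follows (NumPy's `lstsq` plays the role of the pseudo-inverse; the function name is illustrative):

```python
import numpy as np

def subpixel_offset(block):
    """Least-squares parabolic fit over a 3x3 correlation block.

    Fits Z = a x^2 + b y^2 + c x y + d x + e y + f with x, y in
    {-1, 0, 1} (integer peak at the center), then returns the peak of
    the parabola: x = (c e - 2 b d) / (4 a b - c^2),
                  y = (c d - 2 a e) / (4 a b - c^2)."""
    y, x = np.mgrid[-1:2, -1:2]
    x = x.ravel(); y = y.ravel()
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones(9)])
    a, b, c, d, e, f = np.linalg.lstsq(A, block.ravel(), rcond=None)[0]
    den = 4 * a * b - c * c
    return (c * e - 2 * b * d) / den, (c * d - 2 * a * e) / den
```

For an exactly parabolic block the fit recovers the true sub-pixel peak; in practice it interpolates the sampled correlation surface around the detected integer peak.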
FIG. 1 shows one implementation of a density-based registration method. The method begins by providing a test density image (10) (which may also be referred to as a first image) and a reference density image (12). The two images are edge-enhanced (14 and 16) respectively, and noise is then removed from the edge-enhanced images using a thresholding operation (18 and 20). The images are then transformed using a Fourier transform (22 and 24).
The two transformed images are then used for coherence function calculation (26) and an inverse fourier transform (28) is applied thereto. Subsequently, an amplitude operation is performed (30) within the selected search range. Confidence calculations are then performed (32) and the matching of the images may be accepted or rejected (34) based on confidence values derived therefrom. If the confidence value is within the acceptable range, the registration process proceeds to integer translation and sub-pixel modeling (36), and matching of the images is accepted (38). If the confidence value is not within the acceptable range, a new search is initiated (40).
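A condensed sketch of this density-based flow, under simplifying assumptions (`np.gradient` as the edge enhancer and a fixed illustrative acceptance threshold `t_peak`; neither is specified by the patent):

```python
import numpy as np

def register_density(test_img, ref_img, num_sigma=1.0, t_peak=0.1):
    """Sketch of the FIG. 1 flow: edge enhancement, thresholding,
    coherence function, inverse FFT, peak search, and a simple
    accept/reject decision (thresholds here are illustrative)."""
    def edges(img):
        gy, gx = np.gradient(img)                # simple edge enhancement
        e = np.abs(gx) + np.abs(gy)
        e[e < num_sigma * e.std()] = 0.0         # noise removal by threshold
        return e
    F1, F2 = np.fft.fft2(edges(test_img)), np.fft.fft2(edges(ref_img))
    xps = F1 * np.conj(F2)
    coh = xps / (np.abs(xps) + 1e-12)            # coherence function
    cc = np.abs(np.fft.ifft2(coh))               # amplitude of correlation
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    confidence = cc[peak]                        # height of the first peak
    accepted = confidence >= t_peak
    return peak, confidence, accepted
```

On acceptance, integer translation and sub-pixel modeling would follow; on rejection, a new search would be initiated.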
Figure 2 shows one implementation of an amplitude-based registration method. The method begins by providing a test hologram (50) and a reference hologram (52). The two holograms are transformed separately using fourier transforms (54 and 56) and sideband extraction is applied to each image (58 and 60). The two images are then filtered separately using band pass filters (62 and 64). The resulting images are then transformed separately using an inverse fourier transform (66 and 68), and amplitude operations (70 and 72) are performed on each resulting image. The results are thresholded (74 and 76) prior to being transformed (78 and 80) using a Fourier transform operation.
The two transformed images are then used for coherence function calculation (82) and an inverse fourier transform (84) is applied thereto. Subsequently, an amplitude operation is performed within the selected search range (86). Confidence calculations are then performed (88), and the matching of the images may be accepted or rejected (90) based on confidence values derived therefrom. If the confidence value is within the acceptable range, the registration process proceeds to integer translation and sub-pixel modeling (92), and the matching of the images is accepted (94). If the confidence value is not within the acceptable range, a new search is initiated (96).
Fig. 3 shows an implementation of the phase image based registration method. The method starts with providing a test hologram (100) and a reference hologram (102). The two holograms are transformed separately using fourier transforms (104 and 106) and sideband extraction is applied to each image (108 and 110). The two images are then filtered separately using low pass filters (112 and 114). The resulting images are then transformed separately using an inverse fourier transform (116 and 118), and phase operations (120 and 122) are performed on each resulting image. Phase-known enhancement (124 and 126) is then performed on the resulting image. The results are thresholded (128 and 130) before being transformed (132 and 134) using fourier transform operations.
The two transformed images are then used for coherence function calculation (136) and an inverse fourier transform (138) is applied thereto. Subsequently, an amplitude operation is performed within the selected search range (140). Confidence calculations are then performed (142), and the matching of the images may be accepted or rejected (144) based on confidence values derived therefrom. If the confidence value is within the acceptable range, the registration process proceeds to integer translation and sub-pixel modeling (146), and the matching of the images is accepted (148). If the confidence value is not within the acceptable range, a new search is initiated (150).
Figure 4 shows an implementation of a complex based registration method. The method begins by providing a test hologram (152) and a reference hologram (154). The two holograms are transformed separately using fourier transforms (156 and 158), and sideband extraction is applied to each image (160 and 162). The resulting image is then filtered (164 and 166) using a band pass filter.
The two filtered images are then used for coherence function calculation (168) and an inverse fourier transform (170) is applied thereto. Subsequently, an amplitude operation is performed (172) within the selected search range. Confidence calculations are then performed (174), and the matching of the images may be accepted or rejected (176) based on confidence values derived therefrom. If the confidence value is within the acceptable range, the registration process proceeds to integer translation and sub-pixel modeling (178), and the matching of the images is accepted (180). If the confidence value is not within the acceptable range, a new search is initiated (182).
In some embodiments, the registration process may be simplified by eliminating the confidence evaluation. The simplification generally comprises: (1) using the image conjugate product instead of the coherence function calculation, i.e., not normalizing the cross power spectral density by the maximum possible power of the two images; and (2) eliminating the confidence calculation and the acceptance/rejection test. The remainder of the process is essentially the same as its original form. For example, a simplified version of the complex-number-based registration system is shown in FIG. 5.
Figure 5 shows a simplified implementation of a complex-based registration method. The method starts with providing a test hologram (200) and a reference hologram (202). The two holograms are transformed separately using fourier transforms (204 and 206), and sideband extraction is applied to each image (208 and 210). The resulting image is then filtered (212 and 214) using a band pass filter.
The two filtered images are then used to determine an image conjugate product (216) and an inverse fourier transform is applied (218) thereto. Subsequently, an amplitude operation is performed within the selected search range (220). The registration process enters integer translation and sub-pixel modeling (222) and matching of images is accepted and reported (224).
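A sketch of this simplified flow, assuming the filtered Fourier-domain images are already at hand (the function name is illustrative):

```python
import numpy as np

def register_simplified(F1, F2):
    """Simplified registration of FIG. 5: plain image conjugate product
    instead of the normalized coherence function, and no confidence
    test. F1, F2 are the (already filtered) Fourier-domain images,
    e.g. the band-pass-filtered sidebands of the two holograms."""
    cc = np.abs(np.fft.ifft2(F1 * np.conj(F2)))   # plain cross-correlation
    return np.unravel_index(np.argmax(cc), cc.shape)

# Example: frequency-domain data for two translated copies of a scene.
rng = np.random.default_rng(3)
scene = rng.random((32, 32))
F_ref = np.fft.fft2(scene)
F_test = np.fft.fft2(np.roll(scene, (2, 6), axis=(0, 1)))
# register_simplified(F_test, F_ref) recovers the (2, 6) offset.
```

Skipping the normalization and confidence stages trades robustness for speed, which suits throughput-critical paths such as run-time inspection.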
The choice of a technique or combination of techniques for a particular application is a matter of system engineering choice and depends on many factors. Important factors are, among others, the required functionality, the overall optimization of the system, the available data streams, the convenience and feasibility of the filtering implementation, the noise filtering results and robustness, the overall system speed and cost, and the system reliability.
The following examples are provided to illustrate these principles.
Runtime defect detection
In run-time wafer inspection applications, system speed and accuracy are paramount, and the complex frequency-domain data streams are already available. Registration may therefore be simplified as shown in FIG. 6.
Fig. 6 shows a simplified implementation of a method for registering a holographic composite image when sidebands are available in the data stream. The method begins by providing a test sideband (250) and a reference sideband (252). Bandpass filters (254 and 256) are used for each sideband.
The two filtered images are then used to determine the image conjugate product (258), and an inverse Fourier transform (260) is applied thereto. Subsequently, an amplitude operation is performed within the selected search range (262). The registration process proceeds to integer translation and sub-pixel modeling (264), and the matching of the images is accepted and reported (266).
Wafer center detection (or die-zero or other point-location refinement)
Fig. 7 shows how the registration process is applied to align the wafer coordinate system with the stage coordinate system. The wafer 300 is placed on the chuck and an image is taken at a coordinate location that may match the stored reference pattern. The procedure provided below is performed on the image in order to determine the offset (Δ x 302, Δ y 304) between the actual position of the reference pattern and the assumed position of the pattern. The second step is to repeat the registration procedure in order to determine and correct the rotation angle θ 306 between the die grid axis and the stage axis.
In particular embodiments of the present application, the full form of the algorithm must be employed.
Registration(translation, confidence, image1, image2)
This procedure, developed for wafer center detection and rotation angle detection, registers the two images (complex frequency, complex space, amplitude, phase, or density) by calculating their translational difference, and returns a real-time confidence measure indicating whether the match was successful.
Given an image slice as a template (e.g., 256 × 256), the following steps are performed:
Step 1. Take the FOV 308, image 1, at the current position of the template (assuming it is an image segment with features near the actual wafer center).
Step 2. Zero-pad the template to the size of image 1.
Step 3. Call Registration(translation, confidence, image1, padded template).
Step 4. If (confidence.maxCorr1st >= T1 and confidence.measure >= T2),
stop. Output the translation and calculate the wafer center.
Step 5. Extract a 256 × 256 image slice from image 1 based on the translation position detected in step 4.
Step 6. Repeat step 3 (performing a 256 × 256 registration) using the template and the extracted image slice.
Step 7. Repeat step 4.
Step 8. Perform a spiral search 311 by extracting an adjacent FOV with P% overlap, and go to step 3.
Step 9. Repeat steps 4, 5, and 6 until the condition in step 4 is met or it is signaled that the search is outside the predetermined search range.
Step 10. If no match is found within the search range, output a failure signal and handle the condition.
The above steps utilize four parameters: T1, T2, numSigma, and P%. T1 is the minimum cross-correlation coefficient; T2 is the minimum confidence value; numSigma is a noise threshold that controls the information content entering the registration system after edge enhancement; and P% is the overlap used when extracting adjacent FOVs. In one embodiment, in the case of zero-padding to the template, the overlap should be > 50% of the 256 pixels, since it only needs to cover a portion of the original template. From experiments, the following settings are typical for a successful search:
T1 = 0.4, T2 = 0.1, numSigma = 3.5.
other parameters are similar to those in real-time registration.
In some embodiments, the filling scheme may also be replaced with a tilting scheme.
Rotation angle detection
To perform rotation angle detection, given the wafer center, the following steps are performed:
Step 1. Take the FOV 310, image 1, along the wafer center line on the left (this may also be a one-step calibrated edge die).
Step 2. Take the FOV 312, image 2, along the wafer center line on the right, symmetric to the left FOV with respect to the wafer center.
Step 3. Call Registration(translation, confidence, image1, image2).
Step 4. If (confidence.maxCorr1st >= T1 and confidence.measure >= T2),
stop. Output the translation and calculate the rotation angle.
Step 5. Perform a spiral search by taking another FOV with P% overlap above or below, and go to step 3.
Step 6. Repeat steps 4 and 5 until the condition in step 4 is met or it is signaled that the search is outside the predetermined search range.
Step 7. If no match is found within the search range, output a failure signal and handle the condition.
Data should be taken along the wafer centerline detected above or along parallel lines near the center where features are guaranteed to be present, such as where template images are taken.
The parameters are the same as in wafer center detection. Note that a P% overlap in one direction (Y, in the case of a spiral search) guarantees a (50% + P%/2) overlap region between a pair of FOVs in the worst case of the grid (the grid being where data is actually extracted, relative to the actual position corresponding to its matching FOV).
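Once the registration procedure returns the translation between the two symmetric FOVs, the rotation angle itself is simple trigonometry. The following is a hypothetical small-angle sketch (function name, argument conventions, and units are assumptions, not taken from the patent):

```python
import math

def rotation_angle(left_xy, right_xy, dy_measured):
    """Estimate the wafer rotation angle theta from the vertical offset
    (dy_measured) reported when registering the right-hand FOV image
    against the left-hand one, given the two FOV stage positions."""
    baseline = math.hypot(right_xy[0] - left_xy[0], right_xy[1] - left_xy[1])
    # For small angles, the relative vertical offset over the baseline
    # between the symmetric FOVs gives the rotation about the wafer center.
    return math.atan2(dy_measured, baseline)
```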
The techniques described above provide a number of advantageous features. Noise, including fixed patterns (DC noise), time-varying patterns (AC noise), and random noise, can be eliminated 100% by the novel filter implemented in the spatial domain. The filter takes different forms for the different data used. Generally, the edges of high-frequency spatial features are enhanced first. Only strong features can pass through the filter, so noise does not enter the process. The gray-scale edge intensity data, rather than the raw density/phase, is then used in the subsequent correlation process.
The correlation process is implemented in the fourier domain for speed and efficiency. In most embodiments, a Fast Fourier Transform (FFT) is used to implement the Fourier transform operation.
The use of confidence values for each match is advantageous. This confidence value is defined using the peak pattern of the two-dimensional correlation surface. Together with the correlation coefficient, this confidence value provides a reliable measure of the quality of the image match.
It would also be advantageous to provide a mechanism for fully automatic searching (in combination with mechanical translation of the target object) from a desired number of fields of view (FOVs) until the correct target is matched. The quality of each move is measured by the confidence defined during the registration calculation, and the confidence value can also be used to accept a match or reject it and start a new search.
Automatic wafer rotation calibration fully automates the correction of any wafer rotation roll-off. This is important for wafer placement in a wafer inspection system. It reduces operator setup time and achieves the required accuracy of wafer navigation. The registration system provides a robust, reliable, and efficient subsystem for the inspection system for wafer calibration.
The method improves flexibility in accepting various input data. In the case of DDH wafer rotation calibration, the method can accept five main data formats and calculate the registration parameters directly from them: a. complex frequency data; b. complex spatial data; c. amplitude data extracted from the hologram; d. phase data extracted from the hologram; e. density-only data. This flexibility provides the possibility of developing a system that is more reliable and efficient as a whole.
Comparing holographic images
The invention also includes systems and methods for comparing holographic images in order to identify changes in objects or differences between objects. As shown in fig. 8, an imaging system, generally designated 340, includes major components: 1) a mechanical positioning system 380 having computer controls linked to the system control computer 350; 2) an optical system 370 for creating a hologram, including an illumination source; 3) a data acquisition and processing computer system 360; 4) a processing algorithm operable to run in the processing system 360; and may also include 5) a system for monitoring the subsystems (not explicitly shown).
The imaging system 340 works by positioning one instance of an object in the field of view (FOV) of the optical system in a total of six degrees of freedom (x, y, θ, z, tip, tilt), acquiring a digital hologram with the acquisition system 360, and performing first-stage hologram processing. The intermediate representation of the resulting image wave may be stored in a temporary buffer.
The positioning system 380 is then instructed to move to a new position with a new object in the FOV, and the initial acquisition sequence is repeated. The coordinates for the new location of the positioning system are derived from the virtual mapping and inspection plan. This step and the acquisition sequence are repeated until a second instance of the first object is reached.
A distance measuring device is preferably used in conjunction with the positioning system 380 to generate a set of discrete samples representing the distance between the object and the measuring device. A mathematical algorithm is used to generate a mapping table with a look-up function that determines target values for three degrees of freedom (z, tip, tilt) given three input coordinates (x, y, θ).
In this regard, the optical system 370 acquires a hologram of the second example of the object, which is processed to produce an intermediate representation of the image wave. The corresponding representation of the first example is retrieved from the temporary buffer and the two representations are aligned and filtered. By performing unique processing on the representation of the object in the frequency domain, a number of benefits can be achieved in this regard. A comparison between the two examples (described with reference to the difference image) may be made and the results stored in a temporary buffer. This process may be repeated for other FOVs containing the second instance of the object.
The positioning system 380 arrives at the third instance of the object and the previous two steps (intermediate representation and comparison with the second instance) are completed. The result of the comparison between the first and second examples is retrieved from the temporary buffer and noise suppression and source logic algorithms are preferably applied to the retrieved and current comparisons.
The results can then be analyzed and summary statistics generated. These results are communicated to the supervisory controller. This cycle repeats as new instances of the object are acquired.
Generating differences between composite images
The present invention contemplates generating a change in the difference between two composite images.
An amplitude difference may be used. First, the two composite images are preferably converted to a magnitude representation, and the magnitude of the difference between the resulting magnitudes is calculated (pixel-by-pixel). In one embodiment, this represents the difference in reflectivity between the two surfaces being imaged.
A phase difference may be used. First, the two composite images are preferably converted to a phase representation, and the effective phase difference (pixel-by-pixel) between the resulting phase values is calculated. This can be performed directly as described, or by computing the phase of the pixel-by-pixel ratios of the two images after they have been amplitude normalized. In one embodiment, this represents the difference in height between the two surfaces being imaged.
Vector differences may also be employed. First, the two composite images are directly subtracted in the composite domain, and then the magnitude of the resulting complex difference is calculated. This difference advantageously combines the amplitude difference with the phase difference. For example, where the phase difference is mostly noise, the amplitude tends to be small, thus mitigating the effect of the phase noise on the resulting vector difference.
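The three difference types above can be sketched with NumPy (a sketch of the definitions, not the patented code; the epsilon guard in the phase ratio is my addition):

```python
import numpy as np

def amplitude_difference(im1, im2):
    # Pixel-wise difference of the magnitude representations.
    return np.abs(np.abs(im1) - np.abs(im2))

def phase_difference(im1, im2):
    # Effective (wrapped) phase difference via the pixel-wise ratio of
    # amplitude-normalized images; eps guards against zero amplitude.
    eps = 1e-12
    ratio = (im1 / (np.abs(im1) + eps)) * np.conj(im2 / (np.abs(im2) + eps))
    return np.angle(ratio)

def vector_difference(im1, im2):
    # Magnitude of the direct complex subtraction; combines amplitude and
    # phase differences, weighting phase noise by local amplitude.
    return np.abs(im1 - im2)
```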
Calibrating and comparing two successive difference images
The present invention also contemplates calibration and comparison of two successive difference images in order to determine which differences are common to both. The amount by which one difference image must be shifted to match the other is generally known from the earlier steps performed to calculate the difference images: image A is shifted by an amount a to match image B and produce difference image AB, while image B is shifted by an amount b to match image C and produce difference image BC. Thus, the appropriate shift to match image BC to image AB is −b. Three alternative methods of determining which differences the two difference images have in common are described below.
In one embodiment, the difference image is thresholded and then one of the two threshold-limited images is shifted by an appropriate amount, rounded to the nearest whole pixel. The common difference is then represented by the logical and (or multiplication) of the shifted and unshifted threshold-limited difference images.
In another embodiment, the difference image is first shifted by an appropriate (sub-pixel) amount before the threshold is determined, and then the threshold is determined for the image. The common difference is then calculated by the logical and (or multiplication) described above.
In another embodiment, one of the difference images is shifted by an appropriate (sub-pixel) amount and combined with the second image before determining the threshold. The combination of the two images may be any of several mathematical functions including a pixel-by-pixel arithmetic average and a pixel-by-pixel geometric average. After combining the two difference images, the results are thresholded.
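The first variant (threshold, whole-pixel shift, then logical AND) might be sketched as follows; the threshold rule (a multiple of the standard deviation, as used later in this description) and the parameter names are assumptions for illustration:

```python
import numpy as np

def common_defects(diff_ab, diff_bc, shift_b, k_sigma=5.0):
    """Threshold both difference images, shift BC back into AB's frame by
    -b (rounded to whole pixels), and AND the masks so only defects
    present in both survive."""
    def threshold(img):
        return img > k_sigma * img.std()
    mask_ab = threshold(diff_ab)
    mask_bc = threshold(diff_bc)
    dy, dx = int(round(-shift_b[0])), int(round(-shift_b[1]))
    mask_bc = np.roll(mask_bc, (dy, dx), axis=(0, 1))
    return mask_ab & mask_bc
```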
Example operations
The following discussion provides a description of example operations of the present invention. First, the hologram is acquired using a CCD camera (as shown in figs. 9 and 10) and stored in a memory. The object wave is defined as ψ_A(r) = A(r)e^{i(k_A·r + φ_A(r))}, and the reference wave is ψ_B(r) = Be^{i(k_B·r + φ_B)}.
Ignoring camera non-linearity and noise, the density of the recorded hologram is:
I_hol(r) = |ψ_A + ψ_B|² = A² + B² + ψ_Aψ_B* + ψ_A*ψ_B    (1)
The phase difference between the two waves is defined as Δφ = φ_A − φ_B, and the vector difference between the two wave vectors is Δk = (k_A − k_B). Equation (1) reduces to:
I_hol(r) = A² + B² + 2μ₀AB cos(Δk·r + Δφ)    (2)
where μ₀ represents the coherence factor. Edgar has demonstrated additional detail along these lines.
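The reduced density expression can be checked numerically: simulating the recorded density and taking its FFT should reveal sideband peaks at ±Δk. A small NumPy sketch (grid size and parameter values are arbitrary choices for illustration):

```python
import numpy as np

# Simulate the recorded hologram density on a small grid and confirm that
# its FFT exhibits sideband peaks at +/- delta-k (here (0, 8) cycles).
n = 64
y, x = np.mgrid[0:n, 0:n]
A, B, mu0 = 1.0, 1.0, 0.9        # object/reference amplitudes, coherence factor
dk = (0, 8)                      # carrier frequency (cycles across the frame)
dphi = 0.3                       # constant phase difference
I_hol = A**2 + B**2 + 2 * mu0 * A * B * np.cos(
    2 * np.pi * (dk[0] * y + dk[1] * x) / n + dphi)

F = np.abs(np.fft.fft2(I_hol))
F[0, 0] = 0                      # suppress the DC (A^2 + B^2) term
peak = np.unravel_index(np.argmax(F), F.shape)
# peak is one of the two sideband positions, (0, 8) or (0, 56),
# with magnitude mu0 * A * B * n^2.
```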
In a preferred embodiment, this step can be implemented as a direct image capture by the digital holographic imaging system itself and transferred to memory, or simulated in an off-line process by reading the captured image from disk. In this particular preferred embodiment, the image is stored as 16-bit gray scale, but only 12 bits of the actual range (0-4095) are used, since that is the full range of the camera.
The holographic image is then preferably processed to extract the composite wavefront returning from the object, as shown in FIG. 11. In a preferred embodiment, a Fast Fourier Transform (FFT) is performed on the captured (and suitably enhanced) hologram. The FFT of the hologram density is expressed as:
FFT{I_hol}(q) = FFT{A² + B²}(q) + FFT{μ₀ABe^{iΔφ}}(q − Δk) + FFT{μ₀ABe^{−iΔφ}}(q + Δk)    (3)
subsequently, the carrier frequency of the holographic image is found. In one embodiment, this first requires that the frequencies in the sideband concentrations (as shown in FIG. 12) must be set in order to properly isolate the sidebands. This may be done for the first hologram processed and the same location for all subsequent images, or the carrier frequency may be reset for each individual hologram. First, the location of the hologram FFT from equation (2) is found <math> <mrow> <mover> <mi>q</mi> <mo>→</mo> </mover> <mo>=</mo> <mi>Δ</mi> <mover> <mi>k</mi> <mo>→</mo> </mover> </mrow> </math> (or <math> <mrow> <mrow> <mover> <mi>q</mi> <mo>→</mo> </mover> <mo>=</mo> <mo>-</mo> <mi>Δ</mi> <mover> <mi>k</mi> <mo>→</mo> </mover> <mo>)</mo> </mrow> <mo>.</mo> </mrow> </math> Since the modulus of the sidebands exhibits peaks at these two positions, it can be left by searching <math> <mrow> <mover> <mi>q</mi> <mo>→</mo> </mover> <mo>=</mo> <mn>0</mn> </mrow> </math> FFT { I ofholModulus of the location to find the desired location.
In some embodiments, the search area of the sideband is defined as a parameter. The modulus of the hologram FFT is calculated in a defined area and the position of the maximum point is selected as the carrier frequency. In all implementations, the search area may be specified as a region of interest (maximum and minimum x and y values).
In a particular embodiment, the carrier frequency is calculated to sub-pixel accuracy by interpolation of the FFT modulus in the region of the found maximum. To correct the sub-pixel position of the carrier frequency, the FFT is then modulated by a phase-only function after isolating the sidebands.
The search area of the sidebands may be specified as the area of interest in the fourier domain or as the number of pixels off the x and y axes that are not searched in the fourier domain. In some embodiments, this parameter may be selectively modified. Alternatively, the user may optionally set the manual position of the sidebands, which sets the carrier frequency position to a fixed value for all images. (in certain embodiments, the same effect may be achieved by setting the search area to a single point.)
For the test series, the carrier frequency can be assumed to be stable, so that no recalculation is required for each hologram. The carrier frequency can be found once and that frequency is used in the same examination for all subsequent holograms.
After the sideband is located, the quadrant of the hologram FFT centered on the carrier frequency is extracted, as shown in FIG. 13. This isolation of the sideband quadrant selects one of the sideband terms from equation (3) and demodulates it to eliminate the dependence on Δk:
Ψ_S(q) = FFT{μ₀ABe^{iΔφ}}(q)    (4)
The implementation of this step is simple. Note that in some embodiments, the quadrant is not extracted from the FFT; rather, the FFT is recentered on the carrier frequency and retains its original resolution.
The extracted sidebands may then be filtered. In a particular embodiment, a butterworth low pass filter is applied to the extracted sidebands in order to reduce the effect of any aliasing from the autocorrelation band and to reduce noise in the image.
The low-pass filter H_LP is applied to the sideband as shown in FIG. 14. The filtered sideband is the FFT of the composite image wave ψ that we wish to reconstruct:
Ψ(q) = H_LP(q)·Ψ_S(q)    (5)
The Butterworth low-pass filter is defined by the following equation:
H_LP(q) = 1 / (1 + (|q|/q_c)^{2N})    (6)
where q_c is the cutoff frequency of the filter (i.e., the distance from the center of the filter at which the filter gain is reduced to half its value), and N is the order of the filter (i.e., how sharply the filter cuts off).
In embodiments using off-axis illumination, the low-pass filter may need to be moved off-center in order to more accurately capture the sideband information. Letting q₀ denote the desired position of the filter center (the offset vector), the equation for the Butterworth filter becomes:
H_LP(q) = 1 / (1 + (|q − q₀|/q_c)^{2N})
In a preferred embodiment, the Butterworth filter need be computed only once for a given set of parameters and image size, and is stored for application to each image.
In the preferred embodiment, a cutoff frequency, also referred to as the filter "size" or "radius", and the order of the filter must be specified.
If an off-axis filter is required, the offset vector of the filter center should also be specified; this parameter should also be selectively adjustable. In a preferred embodiment, a flag indicating whether to use a low-pass or a band-pass filter may be provided to select the type of filter employed in the processing software.
In some embodiments, the processing software has the capability to employ a band-pass filter instead of a low-pass filter. The use of band-pass filters has been shown to improve defect detection performance on certain defective wafers. The band-pass filter is implemented as the pixel-wise product of a Butterworth low-pass and a Butterworth high-pass filter; the high-pass filter may be defined as one minus a low-pass filter, and is specified with the same parameter types as the low-pass filter.
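The Butterworth filters can be sketched in NumPy as follows (function names and the FFT-ordered grid convention are mine; the off-axis offset vector follows the description above):

```python
import numpy as np

def butterworth_lowpass(shape, q_c, N, center=(0.0, 0.0)):
    """H(q) = 1 / (1 + (|q - q0|/q_c)^(2N)) on an FFT-ordered grid;
    `center` is the optional off-axis offset vector q0."""
    fy = np.fft.fftfreq(shape[0]) * shape[0]
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    qy, qx = np.meshgrid(fy, fx, indexing="ij")
    q = np.hypot(qy - center[0], qx - center[1])
    return 1.0 / (1.0 + (q / q_c) ** (2 * N))

def butterworth_bandpass(shape, q_lo, q_hi, N):
    # Band-pass as the product of a low-pass and a high-pass
    # (the high-pass being one minus a smaller low-pass).
    lp = butterworth_lowpass(shape, q_hi, N)
    hp = 1.0 - butterworth_lowpass(shape, q_lo, N)
    return lp * hp
```

Note that the gain is exactly 1/2 at |q| = q_c, matching the cutoff definition in the text.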
Then, an inverse Fourier transform (IFFT) is performed on the filtered sideband to derive the composite image wave, producing an amplitude image and a phase image, as shown in figs. 15 and 16 respectively. The IFFT of the filtered sideband yields:
ψ(r) = IFFT{H_LP·Ψ_S}(r) ≈ μ₀A(r)e^{iΔφ(r)}    (7)
Here it has been assumed that the aperture of the low-pass filter completely isolates the sideband. In practice this is not possible, but the assumption is necessary to achieve a tractable representation, even though equation (7) does not then exactly represent the result.
If the phase of the resulting composite image is not sufficiently flat (i.e., there are several phase wraps on the image), then planar field correction can be applied to improve the results. This involves dividing the composite image by the reference plane (mirror image) to correct for variations in illumination intensity and, in particular, background phase.
First, let ψ_ff denote the composite image of a reference-plane hologram (processed as described above). The flat-field-corrected image is:
ψ_corr(r) = ψ(r) / ψ_ff(r)    (8)
To accomplish this step, the flat-field hologram is processed into a composite image in a prior inspection pass. That image is stored, and the inspection composite image is divided by it pixel by pixel. The parameters used to generate the composite image (sideband search area and filter parameters) are typically the same for the flat-field hologram as for the inspection hologram.
The reference plane corrects both density and phase; consequently, the modulus image resulting from equation (8) may not be useful for viewing or for amplitude-only processing algorithms. This problem can be solved by modifying the reference-plane image ψ_ff to have unit modulus at each pixel. The flat-field correction then corrects only the non-planar phase in the inspection image.
Difference operation
A difference operation is required to identify the difference between two corresponding composite images. One preferred method of performing the difference operation is as follows.
After the two composite images are obtained, the images are aligned so that direct subtraction will reveal any differences between the two. In this embodiment, the registration algorithm is based on the cross-correlation of the two images, so performance can be improved by eliminating DC levels and low-frequency variations from the images. This allows the sharp edges and high-frequency content of the features, rather than any alignment of low-frequency variations, to dominate the match.
A Butterworth high-pass filter H_HP is applied (in the frequency domain) to each of the composite images ψ₁ and ψ₂ to be registered:
Ψ'_j(q) = H_HP(q)·FFT{ψ_j}(q),  j = 1, 2    (9)
This effectively band-pass filters the images. The high-pass filter H_HP is defined as:
H_HP(q) = 1 − 1 / (1 + (|q|/q_c)^{2N})    (10)
the implementation of the high-pass filtering step is simple. The size of the high pass filter used may be user defined or determined as a fixed percentage of the size of the low pass filter applied above. The high pass filter is preferably calculated once and stored for application to each image.
High-pass filter HHPMay be specified by the user or fixed in a predetermined relationship to the low pass filter parameters. In some embodiments, it may be desirable to limit the parameters of this step to a fixed relationship with the low pass filter parameters in order to reduce the number of user variables.
After filtering, the cross-correlation of the two images is calculated. The peak of the cross correlation surface preferably occurs at the location of the correct registration shift between the images.
The cross-correlation γ between the two band-pass filtered images is calculated by taking the inverse Fourier transform of the product of the first image's transform and the conjugate of the second image's transform:
γ(r) = IFFT{Ψ'₁(q)·Ψ'₂*(q)}(r)    (11)
The registration deviation between the two images corresponds to the position at which the cross-correlation surface reaches its maximum, i.e., the value r₀ for which |γ(r)| is largest:
r₀ = argmax_r |γ(r)|    (12)
An area centered on the origin of the cross-correlation is searched for the maximum. Once the location of the maximum is found, a quadric surface is fitted to the 3 × 3 neighborhood centered on that location, and the sub-pixel location of the peak of the fitted surface serves as the sub-pixel registration deviation. The equation for the quadric surface is:
z = ax² + bxy + cy² + dx + ey + f    (13)
The values of the coefficients a, b, c, d, e, and f are calculated via a matrix solution routine. A 9 × 6 matrix A is formed whose rows contain the values of the terms x², xy, y², x, y, 1 for each position in the 3 × 3 neighborhood, and the (unknown) coefficients form the 6 × 1 vector z = [a b c d e f]ᵀ. The cross-correlation values corresponding to each position are placed in the 9 × 1 vector h = [|γ(r₁)| |γ(r₂)| ⋯]ᵀ. These satisfy the linear system:
Az = h    (14)
The coefficients of the fitted surface are then found by solving equation (14) in the least-squares sense:
z = (AᵀA)⁻¹Aᵀh    (15)
The position (x_max, y_max) of the maximum of the quadric surface is then calculated from the coefficients and used as the sub-pixel registration offset value:
x_max = (be − 2cd) / (4ac − b²),  y_max = (bd − 2ae) / (4ac − b²)
The determination of the position at which the cross-correlation surface is a maximum can be achieved in several different ways. In one implementation, the interpolation may be performed by fitting a quadratic surface to a 3 × 3 neighborhood centered on the maximum and finding the location of the maximum that fits the surface. In another implementation, there is an option to perform such interpolation using three points in each direction (x and y), respectively.
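The 3 × 3 quadric fit can be sketched as follows (a least-squares sketch of equations (13)-(14); the closed-form peak location follows from setting the gradient of the fitted surface to zero):

```python
import numpy as np

def subpixel_peak(corr_abs, peak):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to the 3x3 neighborhood
    of `peak` (least squares) and return the sub-pixel peak location."""
    py, px = peak
    rows, vals = [], []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            rows.append([dx * dx, dx * dy, dy * dy, dx, dy, 1.0])
            vals.append(corr_abs[py + dy, px + dx])
    a, b, c, d, e, f = np.linalg.lstsq(np.array(rows), np.array(vals),
                                       rcond=None)[0]
    # Stationary point of the fitted surface (gradient = 0).
    det = 4 * a * c - b * b
    x_max = (b * e - 2 * c * d) / det
    y_max = (b * d - 2 * a * e) / det
    return (py + y_max, px + x_max)
```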
The maximum registration deviation must typically be specified, usually as the maximum number of pixels in any direction by which the images can be moved relative to each other to achieve alignment.
The registration shift determination described above essentially completes the registration process. Note that this process generally conforms to the registration process described in more detail above.
After determining the registration shift between the two images, the first image is shifted by that amount to align it with the second image. The image is shifted by the registration deviation r₀:
ψ₁'(r) = ψ₁(r + r₀)
Since the registration shift r₀ is typically a non-integer value, a method of interpolating the sampled image must be chosen. Two preferred methods for interpolation are bilinear interpolation and frequency-domain interpolation. Bilinear interpolation is performed in the spatial domain using the four whole pixels nearest the desired sub-pixel position. Suppose the interpolated value of ψ at position (x + Δx, y + Δy) is desired, where x and y are integers, 0 ≤ Δx < 1, and 0 ≤ Δy < 1. The bilinear interpolation is calculated as:
ψ(x+Δx, y+Δy) = (1−Δx)·[(1−Δy)·ψ(x, y) + Δy·ψ(x, y+1)] + Δx·[(1−Δy)·ψ(x+1, y) + Δy·ψ(x+1, y+1)]    (19)
Frequency-domain interpolation is performed using the basic shift property of the Fourier transform:
ψ(x+Δx, y+Δy) = IFFT{Ψ(u, v)·e^{−i2π(Δx·u + Δy·v)}}    (20)
Unlike equation (19), equation (20) places no restriction on the range of Δx and Δy.
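Both interpolation schemes can be sketched in NumPy. Note that the Fourier-shift sign below is chosen to match NumPy's forward-FFT convention for sampling at (y + Δy, x + Δx); equation (20) may use the opposite transform sign convention.

```python
import numpy as np

def shift_bilinear(psi, dy, dx):
    """Bilinear interpolation in the spirit of equation (19); assumes
    0 <= dy, dx < 1. Output pixel (y, x) samples psi at (y + dy, x + dx);
    the last row/column is edge-clamped."""
    p = np.pad(psi, ((0, 1), (0, 1)), mode="edge")
    return ((1 - dy) * ((1 - dx) * p[:-1, :-1] + dx * p[:-1, 1:])
            + dy * ((1 - dx) * p[1:, :-1] + dx * p[1:, 1:]))

def shift_fourier(psi, dy, dx):
    """Frequency-domain interpolation per equation (20); dy, dx unrestricted."""
    F = np.fft.fft2(psi)
    v = np.fft.fftfreq(psi.shape[0])[:, None]   # cycles per sample, y axis
    u = np.fft.fftfreq(psi.shape[1])[None, :]   # cycles per sample, x axis
    return np.fft.ifft2(F * np.exp(1j * 2 * np.pi * (dy * v + dx * u)))
```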
The two images being compared must be normalized so that, when subtracted, their amplitudes and phases align and produce a near-zero result except at defects. There are two main methods for normalizing the composite images. The first and simplest method, called "composite normalization", normalizes the first image of a pair to the second by multiplying it by the ratio of the complex means of the two images. The complex mean of an image is defined as:
⟨ψ⟩ = (1/N²)·Σ_r ψ(r)    (21)
where N² is the number of pixels in the image. The equation normalizing image ψ₁ to ψ₂ is:
ψ₁'(r) = ψ₁(r)·(⟨ψ₂⟩/⟨ψ₁⟩)    (22)
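Composite normalization can be sketched in one line of NumPy (function name assumed):

```python
import numpy as np

def composite_normalize(psi1, psi2):
    """Normalize psi1 to psi2 by the ratio of their complex means, so that
    the average amplitude and phase of the two images match."""
    return psi1 * (psi2.mean() / psi1.mean())
```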
In the second method, called "amplitude-phase normalization", the amplitudes and phases of the images are aligned directly, rather than the real and imaginary parts. First, the average amplitude of each image is calculated:
⟨A⟩ = (1/N²)·Σ_r |ψ(r)|    (23)
Second, the phase offset between the two images is calculated. The phase difference between the two images is:
Δφ(r) = arg(ψ₁(r)) − arg(ψ₂(r))    (24)
To find the phase offset, it is necessary to find the constant offset of this phase-difference image that produces the fewest phase wraps in the image. Since the image is expected to be fairly uniform, it is more reliable to find the phase offset that produces the largest number of phase wraps and then conclude that the correct phase offset is π radians away from that one. The result is a phase offset Δφ₀, which is used together with the ratio of the amplitude averages to normalize the first image to the second:
ψ₁'(r) = ψ₁(r)·(⟨A₂⟩/⟨A₁⟩)·e^{−iΔφ₀}    (25)
the implementation of this step is quite simple mathematically. Amplitude-phase normalization tends to be more computationally intensive and may not be necessary when employing a wavefront matching step. If wavefront matching is employed, no normalization step need be performed at all, as wavefront matching is a form of normalization.
Wavefront matching adjusts the phase of the second image by a filtered version of the phase ratio between the images, to eliminate from the difference image low-frequency variations caused by phase anomalies. First, the phase relationship between the images is found by dividing the two composite images:
R(r) = ψ₁(r) / ψ₂(r)    (26)
This ratio is then low-pass filtered in the frequency domain using a filter with a very low cutoff frequency:
R_LP(r) = IFFT{H_W(q)·FFT{R}(q)}(r)    (27)
where H_W is a third-order Butterworth filter with a cutoff frequency of six pixels. The filtered ratio is used to modify the second image so that low-frequency variations in the phase difference are minimized:
ψ₂'(r) = ψ₂(r)·R_LP(r)    (28)
The implementation of this step using the above equations is simple. The order and cutoff frequency of the low-pass filter used in this step are fixed. Note further that in a preferred embodiment it is the second image, rather than the first, that is modified by this algorithm; this minimizes the number of pixels at which the ratio is undefined because of zero values in the denominator at boundary pixels.
In some cases, differences between implementations in how boundary pixels are handled when shifting an image may cause this step to propagate differences across the entire image. Unless the treatment of boundary pixels during shifting is identical across implementations, the wavefront matching step will produce (usually quite small) differences over the whole image. In addition, wavefront matching may produce artifacts near the boundary because of the periodicity assumption of the FFT. These artifacts may extend beyond the border area that is excluded from defect detection.
Then, the vector difference between the two registered, normalized, phase-corrected images is calculated, as shown by the first difference image in FIG. 17 and the second difference image in FIG. 18. The difference between the images is:
δ(r) = |ψ₁'(r) − ψ₂'(r)|    (29)
The implementation of this step is simple. Note that in alternative embodiments, the phase difference and amplitude difference may also be used to detect defects.
Pixels near the edges of the difference image are set to zero in order to exclude defect detection in those regions, which are prone to produce artifacts. Each pixel in the vector difference image within the specified number of pixels of the sides of the image is set to zero. This requires that the number of pixels of each edge to be cleared must be specified. In some embodiments, the number of pixels is taken to be equal to the maximum allowed registration shift in pixels.
Defect detection
Thresholds are determined for the vector difference images to indicate the locations of possible defects between each pair of images, as shown in figs. 19 and 20. The standard deviation of the vector difference image is computed, the threshold is set at a user-specified multiple of that standard deviation, and the difference image is thresholded at that value:
The initial threshold is calculated from the standard deviation of the entire difference image. In one implementation, the threshold is repeatedly refined by recalculating the standard deviation while excluding pixels above the current threshold, until no further change occurs. This effectively lowers the threshold for images with many defects, sometimes substantially. In a preferred embodiment, the multiple of the standard deviation at which the threshold is set is specified by the user.
The two thresholded difference images, which are used to determine which image a defect originated from, are then aligned. Because the first image of any pair is registered to the second image of that pair, the two resulting difference images lie in different reference frames. In a sequence of three composite images ψ1, ψ2 and ψ3 compared with one another, the first thresholded difference δ2,1 is aligned with ψ2 and the second difference δ3,2 is aligned with ψ3. Since these two thresholded difference images are used to find the defects in image ψ2, the image δ3,2 must be shifted into alignment with ψ2. Because the thresholded image is binary, it need only be aligned to whole-pixel accuracy. The registration shift between ψ2 and ψ3 is already known to sub-pixel accuracy from previous calculations; this shift is rounded to the nearest whole pixel and applied to δ3,2 in the direction opposite to its previous application (which shifted image 2 into alignment with image 3):
The implementation of this step is simple.
Subsequently, a logical AND operation is applied to the aligned thresholded difference images in order to eliminate any detected defect that does not appear in both images, as shown in fig. 21. This reduces the number of false-positive defects and assigns each defect to the appropriate image in the sequence.
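The whole-pixel alignment and the AND step can be sketched together. This NumPy illustration is ours, not the patent's: `np.roll` wraps at the boundary, which is harmless here only because border pixels were zeroed earlier, and the shift convention (row, column; opposite sign to the earlier application) is an assumption:

```python
import numpy as np

def align_and_and(delta21, delta32, shift_23):
    """Round the known sub-pixel shift between psi2 and psi3 to whole
    pixels, apply it to delta32 in the direction opposite to its earlier
    use, then AND the two binary maps (implemented as a multiplication,
    since their values are limited to 0 or 1)."""
    dy, dx = [int(round(s)) for s in shift_23]
    aligned = np.roll(delta32, (-dy, -dx), axis=(0, 1))  # opposite direction
    return delta21 * aligned
```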
When image ψ2 is compared with the corresponding images ψ1 and ψ3, the defects found in image ψ2 are given by the logical AND of the two aligned thresholded differences, d2 = δ2,1 ∧ δ̃3,2, where δ̃3,2 denotes δ3,2 after the whole-pixel shift described above.
In one particular embodiment, the logical AND is implemented as a multiplication of the two thresholded images, since their values are limited to 0 or 1.
In an alternative embodiment, the above steps may be reordered so that the alignment and logical AND steps are performed before thresholding; sub-pixel alignment may then be used, and the logical AND step becomes a true multiplication.
In some embodiments, the resulting defective areas may be discarded when they fall below a certain size threshold. In addition, morphological operations on the defective areas can be used to "clean" their shapes. Shape modification can be implemented with mathematical morphology operations, specifically morphological closing. These operators are described below.
Let K denote the structuring element (or kernel) of an operator. Define the symmetric set K̃ = {−r⃗ : r⃗ ∈ K}, the reflection of K about the origin. Translation of a set to a point s⃗ is indicated by a subscript; e.g., K_s⃗ is the set K translated to the point s⃗. The set-processing morphological operations of erosion and dilation are defined by the following equations:

Erosion: d Θ K̃ = {s⃗ : K_s⃗ ⊆ d} = ∩_{r⃗∈K} d_{−r⃗}    (31)

Dilation: d ⊕ K = {s⃗ : K̃_s⃗ ∩ d ≠ ∅} = ∪_{r⃗∈K} d_{r⃗}    (32)
The symbols Θ and ⊕ represent Minkowski subtraction and Minkowski addition, respectively. The erosion of a binary image d is true at pixels where the structuring element K can be translated while remaining entirely within the true region of d. The dilation of d is true where K can be translated and still intersect the true points of d at one or more points.
The morphological opening and closing operations are sequential applications of erosion and dilation, as follows:

Opening: (d ∘ K)(r⃗) = [(d Θ K̃) ⊕ K](r⃗)    (33)

Closing: (d • K)(r⃗) = [(d ⊕ K) Θ K](r⃗)    (34)
Morphological closing with a square kernel K is the most likely choice for shape modification of the defect map d.
The size limit may be implemented by counting the number of pixels in each connected component, and this step may be combined with the connected-component analysis. In one embodiment, the shape modification uses mathematical morphology operations, specifically morphological closing with a 3 × 3 square kernel.
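Equations (31)–(34) can be realized directly from the Minkowski definitions. A minimal NumPy sketch (our own, not the patented code): each kernel offset translates the image with `np.roll`, which wraps at the boundary; that wraparound is tolerable here only because border pixels are zeroed earlier in the pipeline:

```python
import numpy as np

def dilate(d, K):
    """Minkowski addition (eq. 32): union of copies of d translated by
    each kernel offset."""
    out = np.zeros_like(d)
    for dy, dx in K:
        out |= np.roll(d, (dy, dx), axis=(0, 1))
    return out

def erode(d, K):
    """Minkowski subtraction with the reflected kernel (eq. 31):
    intersection of copies of d translated by each negated offset."""
    out = np.ones_like(d)
    for dy, dx in K:
        out &= np.roll(d, (-dy, -dx), axis=(0, 1))
    return out

def close_(d, K):
    """Morphological closing (eq. 34): dilation followed by erosion."""
    return erode(dilate(d, K), K)

# 3 x 3 square kernel, as in the preferred embodiment
K3 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```

Closing with the 3 × 3 kernel fills one-pixel gaps inside a defect cluster while leaving isolated pixels in place, which is the "shape cleaning" behavior described above.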
In a preferred embodiment, the minimum defect size to be accepted must be specified for the size-limiting operation; in some embodiments this parameter may be modified by the user. For the shape modification operation, the size and shape of the kernel and the type of morphological operator must be specified by the user. In addition, the user may specify whether shape modification is used at all.
The regions of the resulting defect image that contain non-zero pixels are converted to a "connected component" description. The connected-components routine preferably looks for defect clusters that are contiguous in the x-direction. Once a linear defect string is identified, it is merged with other blobs it may touch in the y-direction; merging involves redefining the smallest bounding rectangle that entirely contains the defect cluster. A limit of, for example, 50 defects may be imposed in the detection routine to increase efficiency: if at any point the defect count exceeds the limit (plus a tolerance), the analysis is aborted. Once the entire image has been scanned, the merging procedure is repeated until no further defects are merged.
The connected components are then represented as either an amplitude image, as shown in fig. 22, or a phase image, as shown in fig. 23. In one embodiment, the connected components are mapped into a result file and the basic statistics of the defects are computed. In a particular embodiment, only the coordinates of the bounding rectangle of the defect are reported.
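The run-merging described above is equivalent to connected-component labeling with bounding rectangles. A minimal breadth-first sketch, assuming 8-connectivity (our assumption; the patent's x-run/y-merge procedure yields the same components) and the illustrative name `defect_components`:

```python
import numpy as np
from collections import deque

def defect_components(defect, max_defects=50):
    """Label 8-connected defect regions and return the minimal bounding
    rectangle (y0, x0, y1, x1), inclusive, of each.  Returns None if the
    component count exceeds max_defects, mirroring the patent's
    abort-on-too-many-defects efficiency limit."""
    seen = np.zeros(defect.shape, bool)
    boxes = []
    for y, x in zip(*np.nonzero(defect)):
        if seen[y, x]:
            continue
        if len(boxes) >= max_defects:
            return None  # too many defects: analysis aborted
        q = deque([(y, x)])
        seen[y, x] = True
        y0 = y1 = y
        x0 = x1 = x
        while q:
            cy, cx = q.popleft()
            # grow the minimal bounding rectangle
            y0, y1 = min(y0, cy), max(y1, cy)
            x0, x1 = min(x0, cx), max(x1, cx)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < defect.shape[0] and 0 <= nx < defect.shape[1]
                            and defect[ny, nx] and not seen[ny, nx]):
                        seen[ny, nx] = True
                        q.append((ny, nx))
        boxes.append((y0, x0, y1, x1))
    return boxes
```

Only the bounding-rectangle coordinates are retained per component, matching the embodiment in which just those coordinates are reported.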
Although the disclosed embodiments have been described in detail, it should be understood that various changes, substitutions, and alterations can be made hereto without departing from the spirit and scope thereof.
Claims (33)
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US41024002P | 2002-09-12 | 2002-09-12 | |
| US60/410,152 | 2002-09-12 | ||
| US60/410,153 | 2002-09-12 | ||
| US60/410,157 | 2002-09-12 | ||
| US60/410,240 | 2002-09-12 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN1695166A (en) | 2005-11-09 |
Family
ID=35353505
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 03824976 Pending CN1695166A (en) | 2002-09-12 | 2003-09-12 | System and method for acquiring and processing complex images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN1695166A (en) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101635049A (en) * | 2009-06-04 | 2010-01-27 | 北京中星微电子有限公司 | Image area clustering method, image area clustering device, outline searching method and outline searching device |
| CN102208106A (en) * | 2010-03-31 | 2011-10-05 | 富士通株式会社 | Image matching device and image matching method |
| CN102279191A (en) * | 2010-06-13 | 2011-12-14 | 中钞特种防伪科技有限公司 | Detection method and apparatus for defects in periodic texture images |
| CN102460129A (en) * | 2009-06-22 | 2012-05-16 | Asml荷兰有限公司 | Object inspection systems and methods |
| CN106528899A (en) * | 2015-09-10 | 2017-03-22 | 中芯国际集成电路制造(上海)有限公司 | Graph selection method used for light source-mask optimization |
| CN109506590B (en) * | 2018-12-28 | 2020-10-27 | 广东奥普特科技股份有限公司 | A Rapid Location Method of Boundary Jump Phase Error |
| CN113554636A (en) * | 2021-07-30 | 2021-10-26 | 西安电子科技大学 | Chip defect detection method based on generation of countermeasure network and computer generated hologram |
| CN114897797A (en) * | 2022-04-24 | 2022-08-12 | 武汉海微科技有限公司 | Method, device and equipment for detecting defects of printed circuit board and storage medium |
| US20230029274A1 (en) * | 2021-04-28 | 2023-01-26 | Canon Kabushiki Kaisha | Displacement meter and article manufacturing method |
| WO2023060797A1 (en) * | 2021-10-13 | 2023-04-20 | 东方晶源微电子科技(北京)有限公司 | Image processing method and device for semiconductor electron beam defect monitoring, and system |
| TWI900954B (en) * | 2022-12-28 | 2025-10-11 | 日商斯庫林集團股份有限公司 | Image processing method and training data generation method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
| WD01 | Invention patent application deemed withdrawn after publication |