
HK1119231B - Methods and apparatus for wavefront manipulations and improved 3-d measurements


Info

Publication number: HK1119231B
Application number: HK08105578.1A
Authority: HK (Hong Kong)
Other versions: HK1119231A1 (en)
Other languages: Chinese (zh)
Inventors: 约埃尔‧阿里耶利, 谢伊‧韦尔夫林, 埃曼努埃尔‧兰兹曼, 加夫里尔‧费金, 塔尔‧库兹涅兹, 约拉姆‧萨班
Original Assignee: Icos Vision Systems N.V.
Priority claimed from PCT/IL2005/000285 (publication WO2005086582A2)


Description

Methods and apparatus for wavefront manipulations and improved 3-D measurements
Technical Field
The present invention relates to the field of composite optical wavefront measurement in metrology applications, particularly the measurement of integrated circuits incorporating thin films, and to image-processing applications.
Background
PCT application No. PCT/IL/01/00335, publication No. WO 01/77629, U.S. patent No. 6,819,435, and PCT application No. PCT/IL02/00833, publication No. WO03/062743, which are co-pending and commonly assigned and which are hereby incorporated by reference in their entirety, describe methods and systems for wavefront analysis and for surface mapping, phase change analysis, spectral analysis, object inspection, stored data information retrieval, three-dimensional imaging and other suitable applications using wavefront analysis.
Some principles of these methods are described with reference to FIGS. 1 and 2. FIG. 1 is a simplified, partially schematic illustration of the wavefront analysis functionality. The functionality of FIG. 1 may be generalized to include the following sub-functions:
I. obtaining a plurality of different phase change transformed wavefronts corresponding to an analyzed wavefront having an amplitude and a phase;
II. obtaining a plurality of intensity maps of the plurality of phase change transformed wavefronts; and
III. using the plurality of intensity maps to obtain an output indicative of at least one, and possibly both, of the phase and amplitude of the analyzed wavefront.
Referring to FIG. 1, the first sub-function, denoted "A", may be described as follows. A wavefront, which may be represented by a plurality of point sources, is generally designated by reference numeral 100. The wavefront 100 has a typically spatially non-uniform phase characteristic, represented by a solid line and generally designated by reference numeral 102, and a typically spatially non-uniform amplitude characteristic, represented by a dashed line and generally designated by reference numeral 103. Such a wavefront may be obtained in a conventional manner by receiving light from any object, for example by reading an optical disc, such as a DVD or compact disc 104.
This method enables enhanced measurement of a phase characteristic, such as that denoted by reference numeral 102, and of an amplitude characteristic, such as that denoted by reference numeral 103. It should be noted that, by the definition of phase, the phase characteristic is a relative characteristic, whether viewed as a relative phase map or as the phase difference between any two points in the wavefront. Throughout these applications and the claims herein, references to "phase" measurements or calculations, or to similar terms such as phase maps, are to be understood as referring to the measurement or calculation of a relative phase, phase shift or phase difference in the particular context under discussion.
A transformation, symbolically designated by reference numeral 106, is applied to the wavefront 100 to be analyzed, thereby obtaining a transformed wavefront, symbolically designated by reference numeral 108. A plurality of different phase changes, preferably spatial phase changes, represented by optical path delays 110, 112 and 114, are applied to the transformed wavefront 108, thereby obtaining a plurality of different phase change transformed wavefronts, represented by reference numerals 120, 122 and 124, respectively. It will be appreciated that the difference between the plurality of different phase change transformed wavefronts is that part of the transformed wavefront is delayed differently relative to the rest.
The second sub-function, denoted "B", may be realized by applying a transform, preferably a Fourier transform, to the plurality of different phase change transformed wavefronts, and then detecting the intensity characteristics of the resulting wavefronts. The outputs of such detection are intensity maps, such as those labeled with reference numerals 130, 132 and 134.
The third sub-function, denoted "C", may be implemented as follows: the plurality of intensity maps, such as maps 130, 132 and 134, are expressed, for example using a computer 136, as at least one mathematical function of the phase and amplitude of the analyzed wavefront, at least one and possibly both of which are unknown, and of the plurality of different phase changes, which are typically known and are here represented by the optical path delays 110, 112 and 114 applied to the transformed wavefront 108. This at least one mathematical function is then used, for example by the computer 136, to obtain an indication of at least one, and possibly both, of the phase and amplitude of the analyzed wavefront, represented here by a phase function denoted by reference numeral 138 and an amplitude function denoted by reference numeral 139, which are seen to represent the phase characteristic 102 and the amplitude characteristic 103, respectively, of the wavefront 100. The wavefront 100 may represent stored information or a height map of a measured object, such as the compact disc or DVD 104 in this example.
FIG. 2 is a simplified, partial block diagram of a wavefront analysis system suitable for implementing the functionality of FIG. 1. Referring to FIG. 2, a wavefront, designated here by reference numeral 150, is focused by a lens 152 onto a phase controller 154, the phase controller 154 preferably being located at the focal plane of the lens 152. The phase controller 154 generates the phase changes and may be, for example, a spatial light modulator or a series of differently transparent, spatially inhomogeneous objects. A second lens 156 is positioned so as to image the wavefront 150 onto a detector 158, such as a CCD detector. The second lens 156 is preferably positioned such that the detector 158 is located at its focal plane. The output of the detector 158 is preferably provided to data storage and processing circuitry 160, which preferably performs the function "C" described above in connection with FIG. 1.
FIG. 3 is a simplified, partial illustration of a system for surface mapping using the functionality and structure of FIG. 1. Referring to FIG. 3, a beam of radiation, such as light or acoustic energy, is provided from a radiation source 200, optionally via a beam expander 202, to a beam splitter 204, which reflects at least part of the radiation onto a surface 206 to be inspected. The radiation reflected from the inspected surface 206 is a surface-mapping wavefront having amplitude and phase and containing information about the surface 206. At least part of the radiation incident on the surface 206 is reflected from the surface 206, transmitted through the beam splitter 204 and focused by a focusing lens 208 onto a phase controller 210, the phase controller 210 preferably being located in the image plane of the radiation source 200. The phase controller 210 may be, for example, a spatial light modulator or a series of differently transparent, spatially inhomogeneous objects. A second lens 212 is positioned so as to image the surface 206 onto a detector 214, such as a CCD detector. Preferably, the second lens 212 is positioned so that the detector 214 lies in its focal plane. An example of the output of the detector 214 is a set of intensity maps, designated by reference numeral 215. The output of the detector 214 is preferably provided to data storage and processing circuitry 216, which preferably performs the function "C" described in FIG. 1 and provides an output indicative of at least one, and possibly both, of the phase and amplitude of the surface-mapping wavefront. This output is preferably further processed to obtain information about the surface 206, such as its geometric variations and reflectivity. The phase controller 210 is depicted as applying a plurality of different spatial phase changes to the radiation wavefront reflected from the surface 206 and Fourier transformed by the lens 208. The application of the plurality of different spatial phase changes provides a plurality of different phase change transformed wavefronts that can then be detected by the detector 214.
The basic principles of the algorithm and computational method are described with reference to FIG. 4, which is a simplified functional block diagram of part of the functionality of FIG. 1. In the exemplary arrangement shown in FIG. 4, the transformation applied to the analyzed wavefront is a Fourier transform, at least three different spatial phase changes are applied to the transformed wavefront, and at least three intensity maps are used to obtain the indication of at least one of the phase and amplitude of the wavefront. As shown in FIG. 4, the intensity maps are used to obtain an output indicative of at least one, and possibly both, of the phase and amplitude of the analyzed wavefront, in accordance with the sub-function "C" described above in FIG. 1.
Referring to FIG. 4, the analyzed wavefront is represented as a first complex-valued function f(x) = A(x)e^(iφ(x)), where "x" is a general indication of spatial location. The complex function has an amplitude distribution A(x) and a phase distribution φ(x), equal to the amplitude and phase of the analyzed wavefront, and is designated by reference numeral 300. Each of the plurality of different spatial phase changes is applied to the transformed wavefront, preferably by applying a spatially uniform phase delay of known value to a given spatial region of the transformed wavefront. The spatial function governing each phase change is denoted "G"; an example of such a function, for a phase delay value θ, is designated by reference numeral 304. The function "G" gives the phase change applied at each spatial location of the transformed wavefront. In the example designated by reference numeral 304, a phase delay of value θ is applied to a spatially central region of the transformed wavefront, as indicated by the central portion of the function, whose value θ is greater than the value of the function elsewhere.
A plurality of expected intensity maps, labeled as spatial functions I1(x), I2(x) and I3(x), are each expressed as a function of the first complex-valued function f(x) and the spatial function G, as indicated at reference numeral 308. Next, a second complex-valued function S(x), having absolute value |S(x)| and phase α(x), is defined as the convolution of the first complex-valued function f(x) with the Fourier transform of the spatial function "G". This second complex-valued function, designated by reference numeral 312, is written S(x) = |S(x)|e^(iα(x)) = f(x) ∗ ĝ(x), where the symbol "∗" denotes convolution and ĝ is the Fourier transform of the function "G". The difference between the wavefront phase φ(x) and the phase α(x) of the second complex-valued function is denoted ψ(x) = φ(x) − α(x), designated by reference numeral 316.
Using the expression of each expected intensity map as a function of f(x) and G (reference numeral 308), the definition of the absolute value and phase of S(x) (reference numeral 312), and the definition of ψ(x) (reference numeral 316), each expected intensity map can be expressed as a third function of the amplitude A(x) of the wavefront, the absolute value |S(x)| of the second complex-valued function, the difference ψ(x) between the wavefront phase and the phase of the second complex-valued function, and the known phase delay caused by one of the at least three different phase changes, one corresponding to each of the at least three intensity maps: In(x) = Fn(A(x), |S(x)|, ψ(x), θn), for n = 1, 2 or 3. In these three functions, θ1, θ2 and θ3 are the known values of the uniform spatial phase delay, each applied to a spatial region of the transformed wavefront so as to effect the plurality of different spatial phase changes, which produce the intensity maps I1(x), I2(x) and I3(x), respectively. It should be appreciated that, preferably, at any given spatial location x0 the third function is a function of A, ψ and |S| at the same spatial location x0 only. The intensity maps are designated by reference numeral 324.
For each specific spatial location x0, the third function relates the at least three intensity values I1(x0), I2(x0) and I3(x0) to the at least three different phase delays θ1, θ2 and θ3; by solving the resulting at least three equations, at least some of the three unknowns A(x0), |S(x0)| and ψ(x0) are obtained. This process is typically repeated for all spatial locations, yielding the amplitude A(x) of the wavefront, the absolute value |S(x)| of the second complex-valued function and the difference ψ(x) between the wavefront phase and the phase of the second complex-valued function, as designated by reference numeral 328. Once A(x), |S(x)| and ψ(x) are known, the formula defining the second complex-valued function, designated by reference numeral 312, is solved over all spatial locations x to obtain α(x), the phase of the second complex-valued function, as designated by reference numeral 332. Finally, the phase of the analyzed wavefront is obtained by adding the phase α(x) of the second complex-valued function to the difference ψ(x): φ(x) = α(x) + ψ(x), as indicated by reference numeral 336.
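The per-pixel solution described above can be checked numerically. The following Python sketch is illustrative only, not the patented implementation: it assumes a uniform delay θ applied to a central disc in the Fourier plane, for which one closed form of the "third function" consistent with the definitions above is In(x) = A² + 2(1 − cos θn)|S|² + 2A|S|[cos(ψ − θn) − cos ψ]. All names, grid sizes and parameter values are assumptions made for the demonstration.

```python
import numpy as np

N = 256                                       # grid size (illustrative)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

# Synthetic analyzed wavefront f(x) = A(x) exp(i*phi(x))
A_true = 1.0 + 0.3 * np.exp(-(X**2 + Y**2) / 0.2)
phi_true = 0.7 * np.sin(2 * np.pi * X) * np.cos(np.pi * Y)
f = A_true * np.exp(1j * phi_true)

# Fourier-plane phase manipulation: uniform delay theta on a central disc
fu = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))
U, V = np.meshgrid(fu, fu)
central = np.hypot(U, V) <= 3.0               # central region of G (illustrative)

def intensity_map(theta):
    F = np.fft.fftshift(np.fft.fft2(f))
    F = F * np.where(central, np.exp(1j * theta), 1.0)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))**2

I1, I2, I3 = (intensity_map(t) for t in (np.pi/2, np.pi, 3*np.pi/2))

# S(x): f convolved with the FT of the central-region indicator
S = np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(np.fft.fft2(f)) * central))

# Per-pixel inversion for theta = pi/2, pi, 3pi/2:
A2 = I1 + I3 - I2                             # A^2
c4 = (I1 - I3) / 4.0                          # A|S| sin(psi)
m = (I2 - A2) / 4.0                           # |S|^2 - A|S| cos(psi)
# |S|^2 = q solves q^2 - (2m + A^2) q + (m^2 + c4^2) = 0
disc = np.sqrt(np.maximum((2*m + A2)**2 - 4*(m**2 + c4**2), 0.0))
q_hi, q_lo = ((2*m + A2) + disc) / 2, ((2*m + A2) - disc) / 2
# Both roots reproduce I1..I3, so three maps leave a two-fold ambiguity that
# the method resolves with additional information (e.g. a fourth map); here
# the simulated |S|^2 selects the root, purely to verify the algebra.
q_true = np.abs(S)**2
q = np.where(np.abs(q_hi - q_true) < np.abs(q_lo - q_true), q_hi, q_lo)
psi = np.arctan2(c4, q - m)                   # from the A|S| sin/cos components
A_rec = np.sqrt(np.maximum(A2, 0.0))

# phi = alpha + psi (alpha taken here from the simulated S; the patent
# recovers it by solving the convolution relation once A, |S|, psi are known)
phi_rec = np.angle(S) + psi
err = np.angle(np.exp(1j * (phi_rec - phi_true)))
print("max amplitude error:", np.abs(A_rec - A_true).max())
print("max phase error    :", np.abs(err).max())
```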
A wavefront analysis system may include two functionalities: an imaging functionality and an imaging wavefront analysis functionality, as shown in FIG. 5. Through the imaging functionality 520, the wavefront 510 to be analyzed is imaged to form an imaged wavefront 530. The imaged wavefront is analyzed by the imaging wavefront analysis functionality 540, and the resulting information about the wavefront is subsequently processed and stored by a data storage and processing component 550. It should be noted that the imaging functionality 520 and the imaging wavefront analysis functionality 540 may be implemented as two sub-functionalities of the same combined system, and in some cases the imaged wavefront 530 may be generated internally within that combined system.
Disclosure of Invention
The present invention seeks to provide improved methods and apparatus for performing wavefront analysis and 3-D measurements, and in particular methods and apparatus based on analyzing the output of an intermediate surface, such as an image plane, of an optical system. These methods and devices can be applied to different wavefront analysis and measurement methods, such as those provided in the above-mentioned PCT application No. PCT/IL/01/00335 and in PCT application No. PCT/IL/02/00096, as well as to other wavefront analysis methods known in the art. The present invention also provides additional improved and enhanced methods and systems for wavefront analysis.
In addition, the present invention seeks to provide a new apparatus and method for measuring the surface topography of objects carrying thin film coatings, overcoming some of the drawbacks and deficiencies of prior art methods. There are many prior art methods for analyzing wavefronts reflected from or transmitted through an object, such as interferometry and the wavefront analysis methods described in the above-mentioned patent documents. However, the presence of a thin film coating on the surface of the object adds, through multiple reflections, an additional phase change to the reflected or transmitted wavefront. This phase change causes errors in calculating the topography of the surface reflecting the wavefront. Knowing the thickness of the thin film coating and the refractive indices of its constituent layers, whether known in advance or measured by known methods, this additional phase change due to multiple reflections can be calculated using known formulas. The additional phase can then be eliminated or subtracted from the phase of the reflected or transmitted light for proper computation of the surface topography.
According to a preferred embodiment of the present invention, a phase measurement system is provided that integrates the ability to make accurate measurements on multilayer objects, such as objects carrying thin film coatings. Existing ellipsometric methods for performing such measurements typically use large illumination spots, which provide only low spatial resolution and little two-dimensional imaging capability, because of the limited depth of field across such a large spot when large angles of incidence are used. By adding a broadband light source and a filter wheel or a spectrometer to the imaging optics of a measurement device such as those described in the above-referenced patent documents, the ability to detect and measure multilayer objects can be improved. With a filter wheel, the light reflected from the entire field of view is spectrally analyzed independently for each pixel or segment of the object. With a spectrometer, the light reflected from one or more selected regions of the object is spectrally analyzed independently for each pixel or segment. Adding a spectrometer or filter wheel thus allows the thickness of a transparent or translucent multilayer to be measured at each segment or pixel of the object, and by combining this with the phase measurements, the topography of the top surface can be obtained. Using this spectral analysis, refractive indices and layer thicknesses of the multilayer stack known in advance, and known reflectometric or ellipsometric algorithms, the thickness of the thin film coating can be accurately calculated. Conversely, the refractive index of the film can be accurately calculated using the same spectral analysis, precisely known layer thicknesses in the multilayer stack, and known reflectometric or ellipsometric algorithms. With the thickness and refractive index of the thin film coating known at each pixel or segment of the object, the phase change due to the presence of the coating, which depends on the real and complex parts of the refractive index, can be accurately calculated by a known formula. This phase change can be eliminated or subtracted from the measured phase of the reflected or transmitted light, so that the surface topography is correctly obtained.
According to a further preferred method of the invention, by combining the above-described wavefront analysis method with Fourier transform spectroscopy and using a broadband light source, the phase change due to multiple reflections can be calculated when measuring an object comprising multiple layers. The Fourier transform spectroscopy is performed by the following steps (a numerical sketch of the per-pixel spectral step follows the list):
1. a moving mirror is added as a reference mirror and interference is generated between light striking the object and light reflected from the reference mirror and then an intensity map of the interference pattern is obtained for each motion.
2. In the manner of Fourier transform spectroscopy, the accumulated intensity data for each pixel are Fourier transformed to obtain the spectral reflectance of each pixel.
3. The thickness of the layers in each pixel of the object is obtained by using the spectral reflectance of each pixel, predetermined data about the material, and an existing spectrophotometric or reflectometric model.
4. The phase change due to the multilayer stack in each pixel is calculated by using known algorithms and the obtained data on the thickness and the refractive index of the material of each layer on each pixel.
5. The phase and amplitude of the reflected wavefront are obtained by using the wavefront analysis method described above. This phase data also includes the phase change due to the multilayer stack at each pixel.
6. The calculated phase change due to the multilayer stack at each pixel (as described in paragraph 4 above) is subtracted from the phase data obtained using the wavefront analysis method (as described in paragraph 5) to obtain the true surface topography.
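Step 2 above is ordinary Fourier-transform spectroscopy applied independently at each pixel: the interferogram recorded as a function of mirror position is Fourier transformed, and the spectrum appears against optical-path-difference frequency (wavenumber). A minimal Python sketch under idealized assumptions (two spectral lines, uniform 50 nm mirror steps, no envelope or noise); all names and values are illustrative:

```python
import numpy as np

steps = np.arange(1024) * 50e-9               # mirror positions, 50 nm apart
opd = 2.0 * steps                             # optical path difference [m]

# Simulated per-pixel interferogram for two spectral lines (550 nm, 650 nm)
wavelengths = np.array([550e-9, 650e-9])
reflectance = np.array([1.0, 0.6])            # spectral weights at those lines
I = sum(w * (1.0 + np.cos(2 * np.pi * opd / lam))
        for w, lam in zip(reflectance, wavelengths))

# FFT of the mean-subtracted interferogram -> spectrum vs wavenumber (1/lambda)
spec = np.abs(np.fft.rfft(I - I.mean()))
sigma = np.fft.rfftfreq(len(opd), d=opd[1] - opd[0])

# Locate the two strongest, well-separated spectral peaks
i1 = int(np.argmax(spec))
masked = spec.copy()
masked[max(i1 - 3, 0):i1 + 4] = 0.0
i2 = int(np.argmax(masked))
print("recovered wavelengths [nm]:", sorted(1e9 / sigma[i] for i in (i1, i2)))
```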
In accordance with another preferred embodiment of the present invention there is further provided an optical device for measuring the thickness of an object, comprising:
(i) an objective lens disposed above a plane of said object with its optical axis perpendicular to said plane;
(ii) a light source having a range of emission wavelengths, said source being disposed above said lens and substantially in the focal plane of said lens such that said lens thereby generates a collimated beam, said source being laterally offset from said optical axis such that said collimated beam illuminates said object at a non-normal angle of incidence;
(iii) a first polarizing element disposed between said source and said lens;
(iv) a detector element disposed substantially in an image plane of said object generated by said lens and laterally offset from said optical axis; and
(v) a second polarizing element disposed between the lens and the detector.
The lens preferably has a numerical aperture greater than 0.5. Furthermore, the light source is preferably a broadband light source; alternatively, it may emit a number of discrete wavelengths. The detector element is preferably a detector array.
According to a further preferred embodiment of the present invention, there is provided a method of measuring the surface topography of an object having transparent layers, comprising the steps of:
(i) illuminating the object and measuring the amplitude and phase of the reflected wavefront by the steps of: (a) obtaining a plurality of different phase change transformed wavefronts corresponding to the wavefront whose amplitude and phase are being measured; (b) obtaining a plurality of intensity maps of the plurality of phase change transformed wavefronts; and (c) using said plurality of intensity maps to obtain an output indicative of said amplitude and the measured phase of said wavefront;
(ii) measuring the thickness of the transparent layer by broadband illumination of the object and analysis of the reflected intensity of the object at at least two wavelengths;
(iii) calculating a calculated phase map of said reflected wavefront resulting from multiple reflections of said transparent layer from said thickness measurement; and
(iv) comparing the calculated phase map with the measured phase to obtain the surface topography of the object.
In the above method, the comparing step preferably comprises subtracting the phase values obtained from the calculated phase map from the measured phase at the same locations on the object.
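For a single transparent layer, the "known formula" for the phase added by multiple reflections is the standard multiple-beam (Fresnel) reflection coefficient, whose argument is the phase to subtract from the measured phase map. A minimal Python sketch at normal incidence; the indices and thickness are illustrative round numbers (roughly SiO2 on Si), not tabulated data, and sign conventions vary between texts:

```python
import numpy as np

def film_reflection(n0, n1, n2, d, lam):
    """Reflection coefficient of a single film of thickness d on a substrate
    at normal incidence: r = (r01 + r12 e^{2i*beta}) / (1 + r01 r12 e^{2i*beta})."""
    r01 = (n0 - n1) / (n0 + n1)
    r12 = (n1 - n2) / (n1 + n2)
    beta = 2 * np.pi * n1 * d / lam           # one-pass phase through the film
    e = np.exp(2j * beta)
    return (r01 + r12 * e) / (1 + r01 * r12 * e)

lam = 633e-9                                  # illumination wavelength [m]
n_air, n_film, n_sub = 1.0, 1.46, 3.9         # illustrative indices
d = 120e-9                                    # thickness from the broadband step

film_phase = np.angle(film_reflection(n_air, n_film, n_sub, d, lam))
print("film-induced phase [rad]:", film_phase)

# Step (iv): subtract this phase from the measured phase at the same pixel,
# then convert to height (reflection doubles the optical path):
# height = (measured_phase - film_phase) * lam / (4 * np.pi)
```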
According to another preferred embodiment of the present invention, there is provided an optical device for measuring the thickness of a transparent layer in an object, comprising:
(i) a coherent light source for illuminating said object;
(ii) a detector for measuring reflectance from said transparent layer;
(iii) an interferometer for measuring the phase reflected from the object by coherent illumination; and
(iv) a processing unit for describing, in a mathematical model, the expected reflection phase and the expected reflection amplitude as a function of the thickness and optical properties of the transparent layer, and for using the measured phase and reflectance in that model so as to obtain the thickness of the transparent layer in the object.
This embodiment uses a combination of phase derived from coherent-source illumination with amplitude derived from the reflectivity under the coherent source, and/or with reflectometric analysis using broadband illumination techniques. This combination of phase and amplitude provides a better measurement of the thickness of the transparent layer. The phase analysis may be performed by an interferometric method using coherent illumination. The reflectometric analysis may be provided by broadband illumination and standard analysis techniques (filter wheel, spectrophotometer), or by amplitude analysis of one or more coherent light sources. A numerical sketch of such a combined inversion follows.
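This is a minimal sketch of the combined inversion, under the same single-layer, normal-incidence model as above: the expected complex reflection (amplitude and phase) is computed over a grid of candidate thicknesses and matched against the "measured" pair. Two wavelengths are used because a single wavelength leaves a periodic thickness ambiguity of λ/(2n). All names and values are illustrative assumptions.

```python
import numpy as np

def film_reflection(n0, n1, n2, d, lam):
    r01 = (n0 - n1) / (n0 + n1)
    r12 = (n1 - n2) / (n1 + n2)
    e = np.exp(2j * 2 * np.pi * n1 * d / lam)
    return (r01 + r12 * e) / (1 + r01 * r12 * e)

n_air, n_film, n_sub = 1.0, 1.46, 3.9         # illustrative indices
lams = (532e-9, 633e-9)                       # two illumination wavelengths
d_true = 210e-9                               # ground truth for the demo

ds = np.linspace(0.0, 1e-6, 20001)            # candidate thicknesses
err = np.zeros_like(ds)
for lam in lams:
    r_meas = film_reflection(n_air, n_film, n_sub, d_true, lam)  # stand-in data
    r_model = film_reflection(n_air, n_film, n_sub, ds, lam)
    err += (np.abs(r_model) - np.abs(r_meas))**2                 # reflectometry
    err += np.angle(np.exp(1j * (np.angle(r_model) - np.angle(r_meas))))**2

print("estimated thickness [nm]:", ds[np.argmin(err)] * 1e9)
```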
Further, according to still another preferred embodiment of the present invention, there is provided a method of measuring a thickness of a transparent layer in an object, including the steps of:
(i) illuminating the object with coherent light at at least one predetermined wavelength;
(ii) providing an interferometer and measuring the phase of said coherent light reflected from said object;
(iii) illuminating the object with a plurality of further predetermined discrete wavelengths of light;
(iv) measuring reflectance of said light at said plurality of predetermined discrete wavelengths;
(v) using a mathematical model to describe the expected phase and amplitude characteristics of said reflected light at said plurality of predetermined discrete wavelengths as a function of the thickness and optical properties of the transparent layer; and
(vi) using the measured phase and reflectance values in the mathematical model to obtain the thickness of the transparent layer in the object.
In the above method, the plurality of predetermined discrete wavelengths may preferably be obtained by using a filter wheel or a spectrophotometer. Alternatively, the plurality of predetermined discrete wavelengths may be obtained from at least one coherent light source.
According to another preferred embodiment of the invention, at least one point in the object may have a known structure such that the expected phase delay at that point is absolutely known, and the method then also comprises the step of using the absolutely known phase to determine the absolute phase difference over the whole object.
According to another preferred embodiment of the present invention, there is provided a method of obtaining a focused image of an object, comprising the steps of:
(i) illuminating the object;
(ii) obtaining amplitude and phase information of a wavefront of said illumination emanating from said object in an arbitrary plane where said wavefront does not necessarily produce a focused image;
(iii) calculating the form of the wavefront on a series of further planes along the propagation path of the wavefront by means of a mathematical solution of the propagation properties of the wavefront; and
(iv) determining in which of the further planes the wavefront has the form of a focused image.
In this method, the step of determining in which of said further planes said wavefront has the form of a focused image preferably comprises calculating, at each of said further planes, an entropy of a complex-valued function of at least one optical property of the wavefront, wherein said entropy is determined from measurements of cumulative surface areas of said complex-valued function of the wavefront, and determining the propagation step at which the entropy is minimal. The complex-valued function of the wavefront is preferably at least one of a complex amplitude function, a complex phase function, and a complex amplitude and phase function.
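Here is a hedged sketch of steps (ii) to (iv): the measured complex field is propagated numerically to a series of candidate planes with the angular-spectrum method, each plane is scored, and the best-focused plane is the one with the smallest score. The patent defines its entropy through cumulative surface areas of the complex wavefront function; the Shannon entropy of the normalized intensity used below is a stand-in that is likewise minimized near focus for a compact object. All parameters are illustrative.

```python
import numpy as np

N, pitch, lam = 256, 2e-6, 633e-9             # grid, pixel pitch [m], wavelength

def angular_spectrum(field, z):
    """Propagate a sampled complex field a distance z (angular-spectrum method)."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
    kz = 2 * np.pi / lam * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

def intensity_entropy(field):
    """Shannon entropy of the normalized intensity (stand-in sharpness score)."""
    p = np.abs(field)**2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A compact object, then a "measurement" taken 300 um away from focus:
obj = np.zeros((N, N), dtype=complex)
obj[120:136, 120:136] = 1.0
measured = angular_spectrum(obj, 300e-6)      # the arbitrary plane of step (ii)

zs = np.linspace(-500e-6, 0.0, 51)            # candidate propagation steps
entropies = [intensity_entropy(angular_spectrum(measured, z)) for z in zs]
print("best focus at z =", zs[int(np.argmin(entropies))], "m (expected -3e-04)")
```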
According to another preferred embodiment of the present invention, there is provided a method of measuring a height difference between first and second sections of an object, comprising the steps of:
(i) illuminating the two sections of the object;
(ii) obtaining amplitude and phase information of a wavefront of said illumination emanating from said object in an arbitrary plane where said wavefront does not necessarily produce a focused image;
(iii) calculating the form of the wavefront on a series of further planes along the propagation path of the wavefront by means of a mathematical solution of the propagation properties of the wavefront;
(iv) determining in which of said further planes said wavefront has the form of a focused image of said first segment;
(v) determining in which of said further planes said wavefront has the form of a focused image of said second segment; and
(vi) obtaining the height difference as the distance between the further plane in which the wavefront has the form of a focused image of the second segment and the further plane in which it has the form of a focused image of the first segment.
In the above method, the height difference between the two segments is preferably used as an estimated height difference to reduce phase ambiguity that occurs in other measurement methods.
There is also provided in accordance with another preferred embodiment of the present invention, a method for resolving 2π ambiguity in a phase measurement system, including the steps of:
(i) illuminating an object at a first wavelength and determining phase information of a first wavefront impinging on the object;
(ii) illuminating the object at a second wavelength and determining phase information of a second wavefront impinging on the object;
(iii) defining at least two segments on the object;
(iv) designating a first set of points in the first segment and a second set of points in the second segment, defining one of the first set of points as a first anchor point and one of the second set of points as a second anchor point;
(v) unwrapping at least one of the first and second phase information to obtain height differences between the first anchor point and the points of the first set and between the second anchor point and the points of the second set;
(vi) calculating height differences between points in the first set of points and points in the second set of points using the first and second phase information, determining a set of height differences corresponding to the set of points;
(vii) obtaining a set of approximate height ambiguities, each approximate height ambiguity corresponding to a height difference in the set of height differences;
(viii) determining a set of approximate height ambiguities between the first and the second anchor points using the set of approximate height ambiguities obtained in step (vii);
(ix) determining, from the set of approximate height ambiguities between the first and the second anchor points, a most likely value of the height ambiguities between the first and the second anchor points; and
(x) resolving the 2π ambiguity between the first and second phase information measurements by using the most likely value of the height ambiguity.
In this method, the most likely value of the height ambiguity between the first and second anchor points is preferably taken as the value closest to the average of the set of approximate height ambiguities between the first and second anchor points. Alternatively and preferably, the most likely value of the height ambiguity between the first and second anchor points is the maximum of a histogram of the set of approximate height ambiguities between the first and second anchor points.
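The two-wavelength principle underlying this procedure can be written in a few lines: in reflection, each wavelength measures height modulo λ/2, and the pair of wrapped phases singles out one candidate within the synthetic-wavelength range Λ = λ1·λ2/|λ1 − λ2|. The Python sketch below shows this selection for a single point; it is not the claimed anchor-point procedure, in which a histogram or average over many point pairs plays the role of the residual test here. All values are illustrative.

```python
import numpy as np

lam1, lam2 = 532e-9, 633e-9
synthetic = lam1 * lam2 / abs(lam1 - lam2)    # ~3.3 um unambiguous range

def wrapped_phase(height, lam):
    # reflection doubles the path: phase = 4*pi*h/lam, wrapped to (-pi, pi]
    return np.angle(np.exp(1j * 4 * np.pi * height / lam))

h_true = 1.234e-6                             # height step far beyond lam/2
ph1, ph2 = wrapped_phase(h_true, lam1), wrapped_phase(h_true, lam2)

# Enumerate the 2*pi ambiguities of wavelength 1 within the synthetic range
# and keep the candidate that best explains the wavelength-2 measurement.
orders = np.arange(0, int(np.ceil(synthetic / (lam1 / 2))))
candidates = (ph1 / (4 * np.pi)) * lam1 + orders * lam1 / 2
residual = np.abs(np.angle(np.exp(1j * (4 * np.pi * candidates / lam2 - ph2))))
print("estimated height:", candidates[np.argmin(residual)], "expected:", h_true)
```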
According to yet another preferred embodiment of the present invention, there is further provided a set of filters for spatial filtering in an optical system, each filter having an opening of a characteristic size and characteristic spectral properties, wherein the opening and the spectral properties of each filter are selected to increase image contrast in the system. The opening and the spectral properties of each filter are preferably selected so as to mutually offset the increasing spatial spread of imaging light with increasing wavelength and its decreasing spatial spread with increasing aperture size. Accordingly, for each said filter, the ratio of the opening of the filter to the wavelength at which the filter operates is preferably substantially fixed. In any of the above filter sets, the spatial filtering is preferably performed between a central region and a peripheral region of the field of view of the imaging system. The use of such filter sets allows different aperture sizes to be obtained for different wavelengths without the need for mechanical movement.
There is also provided in accordance with another preferred embodiment of the present invention, a method for increasing contrast for spatial filtering in an imaging system, including the following steps (a Jones-calculus sketch follows the list):
(i) providing a birefringent spatial light modulator having at least two independently controllable phase modulation ranges and a principal axis;
(ii) disposing a linear polarizer in front of said birefringent spatial light modulator, wherein a polarization direction of said linear polarizer is not aligned with said principal axis of said spatial light modulator;
(iii) disposing a linear polarizing element after said birefringent spatial light modulator;
(iv) determining a required transmittance between the two phase modulation ranges such that the contrast of the output image is optimized;
(v) obtaining a plurality of wavefront outputs from the system by rotating at least one of the linear polarizing elements and adjusting phase delays over at least one of the modulation ranges such that: (a) in each wavefront output, a different phase delay is obtained between the two phase modulation ranges; (b) all wavefront outputs have the same transmittance between the two phase modulation ranges, and (c) said same transmittance is equal to said desired transmittance.
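The interplay of the two polarizer orientations, the SLM phase step and the resulting transmittance can be followed with elementary Jones calculus. This is a minimal sketch, assuming an idealized birefringent SLM that delays only its principal axis; the angles and drive phases are illustrative, and finding orientations that equalize the transmittances of the two ranges, as in steps (iv) and (v), would wrap this in a small search:

```python
import numpy as np

def transmitted(delta, p_in, p_out):
    """Polarizer -> birefringent SLM -> analyzer; angles are measured from
    the SLM principal axis; returns the transmitted complex amplitude."""
    v_in = np.array([np.cos(p_in), np.sin(p_in)])        # after input polarizer
    slm = np.diag([np.exp(1j * delta), 1.0])             # delay on principal axis
    v_out = np.array([np.cos(p_out), np.sin(p_out)])     # analyzer direction
    return v_out @ (slm @ v_in)

delta_a, delta_b = 0.0, np.pi / 2                 # drive phases of the two ranges
p_in, p_out = np.deg2rad(30.0), np.deg2rad(40.0)  # illustrative orientations

t_a = transmitted(delta_a, p_in, p_out)
t_b = transmitted(delta_b, p_in, p_out)
print("transmittances  :", abs(t_a)**2, abs(t_b)**2)
print("phase step [rad]:", np.angle(t_b / t_a))
```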
There is also provided in accordance with another preferred embodiment of the present invention a method for reducing coherent noise in an optical system, including the steps of:
(i) illuminating an object to be imaged;
(ii) measuring amplitude and phase information of a wavefront of illumination emanating from said object in a first plane along a propagation path of said wavefront, said wavefront producing a focused image in said first plane;
(iii) defocusing said image in said system by a defocus distance;
(iv) obtaining defocused amplitude and phase information of the wavefront of the illumination emanating from the object in a second plane that is spaced from the first plane by the defocus distance;
(v) calculating, from the defocused amplitude and phase information in the second plane, refocused amplitude and phase information at the first plane by means of a mathematical solution of the propagation properties of the wavefront; and
(vi) combining the measured amplitude and phase information with the refocused amplitude and phase information so as to reduce coherent noise in the imaged object.
In this method, the combining step is preferably performed by at least one of averaging, comparing, and image processing.
There is also provided in accordance with another preferred embodiment of the present invention a method of reducing noise in a wavefront in a first given plane, the noise caused by a disturbance lying in a second plane, the method including the steps of:
(i) measuring amplitude and phase information of the wavefront at the given plane;
(ii) calculating amplitude and phase information of the wavefront on a series of further planes along the propagation path of the wavefront by means of a mathematical solution of the propagation properties of the wavefront;
(iii) determining in which of the further planes the disturbance contained in the wavefront is optimally focused;
(iv) altering the wavefront at the optimally focused location such that the disturbance is eliminated; and
(v) using said altered wavefront, calculating new amplitude and phase information on said first plane by means of a mathematical solution of the propagation properties of said wavefront, from which an image free of the noise caused by said disturbance can be obtained.
In this method, the disturbance may be caused by dust or a defect in the propagation path of the wavefront, in which case it appears as concentric fringes from the out-of-focus dust particle. The disturbance is preferably eliminated by image processing.
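A sketch of steps (ii) to (v) for the dust case, reusing the angular-spectrum propagator from the earlier sketch: the measured field is back-propagated to the plane where the speck is sharp, the speck is overwritten with a value sampled from an undisturbed region (a simple stand-in for "altering the wavefront"), and the field is propagated forward again. All distances and sizes are illustrative.

```python
import numpy as np

N, pitch, lam = 256, 2e-6, 633e-9             # grid, pixel pitch [m], wavelength

def propagate(field, z):
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
    kz = 2 * np.pi / lam * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)      # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Plane-wave illumination hits an opaque speck 200 um before the sensor:
dust = np.ones((N, N), dtype=complex)
dust[100:104, 100:104] = 0.0
measured = propagate(dust, 200e-6)            # concentric fringes at the sensor

at_dust = propagate(measured, -200e-6)        # (iii) speck plane: sharply focused
at_dust[98:106, 98:106] = at_dust[50, 50]     # (iv) overwrite with a clean value
repaired = propagate(at_dust, 200e-6)         # (v) forward again: fringes gone

reference = propagate(np.ones((N, N), dtype=complex), 200e-6)
print("max residual after repair:", np.abs(repaired - reference).max())
```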
According to another preferred embodiment of the present invention, there is provided a method of reducing aberrations in a wavefront at a given plane in an optical system, said aberrations arising elsewhere in said optical system, comprising the steps of:
(i) measuring the amplitude and phase of the wavefront on the given plane;
(ii) calculating amplitude and phase information of the wavefront on a further plane on the propagation path of the wavefront by means of a mathematical solution of the propagation properties of the wavefront;
(iii) determining in which of the further planes the source of the aberration is located;
(iv) altering the wavefront at the location of the aberration source such that the aberrations are eliminated; and
(v) using the altered wavefront, calculating new amplitude and phase information on another plane by means of a mathematical solution of the propagation properties of the wavefront, from which an aberration-free image can be obtained.
There is also provided in accordance with another preferred embodiment of the present invention a method for reducing coherent noise in an image of an object, including the steps of:
(i) providing an imaging system comprising an optical path including a coherent light source, a phase controller and a plurality of optical elements;
(ii) measuring amplitude and phase information of a wavefront representing an image of the object at an image plane;
(iii) moving at least one of the object, the light source and the optical elements, and refocusing;
(iv) measuring amplitude and phase information of a wavefront representing an image of the object after the moving and refocusing steps; and
(v) averaging amplitude and phase information of the wavefront before and after the moving step so as to reduce the coherent noise.
In the above method, said moving preferably comprises moving said source along at least one axis and correspondingly moving the phase controller so as to maintain it in the image plane of the moving light source, the image being integrated in the time domain. Because the phase controller is held in the image plane of the source, the same point on the source is imaged onto the same point of the phase controller regardless of the movement. Alternatively and preferably, said moving comprises moving said phase controller within said optical path to produce the plurality of phase change transformed wavefronts, or moving the object to different focus and defocus states along the Z axis, or moving the object to different off-axis positions or different tilt angles. The method also preferably includes an image registration step.
An example of the above method may preferably comprise the following steps:
(i) taking an image at a given location of the light source and PLM;
(ii) moving the PLM along any axis thereof;
(iii) moving the light source accordingly, so that the image of the light source falls on the same location of the PLM as it did before the movement;
(iv) taking another image at this new location of the PLM and light source; all of the required information remains the same in both images, since only the light source and the PLM need to be conjugate, but the light beams travel different paths within the system, resulting in different spatial noise patterns, i.e. different sets of fringes;
(v) averaging the two images to improve the signal-to-noise ratio;
(vi) repeating this process for a plurality of images, thus further improving the signal-to-noise ratio; and finally
(vii) using the "averaged image" as an input to the phase measurement system to obtain a phase with less noise.
According to a further preferred embodiment of the invention, said optical path preferably comprises a rotating wedge, positioned in such a way that the optical path moves spatially as the wedge rotates, without any other movement of the optical elements.
According to yet another preferred embodiment of the present invention, there is provided a method of reducing coherent noise in an imaging system, comprising the steps of:
(i) imaging an object using a moderately broadband light source to obtain a smooth image having a first level of accuracy;
(ii) determining a preliminary calculated height of a feature of the object to within the limits of phase ambiguity, the first level of accuracy being limited by the short coherence length of the broadband source;
(iii) imaging said object with a coherent light source to obtain an image that is noisier than said smoothed image but has a second level of accuracy that is better than said first level of accuracy; and
(iv) using the preliminarily calculated heights of the features of the object as an initial input to the phase obtained by the coherent imaging to determine the heights of the features with increased accuracy.
According to yet another preferred embodiment of the present invention, there is provided a method of determining the position of an edge of a feature of an object using an imaging system at a resolution better than the resolution of the imaging system, comprising the steps of:
(i) generating a series of images of the feature at a plurality of different defocus distances around an optimal focus and generating a record of the illumination level as a function of lateral distance across the images; and
(ii) examining the record for the point at which the illumination levels converge at a common lateral distance across the images, that point being the location of the edge of the feature (a numerical sketch of this convergence criterion follows).
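The convergence-point idea: through focus, a blurred edge profile pivots about the true edge position, where the intensity stays at the mid-level for every defocus. In the sketch below, defocus is modeled as Gaussian blur of increasing width, which is an assumption rather than the patent's optics, and the crossing point is found as the lateral position of minimum spread between the defocus curves:

```python
import numpy as np
from math import erf

x = np.linspace(-5.0, 5.0, 1001)              # lateral position [um]
edge_pos = 0.7                                # true edge location (ground truth)

def edge_profile(sigma):
    """Ideal step edge blurred by a Gaussian PSF of width sigma (erf profile)."""
    return np.array([0.5 * (1.0 + erf((xi - edge_pos) / (sigma * np.sqrt(2.0))))
                     for xi in x])

profiles = np.stack([edge_profile(s) for s in (0.3, 0.6, 1.0, 1.5)])  # defocus set

# All curves cross the 50% level at the true edge, so the lateral position
# where the defocus series spreads least marks the edge to sub-pixel accuracy.
spread = profiles.std(axis=0)
mean = profiles.mean(axis=0)
transition = (mean > 0.2) & (mean < 0.8)      # ignore flat regions far away
print("estimated edge position:", x[transition][np.argmin(spread[transition])])
```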
Finally, according to another preferred embodiment of the present invention, there is provided a method of performing an overlay measurement in a multilayer structure, comprising the steps of:
(i) illuminating the multilayer structure and generating amplitude and phase information of a first complex wavefront map representing an image of a plane within a first layer of the multilayer structure;
(ii) calculating amplitude and phase information of a second complex wavefront map representing an image of a plane within a second layer of said multilayer structure by means of a mathematical solution of propagation properties of said wavefronts; and
(iii) comparing the first and second complex wavefront maps to provide information about the overlap of the first and second layers.
In this method, the overlay measurement is preferably performed in a single imaging process, without the need for refocusing by the imaging system. Furthermore, the use of amplitude and phase information in the overlay measurement preferably results in an enhanced-contrast measurement compared with an imaging method lacking phase information. The method may also obtain three-dimensional information about the multilayer structure, thus improving the misregistration measurement compared with imaging methods that do not use phase information. In addition, using phase information in the overlay measurement enables an increased depth of focus compared with an imaging method lacking phase information, thus enabling multiple layers to be imaged in one single imaging process.
Drawings
The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the accompanying drawings. Fig. 1 to 5 are briefly described in the background, and the other figures are described in detail below. These drawings can be summarized as follows:
FIG. 1 shows a simplified partial example, partially schematically illustrating wavefront analysis functionality;
FIG. 2 is a partial example illustrating a partial block diagram of a wavefront analysis system suitable for implementing the functionality of FIG. 1;
FIG. 3 is a partial example illustrating a system for surface mapping using the functionality and structure of FIG. 1;
FIG. 4 depicts the basic principles of algorithms and methods of operation used in various embodiments of the present application, and depicts a simplified functional block diagram illustrating portions of the functionality of FIG. 1;
FIG. 5 depicts an exemplary wavefront analysis system including two functionalities-an imaging functionality and an imaging wavefront analysis functionality;
FIG. 6 schematically depicts elements of a first preferred embodiment of the invention to enable detection and measurement of a multilayer object by ellipsometry, using the phase measurement strategy described in the apparatus and method shown in FIGS. 1 to 5;
FIG. 7 is a graphical illustration of the phase and amplitude of light reflected from multiple layers of stacked silicon dioxide on silicon for three different wavelengths;
FIG. 8 schematically illustrates one method of reducing the effects of phase change due to multiple reflections with appropriate illumination conditions when illuminating an object comprised of multiple layers;
FIG. 9 schematically shows how measured phase data can be used to distinguish between different multilayer stacks that have the same reflectance but different phase changes, and which therefore cannot be distinguished by the "white light" method;
FIG. 10 schematically illustrates the propagation of a wavefront in the Z direction, depicting that by this wavefront analysis method, any known wavefront in a certain plane can be propagated to any other desired plane using known propagation equations;
FIG. 11 shows an entropy diagram of an arbitrary wavefront as a function of its propagation position along the focusing distance;
FIG. 12 is a schematic illustration of the method of "best focus" independently applicable to different segments of an image or a wavefront;
FIG. 13 illustrates another preferred method of the present invention using the best focus and height measurements obtained by applying the stereoscopic wavefront propagation method;
FIG. 14 is a schematic illustration of an interferometric device based on the combined use of white light and coherent light interferometry;
FIG. 15 schematically illustrates how the aperture size for different wavelengths can be varied by means of an aperture comprising a central ring with different transmittance for different wavelengths, for example by using different spectral filters;
FIG. 16 shows diagrammatically a preferred method of reducing the effect of disturbances in the wavefront, for example disturbances caused by dust or imperfections in the optical path;
FIG. 17 illustrates a preferred apparatus for implementing the method of optical movement in an imaging system that has no mechanical movement of any of its components;
FIG. 18 illustrates a coherent imaging system using line light sources to reduce the spatial coherence of the light to increase the lateral resolution; in the configuration described in fig. 18, the spatial coherence in the Y direction is eliminated;
FIG. 19 is an illustration of a microscopic image taken from a microscope with an x50 objective lens to illustrate a method of increasing resolution in such an image;
FIG. 20 shows a magnified portion of the image of FIG. 19, illustrating how details of the edges in the image are obscured due to limitations in the resolution of the microscope;
FIG. 21 is a graph showing cross-sectional curves of illumination across the structure edge in the image of FIG. 20 for different defocus levels; and
fig. 22 schematically illustrates a cross-section of a periodic subwavelength structure, the details of which are to be resolved and described by means of another preferred embodiment of the invention.
Detailed Description
Reference is now made to FIG. 6, which schematically illustrates elements of a first preferred embodiment of the invention, enabling detection and measurement of a multilayer object by means of ellipsometry, using the phase measurement method described above. The illumination source 600 is disposed in the optical system in such a way that, through an imaging system such as a microscope objective 604, the surface 602 of the object is illuminated by a tilted parallel beam 606 at a known angle of incidence relative to the normal to the object surface. The illumination beam 608 reflected from the object surface is refocused by the high numerical aperture objective lens 604 onto a detector 610, preferably an array of pixels. In this way, the reflected beam creates an image in the detector containing information for the entire large field of view. Because of the angle of incidence at which the measurement is performed, the reflection differs for the s and p polarizations, and the thickness at each pixel can therefore be determined by means of ellipsometry. A polarizing element 612 is placed in the incident beam, preferably between the light source 600 and the objective lens 604, and a polarizing element 614 is placed in the reflected beam, preferably between the objective lens 604 and the detector element 610; with the aid of the polarizing element 614, using polarizers and compensators, the polarization of the reflected beam is analyzed. The measurement is performed over a relatively large field of view at a time, and a higher spatial resolution can be achieved using imaging ellipsometry. Using these measurements, the known angle of incidence, the known refractive indices, the known nominal thicknesses of the layers in the multilayer stack and known algorithms, the thin film coating thickness at each pixel or segment of the object can be accurately calculated. Conversely, using the spectral analysis methods described above, previously known layer thicknesses in the multilayer stack and known algorithms, the refractive index of the thin film at each pixel or segment of the object can be accurately calculated. Knowing the thin film coating thickness and the refractive index, the phase change due to the presence of the thin film coating at each pixel or segment of the object can be calculated by known formulas. This phase change can be eliminated or subtracted from the phase of the reflected or transmitted light to properly obtain the surface topography. The illumination may be a coherent light source comprising a single wavelength, several coherent light sources or a broadband light source. The reflected light can be spectrally analyzed to provide more information for calculating the thin film coating thickness or refractive index.
The wavefront reflected from the object is measured twice, once for each of the two polarizations. The phase change due to surface topography is compensated for by normalizing the measured complex amplitude of one polarization by the measured complex amplitude of the second polarization. Using these measurements, the known angle of incidence, the known refractive indices, the known thickness of each layer in the multilayer stack and known algorithms, the thin film coating thickness at each pixel or segment of the object can be accurately calculated. Alternatively, using the above measurements, precisely known layer thicknesses in the multilayer stack and known algorithms, the refractive index of the thin film at each pixel or segment of the object can be accurately calculated. Knowing the thin film coating thickness and the refractive index, the phase change due to the presence of the thin film coating at each pixel or segment of the object can be calculated by known formulas. This phase change can be eliminated or subtracted from the phase of the reflected or transmitted light to properly obtain the surface topography. The illumination may be a coherent light source comprising a single wavelength, several coherent light sources or a broadband light source. The reflected light can be spectrally analyzed to provide more information for calculating the thin film coating thickness or refractive index.
According to a further preferred embodiment of the invention, the spectral information of the reflected light is used in combination with the measured phase of the reflected wavefront to obtain the thicknesses of the layers in a multilayer stack. The light reflected from the multilayer object under a broadband light source is analyzed by means of a filter wheel or spectrometer. In addition, the phase and amplitude of the reflected wavefront can be obtained by using a coherent light source having one or more wavelengths together with a phase measurement system. The phase data obtained by the phase measurement system add further data to the spectral analysis described above, and the thickness of each layer in the multilayer stack is obtained by combining the two. Since only relative phase data are available, i.e. the relative phase difference between different positions rather than an absolute phase shift, it is desirable to have a position in the field of view at which the thin film coating thickness is known with high accuracy; the absolute phase shift may be determined from a measurement performed at this position. Alternatively, a position in the field of view without a transparent layer may serve as the position at which the thickness is known with high accuracy. FIG. 7 shows an example of the phase and amplitude of light at three different wavelengths reflected from a multilayer stack of silicon dioxide on silicon.
Note that FIG. 7 gives the phase and amplitude at three wavelengths as a function of thickness; this corresponds to phase analysis by interferometry with three coherent light sources. To obtain more data, each phase providing different data, the same three light sources are also analyzed for reflectivity by analyzing their amplitudes. A further point to note from this figure is that the phase and amplitude measurements of thickness suffer from different types of ambiguity. The amplitude ambiguity is periodic: according to FIG. 7, when an amplitude of 0.6 is obtained, one cannot tell whether the thickness is 0 nm, 180 nm, 350 nm, and so on. The ambiguity in the phase measurement is instead an uncertainty range of thickness: when a phase of "1" is obtained at one wavelength, any thickness in the range of 400-500 nm could be the result. The combination of these two types of data, with their different uncertainties or ambiguities, allows the thickness to be located accurately with little ambiguity.
According to yet another preferred embodiment of the present invention, an improved algorithm is described for phase reconstruction and surface topography measurement in the presence of thin films. The presence of a thin film coating adds, through multiple reflections, a phase change to the reflected or transmitted wavefront. This phase change causes errors in the calculation of the surface topography from the reflected wavefront (i.e. deviations from the wavefront generated by the reflecting object). Knowing the thin film coating thickness and reflectivity, the additional phase change can be calculated by known formulas and can be eliminated or subtracted from the phase of the reflected or transmitted light to correctly calculate the surface topography. According to this preferred embodiment of the invention, at least one anchor point is provided in the field of view at which the thin film coating thickness is known with high accuracy. A location in the field of view where there is no thin film coating may also serve as an anchor point. In addition, phase data or amplitude data, or a combination of phase and amplitude data, of the wavefront reflected by the object at one or more wavelengths are given. The anchor points are used to obtain the thickness at other points or in other areas of the stacked structure in the field of view, wherever the anchor points are located.
According to yet another preferred embodiment of the present invention, a method is provided for reducing the effect of phase changes due to multiple reflections by means of suitable illumination conditions when illuminating an object composed of a plurality of layers. This is shown in fig. 8. According to a first embodiment of the invention, an object 800 comprising a plurality of layers is illuminated by an oblique beam 802 at a large angle of incidence. Owing to the large angle of incidence, the amplitudes of the multiple reflections 808 are reduced, and only one reflection from each side of each layer remains dominant. In the example shown in fig. 8, there is a reflection 804 from the front surface of the outermost layer and a reflection 806 from the back surface of the outermost layer. A simplified model can therefore be used which assumes only a single reflection from each side of each layer. The error of a phase-change calculation using this simplified two-beam model, relative to a calculation using the complete ellipsometric model with multiple reflections, is thereby reduced. According to another embodiment, the object is illuminated by a tilted, collimated beam at the Brewster angle (if one exists) of the interface between the outermost layer and the layer immediately below it. In this case the p-polarization of the light is not reflected at the surface between those two layers; only the s-polarization is reflected from it, so that the multiple reflections 806, 808, etc. are all s-polarized, while the first reflection 804 from the outer surface contains both polarizations. If a crossed polarizer is placed in the reflected path, only p-polarized light is transmitted and measured, and since p-polarized light is present only in the first reflection, the measurement can easily map the outer surface profile without interference from the underlying layers.
According to the method of this embodiment, an algorithm is provided for performing topography measurements by white light interferometry in the presence of one or more transparent layers. The algorithm comprises the following steps (a code sketch follows the list):
A. "Standard" white-light interferometry image intensity data is acquired.
B. In a manner similar to Fourier spectroscopy, the intensity data of each pixel is Fourier transformed to obtain the spectral reflectance at that pixel.
C. Using an existing "spectrophotometric" model, known data on the thicknesses and refractive indices of the materials at each pixel are compared with the spectral reflectance calculated in step B above, to obtain the exact thickness of the layers at each pixel.
D. The phase change caused by the multilayer stack at each pixel is calculated, using known algorithms and the data on the thicknesses and refractive indices of the material layers at each pixel.
E. Using the intensity data obtained by white-light interferometry, the profile of the object is obtained from the "best focus" of the wave packet, i.e. the peak of the coherence envelope. These profiles include an error at each pixel due to the phase change caused by the multilayer stack.
F. Using the calculated phase change per pixel due to the multilayer stack (as described in step D above), the error in the coherence-envelope peak caused by this phase is corrected, and a corrected surface topography is obtained.
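A minimal single-pixel sketch of steps A, B and E follows. The double-pass fringe geometry, the even number of uniformly spaced scan samples, and the numpy-only Hilbert envelope are assumptions of the sketch; steps C, D and F (model fitting and phase correction) depend on the thin-film model of choice and are not shown.

```python
import numpy as np

def wli_pixel_analysis(intensity, dz):
    """Steps A, B and E for one pixel of a white-light interferogram.

    intensity : I(z), the fringe signal sampled every dz metres (step A);
                an even number of samples is assumed.
    Returns the fringe frequencies and spectrum (step B: Fourier transform,
    as in Fourier spectroscopy) and the scan position of the coherence-
    envelope peak (step E: "best focus" of the wave packet).
    """
    ac = intensity - intensity.mean()            # remove the DC background
    spectrum = np.fft.rfft(ac)                   # per-pixel spectral content
    freqs = np.fft.rfftfreq(len(ac), d=dz)       # fringes of period p appear at
                                                 # 1/p; p = lambda/2 for double pass
    # Coherence envelope via the analytic signal (FFT-based Hilbert transform):
    full = np.fft.fft(ac)
    full[len(ac) // 2 + 1:] = 0.0                # drop negative frequencies
    full[1:len(ac) // 2] *= 2.0
    envelope = np.abs(np.fft.ifft(full))
    z_peak = np.argmax(envelope) * dz            # envelope-peak height (step E)
    return freqs, spectrum, z_peak
```

The step-F correction would then amount to shifting z_peak by the envelope-peak error computed from the per-pixel phase change of step D.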
To add prior knowledge, whether to increase the range of height measurements or to operate with objects consisting of different multilayer stacks, the field of view is preferably divided into different segments, each having different characteristics, so that the appropriate prior knowledge can be added for each segment. Some "white light" methods known in the art perform such segmentation based on amplitude data alone. According to this preferred embodiment of the invention, however, the phase and amplitude data obtained by the phase measurement system are used in combination, improving the segmentation of the object surface. In a preferred embodiment, at least two wavefronts are obtained that represent images of the object at two different wavelengths; each wavefront carries phase and amplitude data, and these data are used to perform the segmentation of the image. This method can be used to correct the segmentation obtained by known "white light" methods. Alternatively and preferably, the data can be used to distinguish between different multilayer stacks that have the same reflectance but different phase changes, which cannot be distinguished by the "white light" approach, as shown in fig. 9.
In the above wavefront analysis method, each of the plurality of different spatial phase changes is applied to the transformed wavefront, preferably by applying a spatially uniform phase delay of known value to a given spatial region of the transformed wavefront. The spatial function that governs these phase changes is labeled "G", as described above in connection with fig. 4; "G" is the spatial function of the phase change applied at each spatial position of the transformed wavefront. In a preferred embodiment, the spatial phase change is applied to the central part of the transformed wavefront and acts as a low-pass filter on it. However, when the spatial dimension of the function "G" is large, it cannot act as a true low-pass filter, and in this case it is difficult to reconstruct the imaged wavefront. Moreover, the spatial dimension of the function "G" is proportional to the wavelength used, and therefore it cannot act as a low-pass filter for shorter wavelengths. According to yet another preferred method of the invention, an improved algorithm is therefore implemented: a basic reconstruction is first performed with a "false" "G" of small spatial dimension. From this reconstructed wavefront, a new "S" function corresponding to the true spatial dimension is obtained by digital low-pass filtering, and correction values for α(x) and ψ(x) are calculated. A corrected reconstruction is obtained using these correction values. Iterating this process can further increase the accuracy of the reconstruction.
According to yet another preferred method of the present invention, there is also provided a method for improving phase and surface topography measurements by wavefront propagation and refocusing. Because, as is known, Maxwell's equations have unique solutions, when a particular solution in one arbitrary plane and all of its boundary conditions are known, the solution in any other plane can be determined absolutely. Thus, the complex amplitude of the radiation can be analyzed or recovered in one arbitrary plane by the wavefront analysis method described above, or by any known wavefront recovery (retrieval) method in a certain plane, and can be propagated to any other desired plane by known formulas. Reference is now made to fig. 10, which schematically illustrates the propagation of a wavefront in the Z-direction. A wavefront, in the form of a box 1000 and of given amplitude, travels a distance Z1 to reach plane P1. In plane P1, the complex amplitude of the propagated radiation may be represented by a function A (x, y) eiφ(x,y)|To describe. The amplitudes in P1 are no longer consistent. The complex amplitude of the radiation in plane P1 further propagates a distance Z2 to plane P2. With complex vibration of the wave frontAmplitude and phase of the amplitude are varied and a function A' (x, y) e is obtained in plane P2iφ′(x,y)|Different complex amplitudes are described. If the wavefront is known in one plane, it can be calculated in any other plane. In the above-mentioned PCT international publication No. WO03/062743, a method is described for calculating the physical propagation of the wavefront by software propagation, i.e. by using an algorithm based on the solution of Maxwell's equations to obtain different "focus" states. Using this method, if the measuring device is not focused on the object to be measured, the (unfocused) measured complex amplitude can be propagated from the measurement plane to any other desired plane to obtain a wavefront corresponding to the focused image.
In this way, the plane of best focus can be found from a series of propagated wavefronts, or a series of images, by identifying the wavefront or image with what is called "minimum entropy". One example of a useful entropy in this sense is the cumulative "surface area" of the complex amplitude function of the wavefront; this surface area may be obtained, for example, by integration over the complex amplitude function of the wavefront. Other possible entropies are the cumulative surface area of the amplitude function of the wavefront alone, or of the phase function of the wavefront. Using known propagation equations, a series of wavefronts can be obtained by software propagation of one measured complex wavefront to different planes; a series of images can likewise be obtained by software refocusing, or from any other source, such as from different focus positions of the object. For intensity images, one possible definition of the entropy is the cumulative surface area of the intensity function of the image. Referring now to fig. 11, there is shown the entropy of an arbitrary wavefront as a function of its propagation position along the focal axis; the zero point of the abscissa represents the starting plane where the wavefront was measured. It can be seen that as the focus passes through the best focus 1100, the entropy passes through a well-defined minimum. The local minimum 1102 on the right-hand side of the figure is an artefact caused by the beam-limiting aperture coming into focus.
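The following sketch shows one way to implement the minimum-entropy search, using the cumulative surface area of the amplitude function as the entropy; the discretization of the surface area and the scan over candidate distances are choices of the sketch, not prescriptions of the method.

```python
import numpy as np

def surface_area_entropy(field, dx):
    """Cumulative "surface area" of the amplitude function |A(x,y)|.

    Approximated by integrating sqrt(1 + |grad|A||^2) over the field of view.
    For typical extended objects, defocus spreads fringes over the whole
    field and tends to increase this measure, so the best focus minimises it
    (the "minimum entropy" criterion described above).
    """
    gy, gx = np.gradient(np.abs(field), dx)
    return np.sum(np.sqrt(1.0 + gx**2 + gy**2)) * dx * dx

def find_best_focus(field, wavelength, dx, z_candidates, propagate):
    """Scan propagation distances; return the minimum-entropy distance.

    propagate : a routine such as angular_spectrum_propagate sketched above.
    """
    entropies = [surface_area_entropy(propagate(field, wavelength, dx, z), dx)
                 for z in z_candidates]
    i = int(np.argmin(entropies))
    return z_candidates[i], entropies
```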
In accordance with another preferred method of the present invention, and with reference to the example shown in fig. 12, the "best focus" criterion is applied independently to different segments 1200, 1202 of the image or wavefront. By propagating the wavefront from the "best focus" plane 1204 of one segment to the "best focus" plane 1206 of another segment, the height difference between the two segments can be determined as the propagation distance between the two focal planes, as shown in fig. 12. In addition, the entropy of a segment can itself be used as a measure, or initial estimate, of the amount of defocus of that segment; in other words, by measuring the entropies of different segments, one can estimate the differences in their focus positions, and hence the height differences between the segments, from prior knowledge of the convergence rate of the entropy function. All three steps can thus be integrated into one height measurement method (sketched in code after the note below), namely:
(i) obtaining a plurality of complex wavefronts and corresponding images through wavefront propagation of a measured wavefront, each complex wavefront corresponding to a different focus state,
(ii) determining the "best focus" complex wavefront for each segment by applying a minimum-entropy algorithm to each segment, and
(iii) The height difference between any two segments is calculated by the "propagation distance" between the best-focus complex wavefront corresponding to the first segment and the best-focus complex wavefront corresponding to the second segment.
It should be noted that an image can be constructed by propagation in which both the first and second segments are in focus, even if no height difference between the segments is calculated.
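A sketch of the three integrated steps, assuming a segment label map is already available (for example from the segmentation methods described earlier) and reusing propagation and entropy routines like those sketched above:

```python
import numpy as np

def segment_height_differences(field, labels, wavelength, dx, z_candidates,
                               propagate, entropy):
    """Steps (i)-(iii): per-segment best focus and inter-segment heights.

    labels : integer mask assigning each pixel to a segment.
    propagate, entropy : routines like the earlier sketches; masking each
    segment before scoring is a simplification (mask edges add a small bias).
    """
    stack = [propagate(field, wavelength, dx, z) for z in z_candidates]  # (i)
    z_best = {}
    for seg in np.unique(labels):
        mask = labels == seg
        scores = [entropy(np.where(mask, w, 0.0), dx) for w in stack]    # (ii)
        z_best[seg] = z_candidates[int(np.argmin(scores))]
    segs = sorted(z_best)
    # (iii) height difference = propagation distance between best-focus planes
    return {(a, b): z_best[b] - z_best[a] for a in segs for b in segs if a < b}
```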
According to another preferred method of the invention, best-focus and height measurements can be obtained by applying stereoscopic wavefront propagation. Referring now to fig. 13, an object 1300 is observed by propagating the measured wavefront in software using only a portion of its angular spectrum, so that the propagated wavefront corresponds to viewing the object from a particular direction. Propagation in the desired direction is achieved by means of an effective aperture stop 1302. By effectively moving the aperture stop 1302 to its dashed position in fig. 13, the wavefront is propagated again in software using another portion of the angular spectrum, as shown by the dashed lines in fig. 13, corresponding to a different viewing direction. In this way, two different wavefront propagations in two different directions are obtained; this is similar to the two different images obtained in stereoscopic viewing. Using these two different wavefronts, depth or height data can be derived for the object, in a manner similar to how depth is perceived with two eyes.
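One way to realize the effective aperture stop 1302 in software is to mask an off-centre part of the angular spectrum before propagation; the circular stop shape and the example numbers below are illustrative assumptions.

```python
import numpy as np

def directional_view(field, dx, f_shift, f_radius):
    """Keep only the part of the angular spectrum inside an off-centre stop.

    f_shift, f_radius : centre offset and radius of the software aperture in
    spatial-frequency units (1/m). Two opposite shifts give a stereo pair.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    stop = (FX - f_shift) ** 2 + FY ** 2 <= f_radius ** 2
    return np.fft.ifft2(np.fft.fft2(field) * stop)

# A hypothetical stereo pair from one measured complex wavefront:
# left  = directional_view(field, dx=2e-6, f_shift=-2e4, f_radius=5e4)
# right = directional_view(field, dx=2e-6, f_shift=+2e4, f_radius=5e4)
```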
A further application of this best-focus position, according to a preferred method of the invention, is now provided. To increase the range of surface topography measurements made by the multi-wavelength wavefront determination method described above, prior data on the heights of the different segments in the field of view is often required, because imaging noise limits the ability of a multi-wavelength measurement scheme to overcome the 2π ambiguity. According to the invention, this prior data on the heights of the different segments can be obtained from the "best focus" of each segment. In a similar manner, prior data for resolving the 2π ambiguity of interferometry can be obtained from the "best focus" of each segment in the field of view.
In accordance with yet another preferred embodiment of the present invention, an apparatus and method are provided for increasing the range of surface topography measurements. In surface topography measurements it is often desirable to measure heights over a large range, but the range of interferometric height measurement is limited by the 2π ambiguity. One known method of increasing the height range is to use several different wavelengths to resolve the 2π ambiguity; however, this method is sensitive to noise.
According to this method, the order of the 2π ambiguity between pixels in different segments is calculated by combining phase reconstructions at at least two wavelengths, using the following algorithm:
A. The phase of the reconstructed wavefront is unwrapped at at least one wavelength (unambiguous phase measurements being obtained only in regions away from steps).
B. An anchor point is selected in each segment in the field of view (hereinafter FOV).
C. Using the phase of the reconstructed wavefront, the unambiguous height differences of pairs of points in each segment are calculated, one point of each pair being the anchor point of that segment. The height differences can be calculated unambiguously because the points of each pair are close to each other.
D. For pairs of points, one point taken from each of two segments, the ambiguity order between the points of each pair is determined; combining it with the unambiguous height differences of step C, the ambiguity order between the two anchor points of the two segments is obtained repeatedly, once for each pair of points.
E. A histogram of these ambiguity orders for the anchor points is built, and a value for the order is selected. This value may be the most likely value, the value closest to the mean, or any other statistical value derived from the histogram of ambiguity orders.
F. The selected order value is then used to derive the ambiguity orders of the individual points again, with higher accuracy.
This procedure may be repeated for different pairs of anchor points to improve accuracy and robustness to noise.
According to another approach, the order of the 2π ambiguity between pixels within different segments in the FOV can be calculated by combining phase reconstructions at at least two wavelengths, using a second algorithm, mathematically equivalent to the previous one, as follows (a code sketch of the histogram voting step, common to both algorithms, follows the list):
A. The phase of the reconstructed wavefront is unwrapped at at least one wavelength (unambiguous phase measurements being obtained only in regions away from steps).
B. For any two segments S and T in the FOV, multiple pairs of points (Mi, Ni) are selected, where Mi is a point of S and Ni is a point of T; for each pair (Mi, Ni), the reconstructions at the at least two wavelengths are combined to calculate the height of point Mi and the height of point Ni.
C. To the height difference between Mi and Ni obtained in step B, the unwrapped height (obtained in step A) at point Ni is added and the unwrapped height (obtained in step A) at point Mi is subtracted, yielding for each pair a difference Δi between the heights of Mi and Ni referenced to the unwrapped height maps.
D. A histogram of the differences Δi is built and a value for the ambiguity order is selected. This value may be the most likely value, the value closest to the mean, or any other statistical value derived from the histogram of ambiguity orders.
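The voting step common to both algorithms may be sketched as follows; the inputs (per-pair multi-wavelength heights, per-point unwrapped heights, and the height equivalent of one 2π order) are assumed given, and the histogram mode stands in for whichever statistic is preferred.

```python
import numpy as np

def inter_segment_order(h_s, h_t, u_s, u_t, order_height):
    """Estimate the 2*pi ambiguity order between two segments S and T.

    h_s, h_t : multi-wavelength heights of paired points (M_i in S, N_i in T).
    u_s, u_t : unwrapped single-wavelength heights at the same points (step A).
    order_height : the height corresponding to one 2*pi order (assumed known).
    """
    delta = (h_s - u_s) - (h_t - u_t)              # per-pair offset estimates
    orders = np.round(delta / order_height).astype(int)
    counts = np.bincount(orders - orders.min())
    best = orders.min() + int(np.argmax(counts))   # histogram mode (step D/E)
    return best
```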
According to a further preferred method of the invention, the order of this 2π ambiguity between different pixels in the field of view can be calculated by combining at least two wavefront reconstructions, whose phases are obtained at two wavelengths, using the following algorithm:
A. Several reference points and an anchor point are selected in the field of view.
B. Using the phase of the reconstructed wavefront, the ambiguity order between each pixel in the field of view and each reference point is calculated.
C. Using the ambiguity orders calculated between each pixel and the reference points, the ambiguity order between a given pixel and the anchor point is derived repeatedly, once per reference point.
D. A histogram of the ambiguity orders obtained for that pixel is built, and the most likely order is selected.
This method can be repeated for different anchor points to increase accuracy.
When two or more wavelengths are used to generate the surface topography of an object, two or more single-wavelength reconstructions can be obtained, one for each wavelength. Usually, the phase function of a single wavelength is used to determine the phase of the wavefront, while the 2π ambiguity of that wavelength's phase is resolved by combining it with the phases of the other wavelengths. However, the two or more resolved single-wavelength reconstructions can be used to generate an improved reconstruction in the following manner. At each location in the field of view, only one of the resolved single-wavelength reconstructions is used to determine the phase of the wavefront: the one that locally has the highest quality. Thus, different single-wavelength reconstructions can be used for different segments, depending on the quality of the data at each wavelength on each segment. The phases at the other wavelengths, which locally lack accuracy, are used only in combination, to resolve the 2π ambiguity of the more accurate wavelength's phase. Alternatively, an average of all the resolved single-wavelength reconstructions may be used, wherein the weights for calculating this average are determined by the quality maps of the respective single-wavelength reconstructions, which may differ at different positions in the FOV.
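A per-pixel quality-weighted average of the resolved reconstructions could look as follows; the quality maps (for example, local fringe contrast) are hypothetical inputs, since the description leaves the quality measure open.

```python
import numpy as np

def combine_reconstructions(height_maps, quality_maps):
    """Weighted per-pixel average of resolved single-wavelength height maps.

    height_maps, quality_maps : arrays of shape (n_wavelengths, H, W); all
    maps are assumed already resolved to the same 2*pi order.
    """
    w = np.asarray(quality_maps, dtype=float)
    w = w / np.maximum(w.sum(axis=0), 1e-12)   # normalise weights per pixel
    return np.sum(np.asarray(height_maps) * w, axis=0)
```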
In "white light interferometry", the fringe pattern is visible only at heights whose optical path difference relative to the reference mirror is smaller than the coherence length of the light. Thus, when a white light source is used together with a coherent light source, the "white light" fringe pattern can serve as a height anchor for resolving the ambiguity of interferometry with the coherent source. As an example, two regions in the FOV with a height difference of 1 μm might, using multi-wavelength interferometry, be interpreted as differing by 1 μm but equally well by 4 μm; using white light, it can be determined unambiguously that the two regions differ by 1 μm. In other words, white light interferometry used together with coherent light interferometry can provide the prior data. Reference is now made to fig. 14, which is a schematic illustration of an interferometric device based on combined white-light and coherent-light interferometry. The white light 1400 and coherent light 1402 are directed to the object 1401 by beam splitters 1404, 1406, and the reflected light is imaged onto the CCD 1408.
The use of broadband illumination for wavelength analysis causes errors in the height calculation due to the finite coherence length of the broadband light. According to yet another preferred embodiment of the invention, a measurement using broadband illumination, whose error is sufficiently low, can be used as a data generator providing the prior data for coherent-source interferometry.
According to yet another preferred method of the present invention, apparatus and optical elements are provided for improving the contrast of wavefront reconstruction. In the various phase-contrast methods, such as Zernike phase contrast and methods such as those described in international patent application publication No. WO03/062743, the imaging contrast depends on the aperture size and on the wavelength: the image arises from interference between light passing through the central region and light passing through the peripheral region of the Phase Light Modulator (PLM), and the contrast is determined by how comparable the light levels passing through the two regions are. The closer the energy levels of these two regions, the greater the contrast of the image. The longer the wavelength, the greater the spatial spread of the light in the plane of the PLM; likewise, the smaller the aperture, the greater the spatial spread of the light in the plane of the PLM. It is therefore desirable to scale the aperture as a function of wavelength, in order to obtain optimal contrast at each wavelength.
The aperture seen by different wavelengths can be adjusted by means of an aperture composed of concentric rings having different transmittances for different wavelengths, for example using different spectral filters, as shown in fig. 15. In this way each wavelength, or wavelength range, is provided with its own aperture. Such an aperture structure can optimize the contrast for the different wavelengths or wavelength ranges, the spatial dimension of the aperture being scaled according to the wavelength used.
According to another preferred embodiment of the invention, instead of placing the spectrally sensitive filter at the system aperture, it can be placed close to the PLM, in order to change the transmittance of the peripheral portion of the PLM relative to the central portion. When the transmittance of one of these regions is reduced, the contrast can be enhanced, and this can be done differently for each wavelength. If the contrast is low, the spatial spectral transmission function of the PLM, and in particular the relative spectral transmission of its central and peripheral regions, can be adjusted to improve it.
According to a further preferred method of the invention, a first polarizer and a second, rotating polarizer are added to the optical system before and after a phase controller made of birefringent material, in order to control and optimize the contrast obtained in the image plane under the different spatial filtering conditions of the microscope. The phase controller has a plurality of different spatial components. On each component, a different optical path difference between the two polarizations of the light can be selected by means of a control signal applied to that component. The polarization states set by the polarizers and the path differences on the components together determine the transmittance and the phase retardation of the light. Therefore, changing the optical path difference and rotating the second polarizer controls the transmittance and the phase retardation of the light at each spatial component of the phase controller.
In a preferred embodiment, the phase controller has two spatial components, and the optical axis of the birefringent material is at 45 degrees to the axis of the first polarizer. If the axis of the first polarizer is parallel to the X-axis, the transmittance τi of spatial component i of the phase controller can be expressed as:

τi = ½[1 + cos(2α)·cos(θi)]    (1)

wherein
θi is the phase retardation between the two polarizations produced by the phase controller at spatial component i, and
α is the angle of the rotating polarizer relative to the X-axis (the axis of the first polarizer).
The phase delay of the light at spatial component i of the phase controller, after passing through the rotating polarizer, is given by:

θi′ = arctan{ sin(θi)·(cosα − sinα) / [(cosα + sinα) + cos(θi)·(cosα − sinα)] }    (2)
the phase delay difference between the two spatial components of the phase controller is:
Δθ=θ1′-θ2′| (3)
wherein the phase delay difference is used to obtain a plurality of different phase-change wavefronts for use in wavefront analysis.
For any desired pair of transmittances τ1, τ2, there are four different solutions for Δθ, possibly requiring different values of α. These four solutions can be used to obtain the four different images required to provide a complete wavefront determination, as described in the background section of the present application. Alternatively, if the second polarizer is held fixed, i.e. for a fixed α, and the required transmittances are obtained by adjusting the phase retardation in the different spatial components of the PLM, then there are two solutions for each θi, hence four combinations of θ1′ and θ2′, and therefore four solutions for Δθ. These four solutions can likewise be used to obtain the four different images required for a complete wavefront determination.
According to the invention, for any given pair of transmittances τ1, τ2, four different phase delays can be found between the two components of the phase controller even when the phase controller itself is fixed. In a preferred embodiment, the fixed phase controller is made of birefringent material, with a polarizer in front of it and a rotating polarizer behind it; the optical axis of the birefringent material is at 45 degrees to the first polarizer. One component of the phase controller acts as a λ/4 wave plate and the other as a λ wave plate. In this case, the transmittance of the λ/4 section of the phase controller is always 0.5, while the transmittance of the λ section can be controlled by the rotation of the second polarizer and is given by:

τ = cos²(α)    (4)
the phase retardation in the λ waveplate section of the phase controller is always zero, but the phase retardation in the λ/4 waveplate section of the phase controller is given by:
using equation (5), one can target any given transmittance τ12Four different phase delays are found between the two components of the phase controller.
According to a further preferred method of the present invention, in several different embodiments, means and algorithms are provided for improving the image quality of wavefront reconstruction by reducing the noise introduced by coherent illumination of the object. According to a first such embodiment, comparing or combining the phase and amplitude components of wavefronts measured in different planes and propagated to a common plane can correct the wavefront measurement and reduce noise, since any difference between them is caused only by noise and not by real data. According to this method, noise reduction can be achieved by taking one measurement at the focal plane, including a full wavefront reconstruction, and another measurement in which the image is defocused by a known amount by the system hardware, also including a full wavefront reconstruction. The defocused wavefront is then refocused by propagation software over the known amount of defocus applied, as described in the methods above, thus creating a second in-focus wavefront; combining the two wavefronts by averaging, comparison, or any other known image processing function can then be used to reduce the noise.
According to yet another embodiment of the invention, a noisy wavefront, in which the noise is caused by a local disturbance in a different plane, such as dust or a defect in the optical path, is propagated to the plane in which the disturbance is localized, i.e. in which the disturbance is in focus. In that plane the disturbance can be eliminated, for example by interpolation or averaging of the neighbouring regions, or by any other method. The corrected wavefront is then propagated back to the original plane, or to any other desired plane, to produce a disturbance-free wavefront. The same method can be used to correct image aberrations: the wavefront can be propagated to the plane in which the aberration source is located and has a known form, the aberration is removed there, and the wavefront is propagated back to produce an aberration-free wavefront.
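A sketch of the dust-removal variant, assuming the disturbance plane and a mask marking the disturbance are known, and using mean interpolation from a one-pixel neighbourhood as the simplest stand-in for the interpolation step:

```python
import numpy as np

def one_pixel_dilate(mask):
    """One-pixel binary dilation by shifting (avoids any scipy dependency)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def remove_local_disturbance(field, wavelength, dx, z_src, mask, propagate):
    """Propagate to the plane where the disturbance is in focus, patch it
    from its surroundings, and propagate back to the original plane.

    mask : boolean array marking the in-focus disturbance in its own plane.
    propagate : an angular-spectrum routine like the one sketched earlier.
    """
    at_src = propagate(field, wavelength, dx, z_src)
    ring = np.logical_and(~mask, one_pixel_dilate(mask))   # neighbouring pixels
    at_src[mask] = at_src[ring].mean()                     # crude interpolation
    return propagate(at_src, wavelength, dx, -z_src)
```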
Referring now to fig. 16, there is shown diagrammatically a preferred method of reducing the effect of a disturbance in the wavefront, for example one caused by dust or imperfections in the optical path, in which the position of the source of the disturbance, i.e. the plane in which the disturbance is in focus, is calculated directly from the frequency and position of the annular fringes of the disturbance. This knowledge of the position can be used to cancel the disturbance, for example by adding, at the same location, a virtual disturbance source that cancels the real one. The disturbance may be a point-source disturbance or any other type of disturbance, such as one generated by an optical assembly. Fig. 16 shows the location 1602 of the disturbance source, its emitted wavefront 1606, and the resulting fringe pattern 1604.
Further preferred methods of the invention can be used to reduce noise in imaging, especially in coherent imaging, and in the results of the wavefront analysis described above, by obtaining a number of images of the object to be inspected via movement of any element in the optical imaging path. This movement may be one or a combination of the movements described below:
I. Movement of the light source illuminating the object along all three axes, with a corresponding movement of the PLM in the optical path that keeps it at the image plane of the moving light source, thereby compensating for the light-source movement; the resulting images are combined in the time domain.
II. Movement of the PLM in the optical path, used to generate the plurality of phase change transformed wavefronts.
III. Movement of the object along the Z-axis to different focus and defocus states.
IV. Movement of the object to different off-axis positions or different tilts, with image registration.
V. movement of any optical component in the optical path.
According to these methods, the movements are compensated and the plurality of images is averaged, reducing the influence of noise: the image information is additive, whereas the noise differs spatially at each movement position and is therefore averaged out. Compensation may be accomplished by registration of the multiple images followed by averaging of the registered images. Alternatively and preferably, the movements are compensated and the images averaged by means of hardware; alternatively, the movements are preferably compensated by means of software, such as wavefront propagation and control software. The registration, compensation and averaging can be performed both on the image intensities and on the measurements resulting from reconstruction of the object.
Referring now to fig. 17, there is shown a preferred apparatus, constructed and operative in accordance with another preferred embodiment of the present invention, for varying the optical path between the light source and a particular point 1722 on the PLM on which the beam impinges, without mechanical movement of any element of the imaging system itself. As the optical path through the system changes, the collimated beam illuminating the object is steered in such a way that it always impinges on the same point 1722 on the PLM, the beam passing through the imaging portion of the system between the light source and the PLM along different optical paths, as indicated at 1704 in fig. 17. On its return path, the beam is steered back onto a path parallel to its incident path. In fig. 17, a rotating wedge 1706 is used to generate the motion of the beam; the wedge is preferably rotated about an axis parallel to the optical axis of the illumination system. Rotation of the wedge 1706 causes the input illumination beam 1710, whose path is shown in phantom, to leave the wedge at its output point 1712 with a direction of propagation that varies cyclically with the rotational position of the wedge; fig. 17 shows this path for only one position of the wedge. In the preferred embodiment of fig. 17, the oscillating beam is directed into a roof pentaprism 1708, and the beam reflected from it is further reflected by a beam splitter 1714 toward the imaging portion 1704 of the system and then to the point 1722 on the PLM. The roof pentaprism 1708 produces an odd number of reflections in the X and Y directions which, combined with the reflection at the beam splitter 1714, gives an even number of reflections in total, creating a retro-reflection effect: the returned beam is always parallel to the incident beam. Any beam entering the wedge at a given angle returns at the same angle after traversing the entire optical path. Thus, the imaging relationship between the light source and the point of impact on the PLM is invariant, even though the optical path between them moves spatially as the wedge rotates.
According to a further preferred embodiment of the present invention, there is provided a method of reducing coherent noise in an imaging system by comparing or combining the calculated and measured intensities of the Fourier transform of an imaged object, to correct the measurement and thereby reduce the noise. A preferred implementation of such a method preferably comprises the steps of: (i) performing a measurement at the image plane of the object, including a full wavefront reconstruction, and calculating the Fourier transform of the reconstructed wavefront; (ii) obtaining an actual intensity image of the Fourier plane of the imaging system by directly imaging the Fourier plane; (iii) combining, averaging or otherwise processing the intensity of the calculated Fourier transform obtained from the reconstructed wavefront with the actual intensity image obtained at the Fourier plane, while leaving the phase of the calculated Fourier transform unchanged; and (iv) performing an inverse Fourier transform using this same phase function, to produce a modified wavefront reconstruction with reduced noise.
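A sketch of steps (i)-(iv), in which the measured Fourier-plane intensity is assumed to be already registered and scaled to the FFT grid (a calibration step the sketch glosses over), and simple averaging stands in for the combining step (iii):

```python
import numpy as np

def fuse_fourier_intensity(reconstructed_field, measured_fourier_intensity):
    """Replace the magnitude, keep the phase, transform back.

    (i) Fourier-transform the reconstructed wavefront; (iii) average its
    magnitude with the square root of the directly measured Fourier-plane
    intensity (ii); (iv) inverse-transform with the original phase intact.
    """
    F = np.fft.fft2(reconstructed_field)
    measured_mag = np.sqrt(np.maximum(measured_fourier_intensity, 0.0))
    fused_mag = 0.5 * (np.abs(F) + measured_mag)
    return np.fft.ifft2(fused_mag * np.exp(1j * np.angle(F)))
```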
In addition, coherent noise in the imaging system can preferably be reduced by using a combination of light sources, such as a broadband light source and a coherent light source. The broadband light source is used to produce a smooth image, to define the different segments in the field of view, and to determine preliminary segment heights to within the limits of the phase ambiguity, although these heights are less accurate because of the finite coherence length of the white light. The preliminarily calculated heights are then used as initial inputs to the phase obtained with the coherent light source, which determines the correct height of each segment accurately.
According to another preferred embodiment of the invention, coherent noise in the imaging system can be reduced when two or more wavelengths are used to generate the surface topography of an object and two or more single-wavelength reconstructions are obtained, one at each wavelength. Usually, one single-wavelength phase function is used to determine the phase of the wavefront, the phase functions of the other wavelengths being combined with it to resolve the 2π ambiguity of that wavelength's phase. However, the two or more resolved single-wavelength reconstructions can be used to generate an improved reconstruction in the following manner. At each position in the field of view the different single-wavelength reconstructions are compared, and where one or more of the resolved single-wavelength reconstructions gives a smooth pattern at a certain position, the corresponding patterns of the other single-wavelength reconstructions are smoothed in the same way. The smoothing may also be governed by more complex weighting algorithms, such as weighting by means of the quality maps of the single-wavelength reconstructions being smoothed.
According to another preferred embodiment of the invention, coherent noise in an imaging system can be reduced by using a combination or averaging of two images obtained from two different polarizations of light.
An imaging system operating with spatially coherent light may be noisy because of fringes caused by many sources, especially interference patterns between different layers in the light path. It may be desirable to reduce the spatial coherence of the light in order to eliminate such fringes and increase the lateral resolution. However, in order to obtain the plurality of intensity maps of the differently phase-changed transformed wavefronts, spatial coherence is needed over the wavefront to which the spatial phase change is applied. According to this preferred method, a light source with spatial coherence in only one dimension is therefore preferably used, for example a line source rather than a point source. The line source may be used in reflection from an inspected object, or in transmission through a partially transparent inspected object. In addition, the spatial function of the phase change applied at each spatial position of the transformed wavefront (labeled "G" above) is preferably a linear function, generating a spatially uniform phase delay across the central region of the transformed wavefront in an elongated, line-like region of relatively small width. Combined with the linear geometry of the line source, this reduces the problem to algorithms very similar to those described above, applied in one dimension. The linear phase delay can be introduced, for example, by a filter in the Fourier plane, as shown in fig. 18. In the preferred embodiment of fig. 18, the light leaves a line source 1800 and passes through the object 1802. The resulting wavefront is focused by a lens 1804 onto a linear phase controller 1806, preferably placed at the focal plane of the lens 1804. A second lens 1808 is positioned to image the wavefront onto the detector 1810.
In the arrangement of fig. 18 described above, the spatial coherence in the Y direction is eliminated. The convolution of the Fourier transforms of the object and of the filter is obtained in the image plane, on the camera surface in the preferred embodiment of fig. 18, in one dimension (X) only and not in the other dimension (Y). Thus, the calculations required to measure the inspected object, i.e. the phase and amplitude of the analyzed wavefront, need only be performed in one dimension instead of two. In addition, the measurement and analysis system is much less sensitive to tilt of the inspected object in the Y direction, whether the measurement is performed in reflection or in transmission; the inspected object can then be rotated to reduce the tilt sensitivity in the other dimension. It should be clear that a line is only an example of a preferred shape of the light source; any shape other than a point source affects the coherence and can therefore be used. The inspected object can be reconstructed independently in two dimensions by using a combination of two images, the spatial coherence of the light being destroyed in one dimension at a time. The two reconstructions may then be combined to reconstruct the third dimension of the 2-D image of the object.
These two one-dimensional reconstructions can preferably be obtained by rotating the light source and the phase plate in the Fourier plane together. Alternatively, the two one-dimensional reconstructions can be obtained by using two different polarizations of the light, each polarization having its own one-dimensional light source and its own one-dimensional phase plate in the Fourier plane; a rotating polarizer then preferably sends the respective intensity maps to the camera sequentially in time. More preferably, the light source may comprise two crossed one-dimensional light sources (line sources), each with a different polarization. The phase plate in the Fourier plane then comprises birefringent material with a crossed pattern, one line of the cross applying the appropriate phase shift to one polarization only, and the orthogonal line applying the appropriate phase shift to the other polarization. A rotating polarizer preferably sends the intensity maps to the camera sequentially in time.
In many applications it is desirable to measure small topographical features that can only just be resolved by the optical system, or that are even much smaller than the smallest dimension the optical system can resolve. The accuracy required for such measurements may be orders of magnitude better than the resolution of an optical system in which the topographical features appear blurred, or are even invisible, under conventional imaging. Referring now to fig. 19, an image of an integrated optical waveguide structure obtained by a microscope with an x50 objective lens is depicted. The widths of the features, and of the spaces between the waveguides, need to be measured with an accuracy of 0.05 μm, using an optical system with a resolution of 0.5 μm.
Referring now to fig. 20, a magnified portion of the image of fig. 19 is shown, corresponding to the marked area in fig. 19, illustrating how the details of the edges in the image are blurred by the limited resolution of the microscope. When the image of the target to be measured is acquired at different defocus positions, the degree of blur of the image changes with the defocus level.
Referring now to fig. 21, plots are shown of the illumination cross-section across the edge of the waveguide in the image of the device, for different defocus levels. It can be seen that all the cross-sections pass through the same point; the width indicated at that height is the true width of the black lines.
According to this method, several images of the object to be measured are preferably taken at different defocus positions, and cross-sections of the illumination across the topographical features of interest in the images are plotted. An accurate measurement of the edges, and of the spacings between the topographical features, is obtained by finding the points at which the illumination plots, considered as intensity versus lateral position for the different focus positions, all converge to a fixed point. The illumination source may have any degree of coherence. A higher accuracy in determining the true width of narrow lines is obtained when the lines are oriented at an angle of rotation relative to the main axes of the imaging sensor; this is indicated in figs. 19 and 20 by the narrow lines running diagonally to the X and Y axes of the imaging sensor.
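The crossing-point criterion may be sketched as follows, taking as input the intensity cross-sections already extracted across one edge at several defocus levels; normalization and sub-pixel refinement are omitted.

```python
import numpy as np

def edge_position(profiles, x):
    """Lateral position where cross-sections at different defocus levels cross.

    profiles : array (n_focus, n_x), intensity cuts across the same edge.
    x        : lateral sample coordinates.
    The defocus level changes the blur, but all curves pass through a fixed
    point at the edge, so the spread across focus is minimal there.
    """
    spread = profiles.std(axis=0)
    return x[int(np.argmin(spread))]

# A feature width then follows from two edges:
# width = edge_position(right_profiles, x) - edge_position(left_profiles, x)
```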
According to another preferred embodiment of the present invention, several measurements of the object under test are taken at different defocus positions using a wavefront analysis system. Cross-sections of the intensity, the phase, or both, across the edges of the topographical features in the image are plotted, and an accurate measurement of the edges and of the spacings between the features is obtained by finding the half-height points of these plots.
Reference is now made to fig. 22, which schematically illustrates a cross-section of a periodic subwavelength structure whose details are to be resolved and measured by means of another preferred embodiment of the invention. According to the method of this embodiment, spectroscopic ellipsometry is used to perform such sub-wavelength measurements. The periodic structure is mathematically sliced into several virtual layers. Each slice has different average optical parameters, n and k, since the proportions of the different materials composing the slice differ. In the preferred embodiment of fig. 22, the materials are air and the material that makes up the structure itself. If the sliced periodic sub-wavelength structure is treated as a regular multilayer stack, the average optical parameters n and k of each slice can be obtained by spectroscopic ellipsometry and related algorithms. From them, the percentages of the different materials in each slice of the topographical feature can be obtained. These calculated percentages can be compared with the percentages expected from the designed structure, and any deviation from the expected percentages of the different materials in each slice may be interpreted as a deviation of the fabricated structure from the intended structure.
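For illustration, the simplest per-slice averaging is plain volume-fraction mixing of the complex indices, shown below; actual analyses typically use an effective-medium approximation (for example Bruggeman), which the present description does not specify.

```python
import numpy as np

def slice_average_index(fill_fraction, n_material, n_ambient=1.0 + 0.0j):
    """Average complex index (n + ik) of each virtual slice by linear mixing.

    fill_fraction : fraction of the structure material in each slice, e.g. a
    trapezoidal line profile sliced into layers of decreasing fill.
    """
    f = np.asarray(fill_fraction, dtype=float)
    return f * n_material + (1.0 - f) * n_ambient

# e.g. slice_average_index([1.0, 0.8, 0.6, 0.4, 0.2], n_material=1.46 + 0.0j)
```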
Alternatively and preferably, the measured average optical parameters n and k of each slice are compared directly with the expected average optical parameters n and k, any deviation being interpreted as a deviation of the manufactured structure from the designed structure. Alternatively, the measured average optical parameters n and k of each slice are preferably compared with n and k data stored in a library of many simulated periodic sub-wavelength structures; any deviation from the stored data can be interpreted as a deviation of the fabricated topographical feature from the simulated structure.
According to another preferred embodiment of the invention, periodic subwavelength structures are measured by spectroscopic ellipsometry using the wavefront analysis system of the present invention, as described above. In this case, each pixel of the imaged wavefront can be considered to correspond to a different periodic sub-wavelength structure, and the spectroscopic ellipsometry described above is applied independently to each pixel of the image.
In the semiconductor Integrated Circuit (IC) industry, ever higher circuit packing densities are required. This demand has driven the development of new materials and processes that achieve increased packing densities and sub-micron device sizes. Fabricating ICs at such tiny sizes adds complexity to the circuits and raises the need for improved methods of inspecting the integrated circuit at the different stages of production. An IC is built up of many layers that make up the devices and the circuit conductors. Overlay error is the misregistration between layers generated in the lithographic process, and overlay measurements are used to monitor that process.
Some preferred methods of performing improved overlay measurements are now described, based on the use of phase data for alignment and measurement of films. These methods have several potential uses and advantages over existing methods for the measurement of overlay targets. By propagating the complex amplitude of the measured wavefront from the top surface of one of the overlapping materials to any other desired plane, in accordance with the method of the present invention, focused images of the different layers can be obtained. The images of these different planes are derived by software operations on a single wavefront, preferably obtained in a single imaging step and in a short time, and are therefore free of the noise and mechanical disturbances associated with taking multiple images of the different layers at different focus levels. The images of the different layers can then be measured, compared, or registered with one another.
Another preferred method is used to enhance contrast. Some overlay targets are difficult to view using conventional bright-field imaging schemes; these include overlay targets after chemical mechanical polishing (CMP), and targets comprising very thin layers of only a few nanometers. The method of the present invention allows better identification of such targets, since the low contrast due to phase differences between the imaged layers can be enhanced. Moreover, the method makes it possible to distinguish very thin layers, generally down to less than 10 nm.
Another preferred method utilizes the 3D information, which can provide additional real-world information about the complete topography of the inspected object, to improve data analysis and misregistration calculation. The 3D data may indicate asymmetries of the process, such as a tilt of the box or different tilts of the box edges. Information about the tilt of a layer, at the microscopic level, can be used for process feedback or for controlling any chemical/tool process. Where a tilt phenomenon is observed, simple tilt elimination via software can improve the accuracy and repeatability of the misregistration calculation.
The method of the invention, as a phase analysis tool, allows the reconstruction of a height map of the FOV with a relatively large depth of focus, since a slice that may be out of focus in the intensity map can be brought to focus by the phase method. This feature allows several layers to be measured in a single grab, i.e. without the need to focus on each layer in succession. Such multifocal imaging otherwise requires a "double grab", a prior-art procedure that is prone to errors, such as misregistration of the images due to mechanical motion. Moreover, the additional time required for each imaging step is avoided, thereby increasing throughput.
This 3D information can be obtained even at small defocus. This means that the effective depth of focus for 3D measurements is larger than that of a conventional 2D system using the same optical system.
By propagating the complex amplitude of the reconstructed wavefront from one plane to any other desired plane using known formulas, an extended 3D and object surface mapping range is obtained without requiring further scanning.
There is no need to focus the measuring device on the measured object. The complex amplitude of the measured wavefront at one plane can be propagated from the measurement plane to any other desired plane to obtain an image of a focused object.
By propagating the complex amplitude of the measured wavefront from the measurement plane to any other desired plane to obtain an image of a focused object, the absolute distance between the two planes can be calculated.
By propagating the complex amplitude of the measured wavefront from the measurement plane to any other desired plane to obtain a focused image, an image of a focused object of large depth of focus can be obtained.
According to the present invention, a 3D sensor can be added to an existing 2D overlay inspection system.
The 3D sensor added to the existing 2D system provides 3D information that can be used to find an optimal focus for 2D measurements.
The 3D sensor used as a focus system can also handle semi-transparent layers, especially if there is prior knowledge of the refractive index and the nominal thickness of such an insulating layer.
This 3D information may also provide data that can predict Tool Induced Shift (TIS) problems and allow data analysis and therefore focus correction.
Using the 3D information in combination with the 2D measurements allows better analysis of misregistration for pass/fail (or any 0/1) decisions, in the spirit of a "majority vote".
Since the 3D sensor requires only a single wavelength (or a narrow bandwidth), an optical system with better correction of chromatic distortion can be designed.
Images of the object may be obtained at different defocus positions. By finding the line width at high resolution using the different focus positions, and finding the intersection of the profiles as described above, the position of each target can be determined with better accuracy.
The above methods and implementations have been described without always setting out specific details or elements. Some possible ways of extending the present methods and apparatus, as well as some possible method details and equipment elements for carrying them out, are described in PCT application No. PCT/IL/01/00335, U.S. patent No. 6,819,435, and PCT application No. PCT/IL02/00833.
It should be noted that the specific examples or details of specific numbers are given merely to illustrate one possible implementation of the invention and the method of the invention is not limited to them.
It should be noted that the details and particulars of the method and apparatus of the present invention described in this document, including any combination of these features, are merely examples of some possible systems and implementations of the method of the present invention, and the method of the present invention is not limited thereto.
It is appreciated that various features of the invention which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.

Claims (21)

1. A method for obtaining a focused image of an object, comprising the steps of:
illuminating the object;
obtaining amplitude and phase information of a wavefront of said illumination emanating from said object in an arbitrary plane, in which plane said wavefront does not necessarily produce a focused image;
calculating the morphology of the wavefront on a series of further planes along the propagation path of the wavefront, by means of a mathematical solution of the propagation properties of the wavefront; and
determining in which of the further planes the wavefront has the form of a focused image.
2. The method of claim 1, wherein the step of determining at which of the further planes the wavefront has the morphology of a focused image comprises:
calculating, in each of said further planes, an entropy of a complex-valued function of at least one optical property of the wavefront, wherein said entropy is determined from measurements of cumulative surface areas of said complex-valued function of the wavefront; and
determining the propagation step at which said entropy is smallest.
3. The method of claim 2, wherein the complex-valued function of the wavefront is at least one of a complex amplitude function, a complex phase function, and a complex amplitude and phase function.
4. The method of claim 1, wherein
the object has a first segment and a second segment, with a height difference between the first and second segments;
determining in which of said further planes said wavefront has the morphology of a focused image of said first segment;
determining in which of said further planes said wavefront has the form of a focused image of said second segment;
the method further comprises the steps of:
the height difference is obtained by subtracting the distance between the further plane in which the wavefront has the form of one focused image of the second segment and the further plane in which the wavefront has the form of one focused image of the first segment.
5. The method of claim 4, wherein the steps of determining at which of the further planes the wavefront has the morphology of a focused image of the first segment and determining at which of the further planes the wavefront has the morphology of a focused image of the second segment are applied separately to each segment.
6. The method of claim 5, wherein the steps of determining at which of said further planes said wavefront has the morphology of a focused image of said first segment and determining at which of said further planes said wavefront has the morphology of a focused image of said second segment are accomplished by applying a minimum entropy algorithm to each segment.
7. The method of claim 6, wherein applying a minimum entropy algorithm to each segment comprises calculating a cumulative surface area of a complex function modulus of the wavefront for each segment.
8. The method of claim 4, wherein said height difference between said two segments is used as an estimated height difference to reduce phase ambiguity arising in other measurement methods.
9. The method of claim 8 wherein the other measurement methods include a multi-wavelength wavefront determination method.
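Claims 8 and 9 use the (comparatively coarse) focus-derived height step to remove the 2π ambiguity of a fine interferometric phase measurement. In reflection, one full fringe corresponds to half a wavelength of height, so the coarse estimate only needs to be accurate to within about ±λ/4 to fix the fringe order, as in this sketch (geometry and names are assumptions):

```python
def resolve_fringe_order(phi, wavelength, h_coarse):
    """Combine a wrapped phase phi (radians) with a coarse height estimate
    h_coarse to remove the 2*pi ambiguity; reflection geometry assumed,
    so one fringe spans wavelength / 2 of height."""
    half = wavelength / 2.0
    h_frac = (phi / (2.0 * np.pi)) * half         # height modulo half
    order = np.round((h_coarse - h_frac) / half)  # integer fringe order
    return h_frac + order * half
```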
10. The method of claim 1, wherein
a defocused image of said wavefront is produced on said arbitrary plane, and
during said determining step, focused amplitude and phase wavefront information is obtained in one of said further planes, said plane being at a refocusing distance from said arbitrary plane, the method further comprising the steps of:
refocusing the defocused image using the refocusing distance;
obtaining refocused amplitude and phase information of the wavefront of illumination emanating from the object in the plane located at the refocusing distance from the arbitrary plane; and
combining the refocused amplitude and phase wavefront information with the focused amplitude and phase wavefront information to reduce coherent noise in the imaged object.
11. The method of claim 10, wherein the combining step is performed by at least one of averaging, comparing, and image processing.
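Claims 10 and 11 combine a directly focused acquisition with a numerically refocused one to suppress coherent noise, which decorrelates between the two routes while the object signal does not. A minimal sketch of that pipeline, reusing the functions from the earlier sketches; `u_defocused`, `u_focused`, `wavelength`, `dx` and `z_candidates` are assumed inputs:

```python
# Find the refocusing distance from the defocused measurement (claim 10).
z_refocus = find_focus_plane(u_defocused, wavelength, dx, z_candidates)
u_refocused = angular_spectrum_propagate(u_defocused, wavelength, dx,
                                         z_refocus)

# Combine by averaging (one of the options of claim 11): coherent noise
# that differs between the two images is reduced, the object is preserved.
image = 0.5 * (np.abs(u_focused) ** 2 + np.abs(u_refocused) ** 2)
```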
12. The method of claim 1, wherein
the wavefront on the arbitrary plane exhibits noise generated by a perturbation located on a second plane;
said determining step comprises determining in which of the further planes the image of the perturbation is optimally focused;
and the method further comprises the steps of:
altering the wavefront in the plane of optimal focus of the perturbation such that the perturbation is cancelled; and
using said altered wavefront, calculating new amplitude and phase wavefront information on said arbitrary plane by means of a mathematical solution of the propagation properties of said wavefront, with which information an image free of the noise caused by said local perturbation can be obtained.
13. The method of claim 12, wherein the perturbation appears as concentric fringes caused by an out-of-focus dust particle.
14. The method of claim 12, wherein the disturbance is eliminated by image processing.
15. The method of claim 12, wherein the disturbance is caused by dust or a defect in a propagation path of the wavefront.
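Claims 12 to 15 remove a localized perturbation, such as the concentric fringes of an out-of-focus dust particle, by propagating to the plane where the perturbation itself is sharp, editing it out there, and propagating back. A sketch, under the assumption that the disturbed pixels have already been identified in a boolean array `bad_pixels`:

```python
def remove_perturbation(u0, wavelength, dx, z_dust, bad_pixels):
    """Propagate to the plane where the perturbation is best focused,
    replace the disturbed pixels with the background level (a crude
    in-painting), and propagate back to the original plane."""
    u = angular_spectrum_propagate(u0, wavelength, dx, z_dust)
    background = np.median(np.abs(u[~bad_pixels]))
    u[bad_pixels] = background                   # flat amplitude, zero phase
    return angular_spectrum_propagate(u, wavelength, dx, -z_dust)
```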
16. The method of claim 1, wherein the wavefront at the arbitrary plane exhibits aberrations of an optical system imaging the object, the method further comprising the steps of:
determining in which of the further planes the source of the aberrations is located;
altering the wavefront at the location of the aberration source such that the aberrations are eliminated; and
using the altered wavefront, calculating new amplitude and phase wavefront information on another plane by means of a mathematical solution of the propagation properties of the wavefront, with which information an image free of the aberrations can be obtained.
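Claim 16 applies the same back-propagation idea to aberrations: once the plane of the aberration source is found, multiplying by the conjugate of the aberration phase cancels it. In the sketch, `aberration_phase` (e.g. a fitted Zernike map) is an assumed input rather than something the claim tells us how to obtain:

```python
def remove_aberration(u0, wavelength, dx, z_source, aberration_phase):
    """Cancel a known or estimated aberration by applying its conjugate
    phase at the plane of the aberration source, then propagating back."""
    u = angular_spectrum_propagate(u0, wavelength, dx, z_source)
    u = u * np.exp(-1j * aberration_phase)       # conjugate-phase correction
    return angular_spectrum_propagate(u, wavelength, dx, -z_source)
```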
17. The method of claim 1, wherein
the object is a multilayer structure having a first layer and a second layer;
said step of obtaining amplitude and phase information of a wavefront in an arbitrary plane comprises obtaining amplitude and phase information of a first complex wavefront map representing an image of a plane within said first layer;
said step of calculating the form of the wavefront on said series of further planes by means of a mathematical solution of the propagation properties of said wavefront comprises calculating amplitude and phase information of a second complex wavefront map representing an image of a plane within said second layer; and
the method further comprises the step of: comparing the first and second complex wavefront maps to provide information about the overlay of the first and second layers.
18. The method of claim 17, wherein the information about the overlay of the first and second layers is provided in a single imaging process, without refocusing of the imaging system.
19. The method of claim 17, wherein using the amplitude and phase information enables an increased contrast measurement compared to an imaging method that does not use phase information.
20. The method of claim 17, wherein three-dimensional information about the multilayer structure can be obtained using the amplitude and phase information, thereby improving a measure of misregistration compared to an imaging method that does not use phase information.
21. The method of claim 17, wherein using the phase information enables increased depth of focus measurements compared to an imaging method that does not use the phase information, thereby enabling imaging of multiple layers in a single imaging process.
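Claims 17 to 21 exploit the fact that a single measured complex wavefront can be refocused numerically to any layer of a multilayer structure, so the overlay between two layers is measurable in one acquisition. A sketch that refocuses to each layer and estimates the lateral offset by cross-correlating the refocused amplitudes (all names illustrative):

```python
def layer_overlay_shift(u0, wavelength, dx, z_layer1, z_layer2):
    """Refocus one acquisition to each of two layers and estimate their
    lateral offset by cross-correlating the refocused amplitude images."""
    a1 = np.abs(angular_spectrum_propagate(u0, wavelength, dx, z_layer1))
    a2 = np.abs(angular_spectrum_propagate(u0, wavelength, dx, z_layer2))
    spectrum = (np.fft.fft2(a1 - a1.mean()) *
                np.conj(np.fft.fft2(a2 - a2.mean())))
    corr = np.abs(np.fft.ifft2(spectrum))
    iy, ix = np.unravel_index(int(np.argmax(corr)), corr.shape)
    ny, nx = corr.shape
    dy = iy - ny if iy > ny // 2 else iy         # wrap to signed shifts
    dxp = ix - nx if ix > nx // 2 else ix
    return dxp * dx, dy * dx                     # offset in physical units
```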
HK08105578.1A 2004-03-11 2005-03-11 Methods and apparatus for wavefront manipulations and improved 3-d measurements HK1119231B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US55257004P 2004-03-11 2004-03-11
US60/552,570 2004-03-11
PCT/IL2005/000285 WO2005086582A2 (en) 2004-03-11 2005-03-11 Methods and apparatus for wavefront manipulations and improved 3-d measurements

Publications (2)

Publication Number Publication Date
HK1119231A1 (en) 2009-02-27
HK1119231B (en) 2009-12-04

Similar Documents

Publication Number Title
CN100485312C (en) Method and apparatus for wavefront control and improved 3D measurement
US7684049B2 (en) Interferometer and method for measuring characteristics of optically unresolved surface features
EP1476715B1 (en) Improved spatial wavefront analysis and 3d measurement
JP5443209B2 (en) Profiling complex surface structures using scanning interferometry
JP5827794B2 (en) Profiling complex surface structures using scanning interferometry
JP4885212B2 (en) Method and system for analyzing low coherence interferometer signals for information about thin film structures
US7869057B2 (en) Multiple-angle multiple-wavelength interferometer using high-NA imaging and spectral analysis
TWI448661B (en) Interferometer utilizing polarization scanning
US20060274325A1 (en) Method of qualifying a diffraction grating and method of manufacturing an optical element
TW201825864A (en) Characterized scanning white light interferometry system for patterning semiconductor features
US20090262335A1 (en) Holographic scatterometer
HK1119231B (en) Methods and apparatus for wavefront manipulations and improved 3-d measurements
JP2003065709A (en) Interference fringe analysis method for transparent parallel plates
WO2025141587A1 (en) Vertically-resolved metrology with asymmetry-sensitive measurement
US7898672B1 (en) Real-time scanner-nonlinearity error correction for HDVSI