

Methods for inspecting images

Info

Publication number
WO2026002519A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
received image
defects
reference image
comparing
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/064966
Other languages
French (fr)
Inventor
Aiqin JIANG
Thomas I. Wallow
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ASML Netherlands BV
Original Assignee
ASML Netherlands BV
Application filed by ASML Netherlands BV
Publication of WO2026002519A1

Classifications

    All classifications fall within section G (Physics), class G06 (Computing or calculating; counting), subclass G06T (Image data processing or generation, in general):
    • G06T 7/001: Image analysis; industrial image inspection using an image reference approach
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 2207/10061: Image acquisition modality; microscopic image from a scanning electron microscope
    • G06T 2207/20056: Transform domain processing; discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20224: Image combination; image subtraction
    • G06T 2207/30148: Subject of image; semiconductor; IC; wafer
    • G06T 2207/30168: Subject of image; image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

A method of inspecting an image includes receiving the image, and generating a reference image based on the received image. The method also includes comparing the received image with the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.

Description

METHODS FOR INSPECTING IMAGES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority of US application 63/665,897 which was filed on 28 June 2024 and which is incorporated herein in its entirety by reference.
TECHNICAL FIELD
[0002] The embodiments provided herein generally relate to methods for inspecting images, for example, for detecting defects in a sample.
BACKGROUND
[0003] In manufacturing processes of integrated circuits (ICs), unfinished or finished circuit components are inspected to ensure that they are manufactured according to design and are free of defects. Charged particle beam based systems, such as charged particle (e.g., electron) beam microscopes, including the scanning electron microscope (SEM), can be employed to obtain images of patterns or structures in the IC. These obtained images may be compared with reference images of the structure to identify defects. As the physical sizes of IC components continue to shrink, accuracy and repeatability in image inspection and defect detection become more and more important. Defect detection involves inspection of images of semiconductor device structures during the device fabrication processes, and comparison of these images with reference images to identify possible defects on the structures.
SUMMARY
[0004] In some embodiments, a non-transitory computer readable medium stores a set of instructions that is executable by at least one processor of a computing device to cause the computing device to perform operations for inspecting an image. The operations may comprise receiving the image and generating a reference image based on the received image using a filtering technique associated with the received image. The operations may also include comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
[0005] In some embodiments, a method of inspecting an image may comprise receiving the image, generating a reference image based on the received image using a filtering technique associated with the received image, and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
[0006] In some embodiments, an inspection system may include a controller having one or more processors and a memory. The controller may include circuitry to cause the one or more processors to perform operations for inspecting an image. The operations may comprise receiving the image, generating a reference image based on the received image using a filtering technique associated with the received image, and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
[0007] Other advantages of the embodiments of the present disclosure will become apparent from the following description taken in conjunction with the accompanying drawings wherein are set forth, by way of illustration and example, certain embodiments of the present invention.
BRIEF DESCRIPTION OF FIGURES
[0008] The above and other aspects of the present disclosure will become more apparent from the description of exemplary embodiments, taken in conjunction with the accompanying drawings.
[0009] Fig. 1 is a schematic diagram illustrating an example charged-particle beam system, consistent with embodiments of the present disclosure.
[0010] Fig. 2A is a schematic diagram illustrating an example multi-beam tool, consistent with embodiments of the present disclosure, that can be a part of the example charged-particle beam system of Fig. 1.
[0011] Fig. 2B is a schematic diagram illustrating an example single-beam tool, consistent with embodiments of the present disclosure, that can be a part of the example charged-particle beam system of Fig. 1.
[0012] Fig. 3 is a schematic illustration of identifying defects based on image inspection.
[0013] Figs. 4A-4D illustrate an exemplary technique to detect defects based on inspection of images, consistent with some embodiments of the present disclosure.
[0014] Fig. 5 is a flow chart of an exemplary method of detecting defects based on inspection of images, consistent with some embodiments of the present disclosure.
[0015] Fig. 6 illustrates a two-dimensional Fast Fourier Transform of an image of a sample, consistent with some embodiments of the present disclosure.
[0016] Figs. 7A-7C illustrate different reference images generated from an image of a sample, consistent with some embodiments of the present disclosure.
[0017] Figs. 8A-8B illustrate identifying defects based on image inspection, in an example case consistent with the present disclosure.
DETAILED DESCRIPTION
[0018] Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the disclosed embodiments as recited in the appended claims. For example, although some embodiments are described in the context of utilizing electron beams, the disclosure is not so limited. Other types of charged particle beams (e.g., including protons, ions, muons, or any other particle carrying electric charges) may be similarly applied. Furthermore, other imaging systems may be used, such as optical imaging, photon detection, x-ray detection, ion detection, etc.
[0019] Electronic devices are constructed of circuits formed on a piece of semiconductor material called a substrate. The semiconductor material may include, for example, silicon, gallium arsenide, indium phosphide, or silicon germanium, or the like. Many circuits may be formed together on the same piece of silicon and are called integrated circuits or ICs. The size of these circuits has decreased dramatically so that many more of them can be fit on the substrate. For example, an IC chip in a smartphone can be as small as a thumbnail and yet may include over 2 billion transistors, the size of each transistor being less than 1/1000th the size of a human hair.
[0020] Making these ICs with extremely small structures or components is a complex, time-consuming, and expensive process, often involving hundreds of individual steps. Errors in even one step have the potential to result in defects in the finished IC, rendering it useless. Thus, one goal of the manufacturing process is to avoid such defects to maximize the number of functional ICs made in the process; that is, to improve the overall yield of the process.
[0021] One component of improving yield is monitoring the chip-making process to ensure that it is producing a sufficient number of functional integrated circuits. One way to monitor the process is to inspect the chip circuit structures at various stages of their formation. Inspection can be carried out using a scanning charged-particle microscope (“SCPM”). For example, an SCPM may be a scanning electron microscope (SEM). An SCPM can be used to image these extremely small structures, in effect, taking a “picture” of the structures of the wafer. The image can be used to determine if the structure was formed properly in the proper location. If the structure is defective, then the process can be adjusted, so the defect is less likely to recur.
[0022] As the physical sizes of IC components continue to shrink, accuracy and yield in defect detection become more and more important. Defect inspection involves inspection (and in some cases, measurements) of device structures using inspection images during wafer fabrication processes, and then the images are further processed to identify possible defects on the wafer. In some cases, critical dimensions (CDs) of patterns/structures measured from an SEM image can be used for identifying defects of manufactured ICs. The term critical dimensions refers to any geometric parameters or features (e.g., line width, space width, thickness, aspect ratio, overlay accuracy, etc.) of the device structure that, for example, may affect the functionality and performance of the device. For example, shifts between patterns or edge placement variations, which are determined from measured critical dimensions, can be helpful in identifying defects. Without accurate metrology for image inspection, accurate defect identification is hardly possible. Therefore, accuracy and yield in defect detection is fundamentally based on accurate inspection of images of device structures. However, accurate image inspection and repeatability of the inspection are limited by metrology tool error (e.g., calibration ruler error), process variations (e.g., leading to line-width roughness or trench-width roughness), measurement error (e.g., alignment variation), measurement tool noise (e.g., a limited number of electrons when inspecting a line/edge), limitations of the metrology, etc.
[0023] Conventional methods for inspecting images and image data, for example, methods used in analyzing electron microscope images and data, rely on comparing a test image (or data derived from the test image) to a reference image (or data derived from the reference image). When both the test image and the reference image are derived from experimental images, the method is typically called a die-to-die method. When the test image is an experimental image and the reference image is obtained through simulation or by prior knowledge of what information is expected in the test image, the method is typically called a die-to-database method. Both methods are widely used in the semiconductor industry to identify deviations and flaws (generally referred to as “defects”) in lithographically defined patterns on semiconductor wafers.
[0024] The goal of these image inspection methods is to detect differences between the test image (or data, such as gray level values (or GLV), extracted from the test image) and the reference image (or data extracted from the reference image), and to disposition the observed differences in a way that captures all relevant defects and distinguishes between “true defects” (e.g., those that represent real on-wafer patterning failures) and “false defects” (e.g., those that represent differences due to various sources of image noise and analysis uncertainties rather than real on-wafer patterning errors). In general, die-to-die methods require comparison of an experimental test image with an experimental reference image. In this case, the reference image is obtained from a lithographically produced pattern that is similar (but maybe not identical) to the pattern imaged in the test image. In this case, the method must distinguish between expected variability due to sources such as lithographic process variation and local variations (such as, for example, line-edge roughness) and true defects. Major drawbacks to this approach include the need to compensate for image noise present in both the test and reference image, possible inclusion of across-wafer lithographic process variations between the test and reference image, possible inclusion of variations in the behavior of the microscope (or imaging tool) between the test and the reference image, and possible inclusion of uncertainties arising from the alignment of the test and reference image. Many other sources of uncertainties are known.
[0025] In general, die-to-database methods require comparison of an experimental test image to a reference image (or data) derived from simulation of the expected pattern or generated based on prior knowledge of the pattern (such as, for example, pattern size and pattern pitch). In this case, the method must compare many different experimental variants of the test pattern that include lithographic process variation and local variations such as line-edge roughness with a reference pattern that is invariant. Drawbacks of this approach include the need to compensate for image noise present in the test image, the need for highly accurate models to generate expected pattern shapes, the inability to distinguish between true differences between test and reference data arising from process window variations versus differences due to true defects, the inability to distinguish between the differences arising from variations in the behavior of the imaging tool (such as, for example, magnification errors and other grid distortions) from differences due to true defects, and possible inclusion of uncertainties arising from the alignment of the test and reference image. Many other sources of uncertainties are known.
[0026] According to some embodiments of the present disclosure, an image analysis method is provided that is relatively insensitive to expected variations in the performance of the tool and the expected variations in the fabrication process. For instance, a reference image can be created based on a representation of the test image that reduces the information content of the reference relative to the test image itself. In some embodiments, the representation of the test image can be filtered by, for example, selectively removing higher spatial frequency content from the test image while preserving lower spatial frequency content. This type of filtering allows for adjusting the sensitivity of the filtering to identify different types of defects.
[0027] Relative dimensions of components in drawings may be exaggerated for clarity. Within the following description of drawings, the same or like reference numbers refer to the same or like components or entities, and only the differences with respect to the individual embodiments are described. Other objects and advantages of the disclosure may be realized by the elements and combinations as set forth in the embodiments discussed herein. However, embodiments of the present disclosure are not necessarily required to achieve such exemplary objects or advantages, and some embodiments may not achieve any of the stated objects or advantages.
[0028] Without limiting the scope of the present disclosure, some embodiments may be described in the context of providing scanning deflection systems and scanning deflection methods in systems utilizing electron beams (“e-beams”). Some scanning deflection systems may use electric fields to influence a charged particle beam. However, the disclosure is not so limited. Other types of charged particle beams may be similarly applied. For example, systems and methods may be applicable with optics, photons, x-rays, and ions, etc. Deflection may be used to scan a beam over a surface in, for example, cathode ray tubes (CRTs), lithography machines, scanning charged-particle microscopes (SCPMs), or other analytical instruments. While some embodiments are discussed with reference to deflection systems that use electric fields to influence a beam, deflection may also be achieved with magnetic fields, for example.
[0029] As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a component includes A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component includes A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C. Expressions such as “at least one of” do not necessarily modify an entirety of a following list and do not necessarily modify each member of the list, such that “at least one of A, B, and C” should be understood as including only one of A, only one of B, only one of C, or any combination of A, B, and C. The phrase “one of A and B” or “any one of A and B” shall be interpreted in the broadest sense to include one of A, or one of B.
[0030] Fig. 1 illustrates an example electron beam inspection (EBI) system 100 consistent with embodiments of the present disclosure. EBI system 100 may be used for imaging. As shown in Fig. 1, EBI system 100 includes a main chamber 101, a load/lock chamber 102, a beam tool 104, and an equipment front end module (EFEM) 106. Beam tool 104 is located within main chamber 101. EFEM 106 includes a first loading port 106a and a second loading port 106b. EFEM 106 may include additional loading port(s). First loading port 106a and second loading port 106b receive wafer front opening unified pods (FOUPs) that contain wafers (e.g., semiconductor wafers or wafers made of other material(s)) or samples to be inspected (wafers and samples may be used interchangeably). A “lot” is a plurality of wafers that may be loaded for processing as a batch.
[0031] One or more robotic arms (not shown) in EFEM 106 may transport the wafers to load/lock chamber 102. Load/lock chamber 102 is connected to a load/lock vacuum pump system (not shown) which removes gas molecules in load/lock chamber 102 to reach a first pressure below the atmospheric pressure. After reaching the first pressure, one or more robotic arms (not shown) may transport the wafer from load/lock chamber 102 to main chamber 101. Main chamber 101 is connected to a main chamber vacuum pump system (not shown) which removes gas molecules in main chamber 101 to reach a second pressure below the first pressure. After reaching the second pressure, the wafer is subject to inspection by beam tool 104. Beam tool 104 may be a single-beam system or a multi-beam system.
[0032] A controller 109 (or control unit) is electronically connected to beam tool 104. In the context of a charged particle beam apparatus, the controller may be responsible for managing and regulating various aspects of the beam generation, manipulation, and delivery processes of EBI system 100. Controller 109 may include circuits (or circuitry) that enables various functionalities. Some exemplary circuits and their related functionalities will be described. For example, in some embodiments, controller 109 may include circuitry that enables it to control one or more of the intensity, focus, energy, and direction of the charged particle beam in beam tool 104. The controller may adjust these parameters according to the requirements of the specific application. Alternatively, or additionally, the controller may also include circuitry that enables it to manage mechanisms for steering and deflecting the charged particle beam. These steering or deflecting circuits may activate electromagnetic fields or use other techniques to manipulate the trajectory of the beam. For example, these circuits may include power supplies, electromagnetic coils, electrostatic lenses, or other mechanisms for controlling the trajectory and direction of the beam.
[0033] In some embodiments, controller 109 may additionally or alternatively include circuits that enable it to continuously monitor the stability of the beam and adjust parameters to maintain suitable performance of beam tool 104. These circuits may include sensors, detectors, and feedback loops to measure parameters such as beam current, position, energy, intensity, etc. in real-time. These circuits may incorporate feedback systems to detect deviations from desired beam characteristics and make real-time corrections. For example, the feedback system may compare measured beam parameters with desired setpoints and adjust control signals to minimize deviations and ensure consistent beam quality. In some embodiments, controller 109 may also include circuits that enable data acquisition and analysis, allowing users to collect and analyze data generated by the charged particle beam interactions with the sample.
[0034] It should be noted that it is not a requirement that controller 109 include circuits corresponding to all, or any, of the above-described exemplary functionalities. In other words, controller 109 includes circuitry that enables it to control some aspects of EBI system 100. In some embodiments, controller 109 may be a computer configured to execute various controls of EBI system 100. While controller 109 is shown in Fig. 1 as being outside of the structure that includes main chamber 101, load/lock chamber 102, and EFEM 106, it is appreciated that controller 109 may be a part of the structure.
[0035] In some embodiments, controller 109 may include one or more processors (not shown). A processor may be an electronic device capable of manipulating or processing information. For example, the processor may include any combination of any number of a central processing unit (or “CPU”), a graphics processing unit (or “GPU”), an optical processor, a programmable logic controller, a microcontroller, a microprocessor, a digital signal processor, an intellectual property (IP) core, a Programmable Logic Array (PLA), a Programmable Array Logic (PAL), a Generic Array Logic (GAL), a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), a System On Chip (SoC), an Application-Specific Integrated Circuit (ASIC), a neural processing unit (NPU), and any other type of circuit capable of data processing. The processor may also be a virtual processor that includes one or more processors distributed across multiple machines or devices coupled via a network.
[0036] In some embodiments, controller 109 may further include one or more memories (not shown). A memory may be an electronic device capable of storing codes and data accessible by the processor (e.g., via a bus). For example, the memory may include any combination of any number of a random-access memory (RAM), a read-only memory (ROM), an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a secure digital (SD) card, a memory stick, a compact flash (CF) card, or any type of storage device. The codes and data may include an operating system (OS) and one or more application programs (or “apps”) for specific tasks. The memory may also be a virtual memory that includes one or more memories distributed across multiple machines or devices coupled via a network.
[0037] Fig. 2A illustrates a schematic diagram of an example multi-beam beam tool 104A (also referred to herein as apparatus 104A) and an image processing system 290 that may be configured for use in EBI system 100 (Fig. 1), consistent with embodiments of the present disclosure.
[0038] Beam tool 104A comprises a charged-particle source 202, a gun aperture 204, a condenser lens 206, a primary charged-particle beam 210 emitted from charged-particle source 202, a source conversion unit 212, a plurality of beamlets 214, 216, and 218 of primary charged-particle beam 210, a primary projection optical system 220, a motorized wafer stage 280, a wafer holder 282, multiple secondary charged-particle beams 236, 238, and 240, a secondary optical system 242, and a charged-particle detection device 244. Primary projection optical system 220 can comprise a beam separator 222, a deflection scanning unit 226, and an objective lens 228. Charged-particle detection device 244 can comprise detection sub-regions 246, 248, and 250.
[0039] Charged-particle source 202, gun aperture 204, condenser lens 206, source conversion unit 212, beam separator 222, deflection scanning unit 226, and objective lens 228 can be aligned with a primary optical axis 260 of apparatus 104A. Secondary optical system 242 and charged-particle detection device 244 can be aligned with a secondary optical axis 252 of apparatus 104A.
[0040] Charged-particle source 202 can emit one or more charged particles, such as electrons, protons, ions, muons, or any other particle carrying electric charges. In some embodiments, charged-particle source 202 may be an electron source. For example, charged-particle source 202 may include a cathode, an extractor, or an anode, wherein primary electrons can be emitted from the cathode and extracted or accelerated to form primary charged-particle beam 210 (in this case, a primary electron beam) with a crossover (virtual or real) 208. For ease of explanation without causing ambiguity, electrons are used as examples in some of the descriptions herein. However, it should be noted that any charged particle may be used in any embodiment of this disclosure, not limited to electrons. Primary charged-particle beam 210 can be visualized as being emitted from crossover 208. Gun aperture 204 can block off peripheral charged particles of primary charged-particle beam 210 to reduce Coulomb effect. The Coulomb effect may cause an increase in size of probe spots.
[0041] Source conversion unit 212 can comprise an array of image-forming elements and an array of beam-limit apertures. The array of image-forming elements can comprise an array of microdeflectors or micro-lenses. The array of image-forming elements can form a plurality of parallel images (virtual or real) of crossover 208 with a plurality of beamlets 214, 216, and 218 of primary charged-particle beam 210. The array of beam-limit apertures can limit the plurality of beamlets 214, 216, and 218. While three beamlets 214, 216, and 218 are shown in Fig. 2A, embodiments of the present disclosure are not so limited. For example, in some embodiments, the apparatus 104A may be configured to generate a first number of beamlets. In some embodiments, the first number of beamlets may be in a range from 1 to 1000. In some embodiments, the first number of beamlets may be in a range from 200 to 500. In some embodiments, an apparatus 104A may generate 400 beamlets.
[0042] Condenser lens 206 can focus primary charged-particle beam 210. The electric currents of beamlets 214, 216, and 218 downstream of source conversion unit 212 can be varied by adjusting the focusing power of condenser lens 206 or by changing the radial sizes of the corresponding beam-limit apertures within the array of beam-limit apertures. Objective lens 228 can focus beamlets 214, 216, and 218 onto a wafer 230 for imaging, and can form a plurality of probe spots 270, 272, and 274 on a surface of wafer 230.
[0043] Beam separator 222 can be a beam separator of Wien filter type generating an electrostatic dipole field and a magnetic dipole field. In some embodiments, if they are applied, the force exerted by the electrostatic dipole field on a charged particle (e.g., an electron) of beamlets 214, 216, and 218 can be substantially equal in magnitude and opposite in a direction to the force exerted on the charged particle by magnetic dipole field. Beamlets 214, 216, and 218 can, therefore, pass straight through beam separator 222 with zero deflection angle. However, the total dispersion of beamlets 214, 216, and 218 generated by beam separator 222 can also be non-zero. Beam separator 222 can separate secondary charged-particle beams 236, 238, and 240 from beamlets 214, 216, and 218 and direct secondary charged-particle beams 236, 238, and 240 towards secondary optical system 242.
[0044] Deflection scanning unit 226 can deflect beamlets 214, 216, and 218 to scan probe spots 270, 272, and 274 over a surface area of wafer 230. In response to the incidence of beamlets 214, 216, and 218 at probe spots 270, 272, and 274, secondary charged-particle beams 236, 238, and 240 may be emitted from wafer 230. Secondary charged-particle beams 236, 238, and 240 may comprise charged particles (e.g., electrons) with a distribution of energies. For example, secondary charged-particle beams 236, 238, and 240 may be secondary electron beams including secondary electrons (energies < 50 eV) and backscattered electrons (energies between 50 eV and landing energies of beamlets 214, 216, and 218). Secondary optical system 242 can focus secondary charged-particle beams 236, 238, and 240 onto detection sub-regions 246, 248, and 250 of charged-particle detection device 244. Detection sub-regions 246, 248, and 250 may be configured to detect corresponding secondary charged-particle beams 236, 238, and 240 and generate corresponding signals (e.g., voltage, current, or the like) used to reconstruct an inspection image of structures on or underneath the surface area of wafer 230.
[0045] The generated signals may represent intensities of secondary charged-particle beams 236, 238, and 240 and may be provided to image processing system 290 that is in communication with charged-particle detection device 244, primary projection optical system 220, and motorized wafer stage 280. The movement speed of motorized wafer stage 280 may be synchronized and coordinated with the beam deflections controlled by deflection scanning unit 226, such that the movement of the scan probe spots (e.g., scan probe spots 270, 272, and 274) may orderly cover regions of interest on the wafer 230. The parameters of such synchronization and coordination may be adjusted to adapt to different materials of wafer 230. For example, different materials of wafer 230 may have different resistance-capacitance characteristics that may cause different signal sensitivities to the movement of the scan probe spots.
[0046] The intensity of secondary charged-particle beams 236, 238, and 240 may vary according to the external or internal structure of wafer 230, and thus may indicate whether wafer 230 includes defects. Moreover, as discussed above, beamlets 214, 216, and 218 may be projected onto different locations of the top surface of wafer 230, or different sides of local structures of wafer 230, to generate secondary charged-particle beams 236, 238, and 240 that may have different intensities. Therefore, by mapping the intensity of secondary charged-particle beams 236, 238, and 240 with the areas of wafer 230, image processing system 290 may reconstruct an image that reflects the characteristics of internal or external structures of wafer 230.
[0047] In some embodiments, image processing system 290 may include an image acquirer 292, a storage 294, and a controller 296. Image acquirer 292 may comprise one or more processors. For example, image acquirer 292 may comprise a computer, server, mainframe host, terminals, personal computer, any kind of mobile computing devices, or the like, or a combination thereof. Image acquirer 292 may be communicatively coupled to charged-particle detection device 244 of beam tool 104A through a medium such as an electric conductor, optical fiber cable, portable storage media, IR, Bluetooth, internet, wireless network, wireless radio, or a combination thereof. In some embodiments, image acquirer 292 may receive a signal from charged-particle detection device 244 and may construct an image. Image acquirer 292 may thus acquire inspection images of wafer 230. Image acquirer 292 may also perform various post-processing functions, such as generating contours, superimposing indicators on an acquired image, or the like. Image acquirer 292 may be configured to perform adjustments of brightness and contrast of acquired images. In some embodiments, storage 294 may be a storage medium such as a hard disk, flash drive, cloud storage, random access memory (RAM), other types of computer-readable memory, or the like. Storage 294 may be coupled with image acquirer 292 and may be used for saving scanned raw image data as original images, and post-processed images. Image acquirer 292 and storage 294 may be connected to controller 296. In some embodiments, image acquirer 292, storage 294, and controller 296 may be integrated together as one control unit.
[0048] In some embodiments, image acquirer 292 may acquire one or more inspection images of a wafer based on an imaging signal received from charged-particle detection device 244. An imaging signal may correspond to a scanning operation for conducting charged particle imaging. An acquired image may be a single image comprising a plurality of imaging areas. The single image may be stored in storage 294. The single image may be an original image that may be divided into a plurality of regions. Each of the regions may comprise one imaging area containing a feature of wafer 230. The acquired images may comprise multiple images of a single imaging area of wafer 230 sampled multiple times over a time sequence. The multiple images may be stored in storage 294. In some embodiments, image processing system 290 may be configured to perform image processing steps with the multiple images of the same location of wafer 230.
[0049] In some embodiments, image processing system 290 may include measurement circuits (e.g., analog-to-digital converters) to obtain a distribution of the detected secondary charged particles (e.g., secondary electrons). The charged-particle distribution data collected during a detection time window, in combination with corresponding scan path data of beamlets 214, 216, and 218 incident on the wafer surface, can be used to reconstruct images of the wafer structures under inspection. The reconstructed images can be used to reveal various features of the internal or external structures of wafer 230, and thereby can be used to reveal any defects that may exist in the wafer.
[0050] In some embodiments, the charged particles may be electrons. When electrons of primary charged-particle beam 210 are projected onto a surface of wafer 230 (e.g., probe spots 270, 272, and 274), the electrons of primary charged-particle beam 210 may penetrate the surface of wafer 230 for a certain depth, interacting with particles of wafer 230. Some electrons of primary charged-particle beam 210 may elastically interact with (e.g., in the form of elastic scattering or collision) the materials of wafer 230 and may be reflected or recoiled out of the surface of wafer 230. An elastic interaction conserves the total kinetic energies of the bodies (e.g., electrons of primary charged-particle beam 210) of the interaction, in which the kinetic energy of the interacting bodies does not convert to other forms of energy (e.g., heat, electromagnetic energy, or the like). Such reflected electrons generated from elastic interaction may be referred to as backscattered electrons (BSEs). Some electrons of primary charged-particle beam 210 may inelastically interact with (e.g., in the form of inelastic scattering or collision) the materials of wafer 230. An inelastic interaction does not conserve the total kinetic energies of the bodies of the interaction, in which some or all of the kinetic energy of the interacting bodies convert to other forms of energy. For example, through the inelastic interaction, the kinetic energy of some electrons of primary charged-particle beam 210 may cause electron excitation and transition of atoms of the materials. Such inelastic interaction may also generate electrons exiting the surface of wafer 230, which may be referred to as secondary electrons (SEs). Yield or emission rates of BSEs and SEs depend on, e.g., the material under inspection and the landing energy of the electrons of primary charged-particle beam 210 landing on the surface of the material, among others. The energy of the electrons of primary charged-particle beam 210 may be imparted in part by its acceleration voltage (e.g., the acceleration voltage between the anode and cathode of charged-particle source 202 in Fig. 2A). The quantity of BSEs and SEs may be more or fewer than (or even the same as) the injected electrons of primary charged-particle beam 210.
[0051] Another example of a charged particle beam apparatus will now be discussed with reference to Fig. 2B. Beam tool 104B (also referred to herein as apparatus 104B) may be an example of beam tool 104 and may be similar to beam tool 104A shown in Fig. 2A. However, different from apparatus 104A, apparatus 104B may be a single-beam tool that uses only one primary electron beam to scan one location on the wafer at a time.
[0052] As shown in Fig. 2B, apparatus 104B includes a wafer holder 136 supported by motorized stage 134 to hold a wafer 150 to be inspected. Beam tool 104B includes an electron emitter, which may comprise a cathode 103, an anode 121, and a gun aperture 122. Beam tool 104B further includes a beam limit aperture 125, a condenser lens 126, a column aperture 135, an objective lens assembly 132, and a detector 144. Objective lens assembly 132, in some embodiments, may be a modified SORIL lens, which includes a pole piece 132a, a control electrode 132b, a deflector unit 132c, and an exciting coil 132d. In a detection or imaging process, an electron beam 161 emanating from the tip of cathode 103 may be accelerated by anode 121 voltage, pass through gun aperture 122, beam limit aperture 125, condenser lens 126, and be focused into a probe spot 170 by the modified SORIL lens and impinge onto the surface of wafer 150. Probe spot 170 may be scanned across the surface of wafer 150 by a deflector, such as deflector unit 132c or other deflectors in the SORIL lens. Secondary or scattered particles, such as secondary electrons or scattered primary electrons emanated from the wafer surface may be collected by detector 144 to determine intensity of the beam and so that an image of an area of interest on wafer 150 may be reconstructed.
[0053] There may also be provided an image processing system 199 that includes an image acquirer 120, a storage 130, and controller 109. Image acquirer 120 may comprise one or more processors. For example, image acquirer 120 may comprise a computer, server, mainframe host, terminals, personal computer, any kind of mobile computing devices, and the like, or a combination thereof. Image acquirer 120 may connect with detector 144 of beam tool 104B through a medium such as an electrical conductor, optical fiber cable, portable storage media, IR, Bluetooth, internet, wireless network, wireless radio, or a combination thereof. Image acquirer 120 may receive a signal from detector 144 and may construct an image. Image acquirer 120 may thus acquire images of wafer 150. Image acquirer 120 may also perform various post-processing functions, such as image averaging, generating contours, superimposing indicators on an acquired image, and the like. Image acquirer 120 may be configured to perform adjustments of brightness and contrast, etc. of acquired images. Storage 130 may be a storage medium such as a hard disk, random access memory (RAM), cloud storage, other types of computer readable memory, and the like. Storage 130 may be coupled with image acquirer 120 and may be used for saving scanned raw image data as original images, and post-processed images. Image acquirer 120 and storage 130 may be connected to controller 109. In some embodiments, image acquirer 120, storage 130, and controller 109 may be integrated together as one electronic control unit.
[0054] In some embodiments, image acquirer 120 may acquire one or more images of a sample based on an imaging signal received from detector 144. An imaging signal may correspond to a scanning operation for conducting charged particle imaging. An acquired image may be a single image comprising a plurality of imaging areas that may contain various features of wafer 150. The single image may be stored in storage 130. Imaging may be performed on the basis of imaging frames.
[0055] The condenser and illumination optics of the electron beam tool may comprise or be supplemented by electromagnetic quadrupole electron lenses. For example, as shown in Fig. 2B, electron beam tool 104B may comprise a first quadrupole lens 148 and a second quadrupole lens 158. In some embodiments, the quadrupole lenses may be used for controlling the electron beam. For example, first quadrupole lens 148 may be controlled to adjust the beam current and second quadrupole lens 158 may be controlled to adjust the beam spot size and beam shape.
[0056] Fig. 2B illustrates a charged particle beam apparatus that may use a single primary beam configured to generate secondary electrons by interacting with wafer 150. Detector 144 may be placed along optical axis 105, as in the embodiment shown in Fig. 2B. The primary electron beam may be configured to travel along optical axis 105. Accordingly, detector 144 may include a hole at its center so that the primary electron beam may pass through to reach wafer 150. Fig. 2B shows an example of detector 144 having an opening at its center. However, some embodiments may use a detector placed off-axis relative to the optical axis along which the primary electron beam travels. For example, as in the embodiment shown in Fig. 2A, discussed above, a beam separator 222 may be provided to direct secondary electron beams toward a detector placed off-axis. Beam separator 222 may be configured to divert secondary electron beams toward an electron detection device 244, as shown in Fig. 2A.
[0057] As used herein, the term defect broadly refers to any deviation from the intended design, configuration, material, etc., of a structure on a sample. In the context of integrated circuit samples, such defects may include, among others, irregularities in a geometric pattern (e.g., missing lines, extra lines, incorrect line widths, variation in line pattern, misalignment of layers, etc.), breaks, cracks, voids in the material, foreign particles or residues on the sample, impurities or contaminants, unintended oxidation of metal layers, residues from a fabrication process, short circuits, open circuits, leakage paths, surface roughness, step coverage issues, topographic irregularities, etc. In short, any anomaly in the sample can be considered as a defect. These defects can occur at various stages of the manufacturing process of an integrated circuit. Digital images of the sample (e.g., acquired by an SCPM) may be used to identify defects on the sample. For example, as explained previously and as schematically illustrated in Fig. 3, an experimentally obtained image (experimental image (b)) of the sample (e.g., wafer, die, etc.) capturing a test device region may be compared with a reference image (a) of the same region (e.g., an experimentally obtained reference image of the sample in a die-to-die method) or a numerically generated reference image (e.g., representation of the imaged region without defects in a die-to-database method). If a difference between the experimental image (b) and the reference image (a) exceeds a tolerance level, a potential defect may be identified. These defects are circled in images (b) and (c) of Fig. 3 for clarity. It should be noted that the illustrated defects are merely exemplary and not all defects are circled. Moreover, although the identified defects are shown as dark (or black) marks or spots in image (c), this is only exemplary. In general, the defects may be identified in any manner or color (or GLV).
[0058] In some embodiments, data (e.g., gray level value (GLV) data) corresponding to the images (e.g., experimental and reference images) may be compared to detect defects. GLV data refers to the variation of grayscale intensity (or gray scale values) within the captured image. Grayscale intensity typically corresponds to the contrast or brightness of the pixels in the image and may be represented on a scale from 0 (black) to 255 (white) in an 8-bit grayscale image. In an exemplary application, the SEM image of a pattern on the sample surface is captured. Then, suitable algorithms may be applied to compare the GLV data associated with the experimental image to GLV data associated with a reference image of the pattern to detect defects. By analyzing the GLV data, measurements such as the distance between specific intensity thresholds may also be used to determine critical dimensions (e.g., the width) of structures (e.g., a trace) or the defects in the imaged pattern.
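By way of illustration only (this sketch is not part of the disclosed embodiments), a width can be estimated from a 1-D gray-level profile as the pixel distance between intensity-threshold crossings. The function name, the threshold, and the use of NumPy are assumptions:

```python
import numpy as np

def width_between_thresholds(profile: np.ndarray, threshold: float) -> int:
    # Width (in pixels) of the region where the gray-level profile meets or
    # exceeds the chosen intensity threshold; purely illustrative.
    above = np.flatnonzero(profile >= threshold)
    return int(above[-1] - above[0] + 1) if above.size else 0

# Example (hypothetical): take one row of an 8-bit SEM image as the profile.
# line_width_px = width_between_thresholds(sem_image[row, :], threshold=128)
```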
[0059] In embodiments of the current disclosure, instead of using an experimentally obtained reference image of the sample (as in a typical die-to-die method), or a numerically generated reference image of the sample (as in a typical die-to-database method), to compare with an experimentally obtained image of the sample (e.g., an image generated by an SCPM), a representation of the experimentally obtained image itself is used as the reference image. In other words, in embodiments of the current disclosure, the reference image (e.g., image (a) of Fig. 3) used for comparison with the experimental image (e.g., image (b) of Fig. 3) is a representation of the experimental image itself. In general, the information content in the experimental image is reduced and the resulting reduced image is used as the reference image. For example, information associated with the defects is removed from the experimental image, and the resulting image is used as the reference image. In some embodiments, higher spatial frequency content (or data) is selectively removed from the experimental image and the resulting image (which preserves the lower spatial frequency content of the experimental image) is used as the reference image.
[0060] In general, any known numerical or image processing method (e.g., frequency-domain filtering (FFT filtering), singular value decomposition filtering (SVD filtering), etc.) may be used to selectively remove higher spatial frequency content (or data) from the experimental image. In general, various image processing methods (e.g., low-pass filtering, Fourier transform filtering, wavelet transform, etc.) or algorithms can be used to selectively remove higher spatial frequency data from an experimentally obtained image of a sample. Low-pass filtering allows low-frequency components to pass through while attenuating higher-frequency components. Common low-pass filtering methods or algorithms include a gaussian filter, which applies a gaussian function to smooth the image and effectively reduces high-frequency noise and details; a mean filter, which replaces each pixel value with the average value of its neighboring pixels; and a median filter, which replaces each pixel value with the median value of its neighboring pixels. Fourier transform filtering is a frequency-domain filtering technique that transforms the image to the frequency domain (using a Fourier transform), applies a filter (e.g., a low-pass filter), and transforms the filtered image back to the spatial domain. Wavelet transforms decompose the image into different frequency components, allowing selective removal of high-frequency details from the image.
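As a minimal sketch of one such option (assuming the experimental image is available as a 2-D NumPy array and that SciPy is installed; the smoothing strength is an illustrative, tunable parameter, not a value from the disclosure), a gaussian low-pass filter can turn the test image into a reference image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_reference(test_image: np.ndarray, sigma_px: float = 8.0) -> np.ndarray:
    # Smooth the test image so that lower spatial frequencies are preserved
    # and high-frequency content (noise and small defects) is suppressed.
    return gaussian_filter(test_image.astype(np.float64), sigma=sigma_px)
```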
[0061] Fig. 4A illustrates an exemplary experimental image of a sample (e.g., generated by an SCPM), and Fig. 4B illustrates an exemplary reference image obtained by selectively removing higher spatial frequency content from the experimental image of Fig. 4A. Note that the experimental image of Fig. 4A contains both relatively small local irregularities (some of which are circled and marked A) that result from defects, and larger image inhomogeneities (some of which are marked B) that are imaging artifacts resulting from various sources of image noise and analysis uncertainties rather than real defects. The high spatial frequency content in the experimental image typically comprises signal noise and defects in the sample. As can be seen in Fig. 4B, selectively removing higher spatial frequency content from the experimental image of Fig. 4A removes the smaller local irregularities from the image while retaining the larger inhomogeneities.
[0062] The experimental image (Fig. 4A) is then compared with the generated reference image (Fig. 4B) to identify defects in the sample. Since most defects in the sample are relatively small, and thus possess high spatial frequency content, they are selectively removed from the generated reference image. Therefore, when the experimental image is compared with the reference image, these defects show up as differences between the two images. Figs. 4C and 4D illustrate the identified defects in an example case. The defects are shown within the added squares in Figs. 4C and 4D. The identified defects are shown as dark spots in Fig. 4C and overlaid on the experimental image of the sample in Fig. 4D. The illustration of defects (e.g., number, pattern, shape, etc.) in Figs. 4C and 4D is only exemplary, and the defects may be identified in any color and in any manner.
[0063] The experimental image (e.g., Fig. 4A) may be compared to the generated reference image (e.g., Fig. 4B) to identify defects in any manner. Various algorithms and techniques may be applied to numerically compare the two digital images (or corresponding data), to identify the differences between them, and thereby identify defects. In some embodiments, the two images may be compared pixel-by-pixel. For example, after ensuring that the two images are of the same size and are aligned (e.g., structures in both images are at the same pixel locations), the intensity (or GLV) at different pixel locations may be subtracted to identify the differences between them. Differences exceeding a predetermined threshold value may be considered to be defects. The threshold value may be determined empirically (e.g., experimentally) or based on prior knowledge. Other known methods (e.g., structural similarity index (SSIM), image histogram comparison, feature-based methods, etc.) may also be used to identify defects based on a numerical comparison of the two images.
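A minimal sketch of the pixel-by-pixel comparison described above, assuming aligned, same-size NumPy arrays and an empirically chosen threshold (both assumptions made for illustration):

```python
import numpy as np

def defect_mask(test: np.ndarray, reference: np.ndarray, threshold: float) -> np.ndarray:
    # Boolean mask of pixels whose absolute intensity (GLV) difference exceeds
    # the predetermined threshold; True pixels are candidate defects.
    diff = np.abs(test.astype(np.float64) - reference.astype(np.float64))
    return diff > threshold
```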
[0064] Fig. 5 is a flow chart of an exemplary method 500 that may be used to identify defects based on a comparison of the experimental and reference images. In step 510, the experimental image of a location (e.g., with a test pattern) on a sample is obtained. In some embodiments, an imaging tool (e.g., tools 100 of Fig. 1, 104A of Fig. 2A, 104B of Fig. 2B, etc.) may be used to acquire the experimental image. In some embodiments, a previously acquired experimental image of the sample saved in a memory (e.g., a database, storage 130 of Fig. 2B, etc.) may be retrieved and used in step 510. In some embodiments, step 510 may also include denoising the experimental image. Denoising a digital image involves reducing or removing high-frequency noise while preserving important details and features in the image. Any known technique (or algorithm) may be used to denoise the image in step 510. In some embodiments, a filtering algorithm (e.g., an algorithm using a mean filter, median filter, gaussian filter, etc.) may be used to denoise the image in step 510. Alternatively, or additionally, in some embodiments, an algorithm using one or more of frequency domain filtering, edge-preserving filtering, or wavelet transform may be applied in denoising the image. In some embodiments of the current disclosure, a gaussian filter may be applied to the experimental image to denoise it.
[0065] A reference image of the sample may be generated from the experimental image in step 520. In some embodiments, the reference image may be generated based on the received image using a filtering technique associated with the received image. For example, the reference image may be generated in step 520 by selectively removing higher spatial frequency content from the experimental image obtained in step 510. An exemplary method of selectively removing higher spatial frequency content from the image will be described later with reference to Fig. 6. The received image (or a representation of the received image) may then be compared with the reference image (or a representation of the reference image) to identify one or more defects in the received image.
[0066] In step 530, the absolute difference between the experimental image and the reference image may be calculated. In this step, after aligning the two images, the absolute value of the pixel-by-pixel difference between the two images may be calculated. For example, after ensuring that the two images are of the same size and the two images are aligned (e.g., structures in both images are at the same pixel locations), the absolute value of the difference in intensity at each pixel location of the two images may be calculated. Notably, since the reference image (of step 520) is generated from the experimentally obtained image (of step 510), the two images may be perfectly aligned. In some embodiments, if the images are in color, they may be converted to grayscale as part of, or before, step 530.
[0067] In step 540, the pattern (or image) of absolute differences obtained in step 530 may be denoised. As discussed previously with reference to step 510, any known technique or algorithm may be used to denoise the image in step 540. In some embodiments, a technique similar to that used to denoise the experimental image in step 510 may be used in step 540. In some embodiments, a filtering algorithm (e.g., an algorithm using a mean filter, median filter, Gaussian filter, etc.) may be used to denoise the image in step 540.
[0068] In step 550, defect strength may be calculated from the denoised absolute difference image of step 540. In some embodiments, the denoised absolute difference image may also be converted to an image of binary values (e.g., 0 or 1) in this step. Any suitable algorithm may be used to calculate defect strength from the absolute difference values, Abs(dGLV). In some embodiments, defect strength may be calculated by applying an algorithm such as, for example, Abs(dGLV)/T*σ to the absolute difference values at each pixel location. T may be an empirically obtained and tunable threshold value, and σ may be a parameter related to image noise. In some embodiments, T and σ may be selected experimentally (for example, by trying different values to determine values that produce acceptable results). In some embodiments, values of defect strength (Abs(dGLV)/T*σ) greater than a threshold value may be considered to be defects.
[0069] In some embodiments, in step 550, the calculated defect strength may be converted to binary values (e.g., 0 or 1). For example, values of defect strength > 1 may be considered to be one binary value (e.g., 1) and values of defect strength < 1 may be considered to be another binary value (e.g., 0). In some embodiments, a binary value of 1 (or white pixels) may be considered to be a defect, and the corresponding pixel locations may be considered to include a defect.
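A minimal sketch of steps 540-550 follows (Python with NumPy/SciPy). The Gaussian denoising choice, the parameter values, and the grouping of the defect-strength expression (here taken as division by the product T*σ, so that a strength above 1 flags a defect) are assumptions for illustration; the disclosure leaves T and σ to empirical tuning.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defect_strength(abs_diff: np.ndarray, T: float, sigma_noise: float,
                    denoise_sigma_px: float = 1.0) -> tuple[np.ndarray, np.ndarray]:
    """Denoise the absolute-difference image, compute defect strength, and binarize it."""
    denoised = gaussian_filter(abs_diff.astype(np.float64), sigma=denoise_sigma_px)  # step 540
    # Step 550: assumed grouping of Abs(dGLV)/T*sigma, i.e. the denoised difference is
    # normalized by a noise-scaled threshold so that strength > 1 marks a candidate defect.
    strength = denoised / (T * sigma_noise)
    binary = (strength > 1.0).astype(np.uint8)  # 1 (white) = candidate defect pixel, 0 (black) = no defect
    return strength, binary
```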
[0070] In step 560, defect properties may be calculated or determined. In general, any defect property (e.g., defect size, defect area, defect shape, defect location, etc.) may be determined. For example, in some embodiments, the defect area (total defect area, area of defects at different locations, etc.) may be calculated as the total number of white pixels (e.g., having a binary value of 1). In some embodiments, an area with a number of contiguous white pixels may be considered as one defect, and the areas of different defects at different locations of the image may be catalogued. In some embodiments, defect size may be determined as the square root of the total number of white pixels. It should be noted that the above-described defect properties, and the method of calculating them, are merely exemplary. In general, any property of the defects that may be obtained from the observed pattern of white and black pixels may be determined using method 500. In some embodiments, an image processing system associated with an imaging tool (e.g., image processing system 199 of Fig. 2B) may perform some or all of the steps of method 500.
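As a sketch of step 560 under the same assumptions (Python, using SciPy's connected-component labeling; not the disclosure's implementation), defect area, size, and location could be derived from the binary map as follows:

```python
import numpy as np
from scipy.ndimage import label

def defect_properties(binary: np.ndarray) -> list[dict]:
    """Treat each group of contiguous white pixels as one defect and catalogue its properties."""
    labeled, n_defects = label(binary)  # connected-component labeling (4-connectivity by default)
    defects = []
    for k in range(1, n_defects + 1):
        ys, xs = np.nonzero(labeled == k)
        area = int(ys.size)                       # defect area = number of white pixels
        defects.append({
            "area_px": area,
            "size_px": float(np.sqrt(area)),      # defect size = square root of the pixel count
            "location": (float(ys.mean()), float(xs.mean())),  # centroid (row, column)
        })
    return defects
```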
[0071] Fig. 6 illustrates an exemplary process of selectively removing high spatial frequency content from an exemplary experimental image (e.g., experimental image of Fig. 4A, experimental image obtained in step 510, etc.) to generate a reference image. In the process illustrated in Fig. 6, frequency domain amplitude thresholding is used to remove high frequency data from the experimental image. In Fig. 6, a Two-Dimensional Fast Fourier Transform (2D FFT) of the experimental image is plotted. The 2D FFT converts the spatial domain data (e.g., pixel intensity values) of the image into the frequency domain and reveals how different frequency components are distributed within the image. The horizontal axes X1 (620) and X2 (640) represent the spatial frequency components of the image: the X1 axis 620 represents the X-direction frequency, and the X2 axis 640 represents the Y-direction frequency in the image. The Y axis 660 represents the amplitude of the signal. Note that in Fig. 6, lower frequency data is plotted close to the center of the X1 and X2 axes 620 and 640 (where frequency approaches zero), and higher frequency data is plotted at the extreme ends of the X1 and X2 axes 620 and 640 (where frequency approaches ±150). Also note that lower frequency data (see, e.g., box 690) located closer to the center of the X1 and X2 axes 620 and 640 has a higher amplitude than data at the extreme ends of the axes.
[0072] In embodiments of the current disclosure, an amplitude threshold 680 is selected such that data below the selected amplitude threshold 680 (e.g., higher frequency data) is not included in the reference image. In other words, when generating a reference image from an experimental image, data below the selected amplitude threshold 680 is not included in (or is excluded from) the reference image. Thus, the generated reference image only includes data having a lower frequency than a cut-off frequency defined by the selected amplitude threshold 680. As explained previously, the experimental image contains both small defects and larger image inhomogeneities (e.g., imaging artifacts); the higher frequency data mainly comprises signal noise and defects, and the lower frequency data mainly comprises image inhomogeneities. Thus, the reference image generated by including only data having a lower frequency than the cut-off frequency defined by the selected amplitude threshold 680 includes only the larger image inhomogeneities in the experimental image.
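For illustration only (Python/NumPy, not the disclosure's implementation, and with the threshold value left as a hypothetical tuning parameter), frequency-domain amplitude thresholding of this kind could be sketched as:

```python
import numpy as np

def reference_from_fft(experimental: np.ndarray, amplitude_threshold: float) -> np.ndarray:
    """Keep only 2D FFT components above the amplitude threshold and reconstruct the reference image."""
    spectrum = np.fft.fft2(experimental.astype(np.float64))
    keep = np.abs(spectrum) >= amplitude_threshold   # high-amplitude (mostly low-frequency) components
    return np.fft.ifft2(spectrum * keep).real        # low-amplitude noise/defect content is excluded
```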
[0073] In embodiments of the current disclosure, by selecting different values of the amplitude threshold 680, the spatial frequency content in the reference image can be changed to detect different types of defects in the sample. For example, when the reference image retains more of the high spatial frequency data (e.g., by decreasing the value of the amplitude threshold 680), shorter length-scale defects in the experimental image can be detected. By reducing the amount of high spatial frequency data in the reference image (e.g., by increasing the value of the amplitude threshold 680), longer length-scale defects can be detected. In other words, by generating multiple reference images by moving the amplitude threshold 680 up and down along the Y-axis 660, different types of defects in the sample may be detected. For example, with reference to Figs. 6 and 7A-7C, a first reference image 710 (see Fig. 7A) may be generated from the experimental image by selecting a first value of the amplitude threshold 680 (in Fig. 6), a second reference image 720 (see Fig. 7B) may be generated by selecting an amplitude threshold 680 of 9, a third reference image 730 (see Fig. 7C) may be generated by selecting an amplitude threshold 680 of 10, etc. And by comparing the generated reference images 710-730 with the experimental image (e.g., using method 500 of Fig. 5), different types of defects in the sample may be detected. Thus, in some embodiments of the current disclosure, the frequency domain amplitude threshold may be selectively chosen to detect different types of defects in the sample.
[0074] For example, an exemplary sample may include a test pattern of parallel lines (or traces) with two types of defects (e.g., bridging defects and line wiggling defects). A first reference image may be obtained by applying a first amplitude threshold (e.g., a higher amplitude threshold) on an FFT plot (e.g., a plot similar to Fig. 6) of the experimental image of the sample. In other words, the first reference image may be generated by not including data (from the experimental image) below the first amplitude threshold in that reference image. Similarly, a second reference image may be generated by disregarding data from the experimental image below a second amplitude threshold (a lower amplitude threshold) in that reference image. Comparing the experimental image with the first reference image (e.g., using method 500) may reveal more of the first type of defect (e.g., the bridging defect) than the second type of defect (e.g., the line wiggling defect), and comparing the experimental image with the second reference image may reveal more of the second type of defect than the first type of defect. Thus, applying different amplitude thresholds to the same experimental image may enable the detection of different types of defects in the sample.
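A hypothetical usage sketch of this two-threshold scheme is shown below (Python/NumPy). The threshold values are illustrative only, and the random array merely stands in for an acquired SEM image:

```python
import numpy as np

def reference_from_fft(image: np.ndarray, amplitude_threshold: float) -> np.ndarray:
    spectrum = np.fft.fft2(image.astype(np.float64))
    return np.fft.ifft2(spectrum * (np.abs(spectrum) >= amplitude_threshold)).real

experimental = np.random.default_rng(0).normal(128.0, 5.0, size=(512, 512))  # stand-in for an acquired image

# First (higher) threshold: per the paragraph above, comparing against this reference
# tends to reveal more of the first defect type (e.g., bridging).
ref_first = reference_from_fft(experimental, amplitude_threshold=5e4)

# Second (lower) threshold: the reference retains more high spatial frequency content,
# and the comparison tends to reveal more of the second defect type (e.g., line wiggling).
ref_second = reference_from_fft(experimental, amplitude_threshold=5e3)

diff_first = np.abs(experimental - ref_first)    # fed into method 500 for the first defect type
diff_second = np.abs(experimental - ref_second)  # fed into method 500 for the second defect type
```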
[0075] In some embodiments, a reference image may be generated by removing data having a spatial frequency higher than a selected frequency from a received image. In some embodiments, a reference image may be generated by removing data associated with low amplitudes. In other words, the reference image may be generated by only retaining data having an amplitude higher than a selected amplitude threshold.
[0076] It should be noted that although Fig. 6 illustrates frequency-domain filtering (FFT filtering) being used to selectively remove higher spatial frequency content (or data) from the experimental image (to generate the reference image), this is only exemplary. Other image processing techniques may also be used. In general, any technique of removing data having a spatial frequency higher than a selected frequency from the experimental image may be used to generate the reference image. For example, in some embodiments, singular value decomposition filtering (SVD filtering) techniques may be used to selectively remove higher spatial frequency content from the experimental image to generate reference images.
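As one possible sketch of the SVD alternative mentioned above (again an assumption-laden illustration, not the disclosure's implementation; the retained rank k is a hypothetical tuning parameter that plays a role loosely analogous to the amplitude threshold):

```python
import numpy as np

def reference_from_svd(experimental: np.ndarray, k: int = 5) -> np.ndarray:
    """Low-rank (rank-k) reconstruction of the image, keeping only its smoothest, largest-scale structure."""
    U, s, Vt = np.linalg.svd(experimental.astype(np.float64), full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # small singular values (fine detail and noise) are discarded
```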
[0077] While some embodiments have been described using a pattern having parallel lines (or traces or trenches), it will be appreciated that the present disclosure can be applied to detect defects in any type of pattern (e.g., including vias, pillars, contact pads, lines with reduced length, staggered lines, lines with various widths, overlay patterns, etc.). In some embodiments, defects in a sample having a test structure with a repeated pattern of structures may be detected using the disclosed technique. For example, Fig. 8A illustrates a sample having a repeated pattern of holes. By selectively removing high spatial frequency content (of an experimental image of the sample) to generate a reference image, defects in the pattern of holes in the sample may be detected. By selecting different amplitude thresholds, different types of defects (see Fig. 8B) may be detected. For example, by selecting a first amplitude threshold, a first defect 810 (e.g., a different sized hole) may be detected, and by selecting a different amplitude threshold, a second defect 820 (e.g., a filled hole) may be detected.
[0078] In some embodiments of the current disclosure, a non-transitory computer readable medium may be provided that stores instructions for a processor of a controller (e.g., controller 109 of Fig. 1) to carry out, among other things, image inspection, image acquisition, image processing, stage positioning, beam focusing, electric field adjustment, beam bending, condenser lens adjusting, activating a charged-particle source, beam deflecting, and the functionalities described above, including method 500 of Fig. 5. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a Compact Disc Read Only Memory (CD-ROM), any other optical data storage medium, any physical medium with patterns of holes, a Random Access Memory (RAM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), a FLASH-EPROM or any other flash memory, Non-Volatile Random Access Memory (NVRAM), a cache, a register, any other memory chip or cartridge, and networked versions of the same. In some embodiments, a charged particle beam system (e.g., system 100 of Fig. 1, tools 104A, 104B of Figs. 2A-2B, etc.) or an image inspection system may include a controller (e.g., controller 109 of Fig. 1) that carries out, among other things, image inspection, image acquisition, image processing, stage positioning, beam focusing, electric field adjustment, beam bending, condenser lens adjusting, activating a charged-particle source, beam deflecting, and the functionalities described above, including method 500 of Fig. 5.
[0079] The embodiments may further be described using the following clauses:
1. A non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a computing device to cause the computing device to perform operations for inspecting an image, the operations comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
2. The non-transitory computer readable medium of clause 1, wherein the reference image is generated by removing data having a spatial frequency higher than a selected frequency from the received image.
3. The non-transitory computer readable medium of clause 1, wherein the reference image is generated by applying one of frequency-domain filtering or singular value decomposition filtering on the received image to remove data having a spatial frequency higher than a selected frequency from the received image.
4. The non-transitory computer readable medium of clause 1, wherein comparing the received image includes determining a magnitude of a difference in gray level values between the received image and the reference image at multiple locations in the received image.
5. The non-transitory computer readable medium of clause 4, wherein comparing the received image includes determining the magnitude of the difference in gray level values at each pixel location in the received image.
6. The non-transitory computer readable medium of any of clauses 4 to 5, wherein comparing the received image further includes identifying the one or more defects based on the determined magnitude of the difference in gray level values at the multiple locations.
7. The non-transitory computer readable medium of clause 6, further comprising calculating at least one property of the one or more defects based on the determined magnitude of the difference.
8. The non-transitory computer readable medium of any of clauses 4 to 5, wherein comparing the received image further includes converting the determined magnitude of the difference in gray level values at the multiple locations into a binary format and identifying the one or more defects based on the binary format.
9. The non-transitory computer readable medium of any of clauses 4 to 5, wherein comparing the received image further includes denoising the determined magnitude of the difference in gray level values at the multiple locations.
10. The non-transitory computer readable medium of clause 9, wherein comparing the received image further includes calculating a defect strength based on the denoised magnitude of the difference in gray level values.
11. The non-transitory computer readable medium of clause 1, wherein the received image includes a representation of a repeated pattern of structures.
12. The non-transitory computer readable medium of clause 11, wherein the repeated pattern of structures includes a repeated pattern of one or more of lines, traces, trenches, vias, pillars, contact pads, and holes.
13. The non-transitory computer readable medium of clause 1, wherein the received image is a scanning electron microscope image.
14. The non-transitory computer readable medium of clause 1, wherein generating a reference image includes generating multiple reference images, and wherein each reference image of the multiple reference images is generated by removing data having a spatial frequency higher than a different frequency from the received image.
15. The non-transitory computer readable medium of clause 14, wherein the multiple reference images includes a first reference image and a second reference image, and wherein comparing the received image with the reference image includes comparing the received image with the first reference image to identify a first type of defect and comparing the received image with the second reference image to identify a second type of defect different from the first type of defect.
16. A method of inspecting an image, the method comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
17. The method of clause 16, wherein the reference image is generated by removing data having a spatial frequency higher than a selected frequency from the received image.
18. The method of clause 16, wherein the reference image is generated by applying one of frequency-domain filtering or singular value decomposition filtering on the received image to remove data having a spatial frequency higher than a selected frequency from the received image.
19. The method of clause 16, wherein comparing the received image includes determining a magnitude of a difference in gray level values between the received image and the reference image at multiple locations in the received image.
20. The method of clause 19, wherein comparing the received image includes determining the magnitude of the difference in gray level values at each pixel location in the received image.
21. The method of any of clauses 19 to 20, wherein comparing the received image further includes identifying the one or more defects based on the determined magnitude of the difference in gray level values at the multiple locations.
22. The method of clause 21, further comprising calculating at least one property of the one or more defects based on the determined magnitude of the difference.
23. The method of any of clauses 19 to 20, wherein comparing the received image further includes converting the determined magnitude of the difference in gray level values at the multiple locations into a binary format and identifying the one or more defects based on the binary format.
24. The method of any of clauses 19 to 20, wherein comparing the received image further includes denoising the determined magnitude of the difference in gray level values at the multiple locations.
25. The method of clause 24, wherein comparing the received image further includes calculating a defect strength based on the denoised magnitude of the difference in gray level values.
26. The method of clause 16, wherein the received image includes a representation of a repeated pattern of structures.
27. The method of clause 26, wherein the repeated pattern of structures includes a repeated pattern of one or more of lines, traces, trenches, vias, pillars, contact pads, and holes.
28. The method of clause 16, wherein the received image is a scanning electron microscope image.
29. The method of clause 16, wherein generating a reference image includes generating multiple reference images, and wherein each reference image of the multiple reference images is generated by removing data having a spatial frequency higher than a different frequency from the received image.
30. The method of clause 29, wherein the multiple reference images includes a first reference image and a second reference image, and wherein comparing the received image with the reference image includes comparing the received image with the first reference image to identify a first type of defect and comparing the received image with the second reference image to identify a second type of defect different from the first type of defect.
31. An inspection system, comprising: a controller having one or more processors and a memory, the controller including circuitry to cause the one or more processors to perform operations for inspecting an image, the operations comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
32. The inspection system of clause 31, wherein the reference image is generated by removing data having a spatial frequency higher than a selected frequency from the received image.
33. The inspection system of clause 31, wherein the reference image is generated by applying one of frequency-domain filtering or singular value decomposition filtering on the received image to remove data having a spatial frequency higher than a selected frequency from the received image.
34. The inspection system of clause 31, wherein comparing the received image includes determining a magnitude of a difference in gray level values between the received image and the reference image at multiple locations in the received image.
35. The inspection system of clause 34, wherein comparing the received image includes determining the magnitude of the difference in gray level values at each pixel location in the received image.
36. The inspection system of any of clauses 34 to 35, wherein comparing the received image further includes identifying the one or more defects based on the determined magnitude of the difference in gray level values at the multiple locations.
37. The inspection system of clause 36, further comprising calculating at least one property of the one or more defects based on the determined magnitude of the difference.
38. The inspection system of any of clauses 34 to 35, wherein comparing the received image further includes converting the determined magnitude of the difference in gray level values at the multiple locations into a binary format and identifying the one or more defects based on the binary format.
39. The inspection system of any of clauses 34 to 35, wherein comparing the received image further includes denoising the determined magnitude of the difference in gray level values at the multiple locations.
40. The inspection system of clause 39, wherein comparing the received image with the reference image further includes calculating a defect strength based on the denoised magnitude of the difference in gray level values.
41. The inspection system of clause 31, wherein the received image includes a representation of a repeated pattern of structures.
42. The inspection system of clause 41, wherein the repeated pattern of structures includes a repeated pattern of one or more of lines, traces, trenches, vias, pillars, contact pads, and holes.
43. The inspection system of clause 31, wherein the received image is a scanning electron microscope image.
44. The inspection system of clause 31, wherein generating a reference image includes generating multiple reference images, and wherein each reference image of the multiple reference images is generated by removing data having a spatial frequency higher than a different frequency from the received image.
45. The inspection system of clause 44, wherein the multiple reference images includes a first reference image and a second reference image, and wherein comparing the received image with the reference image includes comparing the received image with the first reference image to identify a first type of defect and comparing the received image with the second reference image to identify a second type of defect different from the first type of defect.
[0080] Block diagrams in the figures may illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer hardware or software products according to various exemplary embodiments of the present disclosure. In this regard, each block may represent one or more arithmetical or logical operations that may be implemented using hardware such as an electronic circuit. Blocks may also represent modules, segments, or portions of code that comprise one or more executable instructions for implementing the specified logical functions. It should be understood that in some alternative implementations, functions indicated in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed or implemented substantially concurrently, or two blocks may sometimes be executed in reverse order, depending upon the functionality involved. Some blocks may also be omitted. It should also be understood that each block of the block diagrams, and combinations of the blocks, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
[0081] It will be appreciated that the embodiments of the present disclosure are not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. While the present disclosure has been described in connection with various embodiments, other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims

1. A non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a computing device to cause the computing device to perform operations for inspecting an image, the operations comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
2. The non-transitory computer readable medium of claim 1, wherein the reference image is generated by removing data having a spatial frequency higher than a selected frequency from the received image.
3. The non-transitory computer readable medium of claim 1, wherein the reference image is generated by applying one of frequency-domain filtering or singular value decomposition filtering on the received image to remove data having a spatial frequency higher than a selected frequency from the received image.
4. The non-transitory computer readable medium of claim 1, wherein comparing the received image includes determining a magnitude of a difference in gray level values between the received image and the reference image at multiple locations in the received image.
5. The non-transitory computer readable medium of claim 4, wherein comparing the received image includes determining the magnitude of the difference in gray level values at each pixel location in the received image.
6. The non-transitory computer readable medium of claim 4, wherein comparing the received image further includes identifying the one or more defects based on the determined magnitude of the difference in gray level values at the multiple locations.
7. The non-transitory computer readable medium of claim 6, further comprising calculating at least one property of the one or more defects based on the determined magnitude of the difference.
8. The non-transitory computer readable medium of claim 4, wherein comparing the received image further includes converting the determined magnitude of the difference in gray level values at the multiple locations into a binary format and identifying the one or more defects based on the binary format.
9. The non-transitory computer readable medium of claim 4, wherein comparing the received image further includes denoising the determined magnitude of the difference in gray level values at the multiple locations.
10. The non-transitory computer readable medium of claim 9, wherein comparing the received image further includes calculating a defect strength based on the denoised magnitude of the difference in gray level values.
11. The non-transitory computer readable medium of claim 1, wherein the received image includes a representation of a repeated pattern of structures.
12. The non-transitory computer readable medium of claim 1, wherein generating a reference image includes generating multiple reference images, and wherein each reference image of the multiple reference images is generated by removing data having a spatial frequency higher than a different frequency from the received image.
13. The non-transitory computer readable medium of claim 12, wherein the multiple reference images includes a first reference image and a second reference image, and wherein comparing the received image with the reference image includes comparing the received image with the first reference image to identify a first type of defect and comparing the received image with the second reference image to identify a second type of defect different from the first type of defect.
14. A method of inspecting an image, the method comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.
15. An inspection system, comprising: a controller having one or more processors and a memory, the controller including circuitry to cause the one or more processors to perform operations for inspecting an image, the operations comprising: receiving the image; generating a reference image based on the received image using a filtering technique associated with the received image; and comparing the received image or a representation of the received image with the reference image or a representation of the reference image to identify one or more defects in the received image or confirm that the received image is free of defects.