
WO2026018408A1 - Image processing system, image processing method, and display device - Google Patents

Image processing system, image processing method, and display device

Info

Publication number
WO2026018408A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
contour
contour information
image processing
processing system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/025924
Other languages
French (fr)
Japanese (ja)
Inventor
憶土 池田
淳 澤田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi High Tech Corp
Original Assignee
Hitachi High Tech Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi High Tech Corp filed Critical Hitachi High Tech Corp
Priority to PCT/JP2024/025924
Publication of WO2026018408A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 15/00 — Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
    • G01B 15/04 — Measuring arrangements characterised by the use of electromagnetic waves or particle radiation for measuring contours or curvatures

Definitions

  • This disclosure relates to technologies such as image processing systems, image processing methods, and display devices.
  • To accommodate a wide variety of images, algorithms that extract contours from observed images control their operation through the numerical values of variables called parameters. One typical parameter is the threshold.
  • Contour extraction algorithms often include a process for selecting, from among the pixels in the observed image that show large changes in brightness, the pixels that will become part of the contour. The threshold is used in this selection process. Even when extracting contours from the same image, changing the threshold value changes the extracted contour.
  • Therefore, the parameters included in the algorithm, including the threshold, must be set to optimal values.
  • Observation images are acquired by devices such as transmission electron microscopes (TEM), scanning transmission electron microscopes (STEM), and scanning electron microscopes (SEM), and are used for shape analysis such as length and angle measurements.
  • Technologies for the automatic optimization of contour extraction algorithms have been proposed, such as those in Patent Documents 1 and 2.
  • However, the method described in Patent Document 1 is applicable only when the manufacturing process of the sample is known. It is therefore difficult to apply to naturally occurring samples, such as biological samples, which have no manufacturing process in the first place.
  • The method described in Patent Document 2 automatically determines parameter values that minimize the influence of noise in the observed image on contour extraction, i.e., the roughness of the contour, but it does not guarantee extraction of the contours the user envisions, which may include rough contours.
  • The purpose of this disclosure is therefore to provide technology that can obtain parameter values yielding contours closer to the contours the user envisions and expects.
  • An image processing system in one embodiment processes an observation image of a sample obtained by a charged particle beam device.
  • The image processing system includes: an image display unit that displays the observation image; a contour extraction unit that extracts first contour information relating to the structural boundaries of the sample from the observation image based on a first setting value of a preset parameter; a first contour information display unit that displays the first contour information extracted by the contour extraction unit on the observation image; a second contour information receiving unit that receives input of second contour information relating to boundaries visible in the observation image; and a parameter value acquisition unit that acquires a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observation image.
  • An image processing method is executed by an image processing system that processes an observation image of a sample obtained by a charged particle beam device, wherein the image processing system displays the observation image; extracts first contour information relating to the structural boundaries of the sample from the observation image based on a first setting value of a preset parameter; displays the extracted first contour information on the observation image; accepts input of second contour information relating to boundaries visible in the observation image; and obtains a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observation image.
  • The display device is used in an image processing system that processes observation images of a sample obtained by a charged particle beam device.
  • The display device displays, on the observation image, first contour information relating to the structural boundaries of the sample extracted from the observation image based on a first setting value of a preset parameter; accepts input of second contour information relating to boundaries visible in the observation image; acquires a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observation image; and displays, on the observation image, third contour information extracted based on the acquired second setting value.
  • A representative embodiment of the present disclosure provides technology for acquiring parameter values that extract a contour line closer to the contour line the user envisions and expects. Issues, configurations, effects, etc. other than those described above are explained in the detailed description of the invention.
  • FIG. 1 is a diagram showing an example of the configuration of a charged particle beam apparatus according to the first embodiment.
  • FIG. 2 is a diagram showing an example of functional blocks of a controller according to the first embodiment.
  • FIG. 3 is a flowchart showing the flow of processing of the functional blocks according to the first embodiment.
  • FIG. 4 is a diagram showing an example of a cross-sectional image of a sample according to the first embodiment.
  • FIG. 5 is a diagram showing an example of a user interface according to the first embodiment.
  • FIG. 6 is a diagram showing an example of a reference line according to the first embodiment.
  • FIG. 7 is a diagram showing an example of a line profile and a contour line according to the first embodiment.
  • FIG. 8 is a diagram for explaining an example of acquiring a horizontal line profile according to the first embodiment.
  • FIG. 9 is a diagram showing an example of a line profile in the horizontal direction according to the first embodiment.
  • FIG. 10 is a diagram showing an example of a line profile in the radial direction according to the first embodiment.
  • FIG. 11 is a diagram showing an example of an initial contour calculated using the initial setting values of two parameters according to the first embodiment.
  • FIG. 12 is a diagram showing an example of the range in which movement of a contour point is restricted according to the first embodiment.
  • FIG. 13 is a diagram showing an example of a correct contour line according to the first embodiment.
  • FIG. 14 is a diagram showing an example of a corrected contour line corrected based on the correct contour line according to the first embodiment.
  • FIG. 15 is a diagram showing an example of a display during execution of an optimization algorithm according to the first embodiment.
  • FIG. 16 is a diagram showing another example of a display during execution of the optimization algorithm according to the first embodiment.
  • FIG. 17 is a diagram showing an example of a cross-sectional image of a sample according to the second embodiment.
  • FIG. 18 is a diagram showing an example of contour points detected by a contour point detection algorithm according to the second embodiment.
  • FIG. 19 is a diagram showing an example of a display of corrected contour points according to the second embodiment.
  • FIG. 20 is a diagram showing an example of a display on a display unit during execution of an optimization algorithm according to the second embodiment.
  • FIG. 21 is a diagram showing an example of a top-view image according to the second embodiment.
  • FIG. 22 is a diagram showing an example of initial contour points and correct contour points extracted from a top-view image according to the second embodiment.
  • FIG. 23 is a diagram showing an example of a corrected contour point that is closest to a correct contour point according to the second embodiment.
  • FIG. 24 is a diagram showing an example of a particle sample image obtained when observing and analyzing a particulate sample according to the third embodiment.
  • FIG. 25 is a diagram showing an example of the particle sample image after separation processing according to the third embodiment.
  • FIG. 26 is a diagram showing an example of the particle sample image after noise removal processing according to the third embodiment.
  • FIG. 27 is a diagram showing an example of the particle sample image after extraction processing of a "reliable background region" according to the third embodiment.
  • FIG. 28 is a diagram showing an example of the particle sample image after extraction processing of a "reliable particle region" according to the third embodiment.
  • FIG. 29 is a diagram showing an example of the particle sample image after image processing that does not require parameter value setting according to the third embodiment.
  • FIG. 30 is a diagram showing an example of a particle sample image according to the third embodiment.
  • FIG. 31 is a diagram showing another example of a particle sample image according to the third embodiment.
  • FIG. 32 is a diagram showing an example of a display on a display unit during execution of an optimization algorithm according to the third embodiment.
  • In the following description, a program, function, processing unit, etc. may be described as the subject of an operation, but the hardware entity behind these is a processor, or a controller, device, computer system, or system comprising such a processor.
  • A computer system uses the processor to execute processing in accordance with a program read into memory, appropriately using resources such as memory and communication interfaces, thereby realizing the specified functions, processing units, etc.
  • A processor is made up of semiconductor devices such as a CPU/MPU or GPU, for example. Processing is not limited to software program processing and can also be implemented with dedicated circuits such as FPGAs, ASICs, and CPLDs.
  • The program may be pre-installed as data on the target computer system, or may be distributed as data to the target computer system from a program source.
  • The program source may be a program distribution server on a communications network, or a non-transitory computer-readable storage medium such as a memory card or disk.
  • The program may be composed of multiple modules.
  • The computer system may be composed of multiple devices.
  • The computer system may be a client-server system, a cloud computing system, an IoT system, etc.
  • Various data and information may be structured as tables, lists, and the like, for example, but are not limited to these. Expressions such as identification information, identifier, ID, name, and number are interchangeable.
  • A well-known method of observing samples such as semiconductor devices with a charged particle beam device is so-called top-view observation and measurement, in which the wafer surface is viewed from above using a scanning electron microscope such as a critical dimension scanning electron microscope (CD-SEM).
  • Semiconductor devices are three-dimensional structures, however, so observation of the wafer from the side, i.e., cross-sectional observation and measurement, also plays an important role in device development and quality assurance. For this reason, the embodiments mainly focus on image processing techniques for observing and measuring the cross sections of semiconductor devices.
  • FIG. 1 is a diagram showing an example of the configuration of a charged particle beam device (hereinafter referred to as "SEM") 100.
  • The SEM 100 roughly comprises an SEM main body 101 and a controller 102 connected to the main body 101.
  • The main body 101 comprises an electron optical column (hereinafter referred to as the "column") and a sample chamber provided below the column.
  • The controller 102 is a system that controls imaging by the main body 101, etc., and comprises an overall control unit 120, a signal processing unit 121, a storage unit 122, a communication interface 123, etc.
  • An input device 124 and an output device 125 are externally connected to the controller 102.
  • The column of the main body 101 comprises, as its components, an electron gun 111, a focusing lens 112, a deflection lens 113, and an objective lens 114.
  • The electron gun 111 emits an electron beam b1, which is a beam of charged particles. The focusing lens 112 focuses the electron beam b1, the deflection lens 113 deflects its trajectory, and the objective lens 114 controls its focusing height.
  • The sample chamber is a room in which samples such as wafers and coupons (broken pieces of wafers) are stored, and comprises a stage 115 and a detector 116. A sample 130 is placed on the stage 115.
  • The stage 115 is a sample stage on which the target sample 130, a semiconductor device, is placed. The stage 115 can move not only in the X and Y directions but also in the Z direction, and can rotate about the X, Y, or Z axis. As a result, the stage 115 can move the captured field of view horizontally and vertically relative to the front-on image, or rotate it within the field of view, allowing the field of view for imaging to be set.
  • The detector 116 detects particles b2, such as secondary electrons and backscattered electrons, generated from the sample 130 irradiated with the electron beam b1, and outputs a detection signal as an electrical signal.
  • The overall control unit 120 controls the operation of the controller 102 and the main body 101, issuing instructions such as drive control to each unit. Each unit, including the overall control unit 120, can be implemented as a computer or dedicated circuit.
  • The signal processing unit 121 receives the detection signal from the detector 116, performs processing such as analog-to-digital conversion to generate an image signal, and stores it as image data in the storage unit 122. The storage unit 122 can be implemented as a non-volatile storage device, etc.
  • The overall control unit 120 also stores imaging information associated with the image in the storage unit 122.
  • The communication interface 123 is a device that provides a communication interface to a communication network (not shown) or an external computer (not shown).
  • The overall control unit 120 outputs data such as images and imaging information stored in the storage unit 122 to the output device 125 in response to a request from the input device 124, for example. The overall control unit 120 may also receive requests from an external computer via the communication interface 123 and, in response, transmit such data to the external computer.
  • FIG. 2 is a diagram showing an example of functional blocks of the controller 102.
  • The controller 102 includes an image display unit 1021, a contour extraction unit 1022, a first contour information display unit 1023, a second contour information receiving unit 1024, a parameter value acquisition unit 1025, and a third contour information display unit 1026.
  • The overall control unit 120 reads a program stored in the storage unit 122, for example, and executes the read program, thereby causing the controller 102 to realize the functions of these units.
  • The controller 102 thus operates as an image processing system that performs image processing of contours in an observation image of the sample 130 obtained by the SEM 100.
  • In the following, the controller 102 is described as being connected to the SEM 100, but the present invention is not limited to this. The controller 102 may be configured independently of the SEM; for example, it may have an image reading function for reading an image acquired by an external SEM or the like, and may implement the functions shown in FIG. 2 for the image read by this function.
  • The image display unit 1021 displays an observation image of the sample captured by the charged particle beam device. For example, it displays a sample cross-sectional image G10 (described below) of the sample 130 captured by the SEM 100.
  • The contour extraction unit 1022 extracts first contour information relating to the structural boundaries of the sample from the observation image based on the first setting values of preset parameters. The contour extraction unit 1022 holds setting values for multiple parameters and extracts the first contour information from the observation image based on these setting values.
  • For example, the contour extraction unit 1022 extracts an initial contour line 20 (first contour information) relating to the structural boundaries of the sample 130 from the sample cross-sectional image G10 based on the initial setting values (first setting values) of preset parameters such as smooth and threshold.
  • The first contour information display unit 1023 displays the first contour information extracted by the contour extraction unit 1022 on the observation image. For example, as described below, it displays the initial contour line 20, extracted using parameter values such as the initial settings of smooth and threshold, on the sample cross-sectional image G10 displayed by the image display unit 1021.
  • The second contour information receiving unit 1024 receives input of second contour information relating to boundaries visible in the observation image. For example, as described below, it receives input of the correct contour line 21 relating to the boundary of the pillar 11 visible in the sample cross-sectional image G10 (described below) displayed by the image display unit 1021.
  • The parameter value acquisition unit 1025 acquires a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observed image. For example, as described below, it acquires parameter setting values (second setting values) different from the initial parameter setting values so as to minimize the difference between the initial contour line 20 and the correct contour line 21.
  • The third contour information display unit 1026 displays, on the observed image, the third contour information extracted by the contour extraction unit based on the second setting value acquired by the parameter value acquisition unit 1025. For example, as described below, it displays the corrected contour line 22 extracted by the contour extraction unit 1022 based on the smooth and threshold setting values acquired by the parameter value acquisition unit 1025.
  • FIG. 3 is a flowchart showing the flow of processing by the functional blocks.
  • The image processing of this embodiment executes the following steps in order:
    • Step ST110: image display processing by the image display unit 1021
    • Step ST120: contour extraction processing by the contour extraction unit 1022
    • Step ST130: first contour information display processing by the first contour information display unit 1023
    • Step ST140: second contour information reception processing by the second contour information receiving unit 1024
    • Step ST150: parameter value acquisition processing by the parameter value acquisition unit 1025
    • Step ST160: third contour information display processing by the third contour information display unit 1026
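The step sequence ST110 to ST160 can be sketched as a plain function pipeline. This is a minimal, hypothetical sketch: the callables `display`, `extract`, `accept_input`, and `optimize` are illustrative stand-ins for the functional units 1021-1026, not the patent's actual implementation.

```python
def run_image_processing(display, extract, accept_input, optimize,
                         image, initial_params):
    display(image)                                    # ST110: show observation image
    first = extract(image, initial_params)            # ST120: first contour information
    display(image, overlay=first)                     # ST130: show first contour
    correct = accept_input()                          # ST140: second contour information (user input)
    second_params = optimize(image, first, correct)   # ST150: acquire second setting value
    third = extract(image, second_params)             # re-extract with the new setting value
    display(image, overlay=third)                     # ST160: show third contour
    return second_params, third
```

Each unit is injected as a callable, so the same orchestration can be exercised with stubs or wired to real display and extraction code.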
  • The processing executed by the controller 102, which is an image processing system, is described in more detail below.
  • First, the image display unit 1021 operates the SEM main body 101 to observe the cross section of the sample (semiconductor device) 130 and acquires a sample cross-sectional image of sufficient image quality for the purpose of analysis.
  • Figure 4 is a diagram showing an example of the sample cross-sectional image G10.
  • In this embodiment, the sample cross-sectional image (observation image) G10 is acquired by the SEM 100, but it may also be acquired by a TEM or STEM.
  • The sample cross-sectional image G10 includes a protruding structure (hereinafter referred to as a "pillar") 11 and a valley structure (hereinafter referred to as a "trench") 12 between two pillars.
  • In the figure, a white area is provided as the background for the added symbols and lead lines; this white area behind the symbols and lead lines is not part of the image. The same applies to the other figures described below.
  • Regarding the image file format of the sample cross-sectional image G10, the high-frequency components contained in the image data are important for the contour extraction process of the contour extraction unit 1022. It is therefore desirable for the image file format to be an uncompressed format such as Tagged Image File Format (TIFF) or Microsoft Windows Bitmap Image (BMP).
  • The image display unit 1021 loads the sample cross-sectional image G10 shown in Figure 4 into the shape analysis software. The shape analysis software has a user interface (UI) for loading the image format output by the SEM 100.
  • Figure 5 is a diagram showing an example of the user interface.
  • In this embodiment, a graphical user interface (GUI; display device) 1251 is used as the user interface.
  • The GUI 1251 has an import image button 1251A, a detect button 1251B, an edit button 1251C, and a setting adjustment button 1251D.
  • Import image button 1251A is a button that instructs the shape analysis software to load the sample cross-sectional image G10 and display the observed image.
  • Detect button 1251B is a button that instructs the software to extract structural units, in other words, the contour lines of structural boundaries, from the sample cross-sectional image G10.
  • Edit button 1251C is a button that instructs, for example, the second contour information receiving unit 1024 to accept editing of the contour lines.
  • The setting adjustment button 1251D is a button that instructs, for example, the parameter value acquisition unit 1025 to adjust the parameter setting values.
  • The GUI 1251 also has a display unit 1251E. The display unit 1251E displays, for example, the sample cross-sectional image G10 loaded into the shape analysis software under the control of the image display unit 1021.
  • The analysis of the sample cross-sectional image G10 is performed using the pillars 11 and trenches 12 shown in Figure 4 as structural units (hereinafter referred to as "unit structures").
  • Next, the contour extraction unit 1022 performs the contour extraction process. The contour extraction unit 1022 first sets a reference line in the horizontal direction of the image for the sample cross-sectional image G10 shown in Figure 4.
  • Figure 6 is a diagram showing an example of the reference line 13.
  • The contour extraction unit 1022 extracts the contour lines of the unit structures in the upper portion 14 above the reference line 13 for the pillars 11, and in the lower portion 15 below the reference line 13 for the trenches 12.
  • Next, the contour extraction unit 1022 sets reference points 16 near the tips of the pillars 11 or trenches 12 whose contours are to be extracted. Any number of reference points 16 can be set. By setting the reference line 13 and the reference points 16, the contour extraction unit 1022 can appropriately set the line profiles 17 and 18 described later, and can thus appropriately extract the contours of structures such as the pillars 11 and trenches 12 from the sample cross-sectional image G10.
  • After setting the reference line 13 and the reference points 16, the contour extraction unit 1022 acquires line profiles of pixel values over a certain width: along the horizontal direction of the image below the reference point 16, and along the radial direction around the reference point 16 above it.
  • Figure 7 shows an example of a line profile and a contour line.
  • Figure 7 shows a line profile 17 in the horizontal direction of the image and a line profile 18 in the radial direction.
  • For example, the contour extraction unit 1022 acquires horizontal line profiles 17 in one-pixel increments below the reference point 16, and radial line profiles 18 in one-degree increments above the reference point 16.
  • A contour line is formed by multiple contour points 19.
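The two kinds of line profiles can be sketched as follows. This is a hypothetical illustration: `horizontal_profile` reads one pixel row (one profile per pixel below the reference point), and `radial_profile` samples along a ray from the reference point (one profile per degree above it). The function names and the nearest-neighbour ray sampling are assumptions, not the patent's implementation.

```python
import math

def horizontal_profile(image, y):
    # One horizontal line profile per pixel row (used below the reference point).
    return list(image[y])

def radial_profile(image, ref, angle_deg, length):
    # One radial line profile per degree (used above the reference point),
    # sampled with nearest-neighbour rounding along a ray from `ref`.
    x0, y0 = ref
    a = math.radians(angle_deg)
    values = []
    for r in range(length):
        x = int(round(x0 + r * math.cos(a)))
        y = int(round(y0 - r * math.sin(a)))  # the image y axis points downward
        if 0 <= y < len(image) and 0 <= x < len(image[0]):
            values.append(image[y][x])
    return values
```

For example, `radial_profile(img, ref, 90, n)` samples straight up from the reference point, while `radial_profile(img, ref, 0, n)` samples to its right.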
  • Figure 8 is a diagram illustrating an example of acquiring a horizontal line profile 17.
  • the (x, y) coordinate system in Figure 8 corresponds to the pixel coordinates of the sample cross-sectional image G10.
  • the pixel value at pixel coordinates (x, y) is represented as I(x, y).
  • To obtain a profile value, the contour extraction unit 1022 takes the average of the pixel values over a range of width 2ys pixels in the direction perpendicular to the horizontal line segment, and uses this average as the profile value.
  • Through this averaging, the contour extraction unit 1022 can reduce noise in, for example, the sample cross-sectional image G10.
  • Next, the contour extraction unit 1022 acquires the maximum value IP_MAX and minimum value IP_MIN of the profile values and, taking the minimum as 0% and the maximum as 100%, detects as an edge point the point having a profile value equivalent to T% (0 < T < 100).
  • That is, the profile value IP(x_e) at the edge point x_e satisfies the following formula: IP(x_e) = IP_MIN + (T / 100) × (IP_MAX − IP_MIN)
  • The value T used to detect the edge point x_e must also be set as a parameter; this parameter is hereinafter referred to as the "threshold."
  • The threshold determines the criterion for distinguishing between image regions. Using it, the contour extraction unit 1022 can distinguish, for example, the pillar 11 region from the region other than the pillar 11 in the sample cross-sectional image G10.
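The smoothing and threshold steps described above can be sketched as follows. The names `profile_value` and `detect_edge` are hypothetical, and interpreting ys (the averaging half-width) as the value of the "smooth" parameter is an assumption based on the two parameters the embodiment adjusts.

```python
def profile_value(image, y, x, smooth):
    # Average of pixel values over a window of half-width `smooth` (ys) pixels
    # perpendicular to the horizontal line segment; clipped at image borders.
    rows = range(max(0, y - smooth), min(len(image), y + smooth + 1))
    return sum(image[r][x] for r in rows) / len(rows)

def detect_edge(profile, threshold):
    # First position whose profile value reaches T% between the profile's
    # minimum (0%) and maximum (100%):
    #   IP(x_e) = IP_MIN + (T / 100) * (IP_MAX - IP_MIN)
    ip_min, ip_max = min(profile), max(profile)
    level = ip_min + threshold / 100.0 * (ip_max - ip_min)
    for x, value in enumerate(profile):
        if value >= level:
            return x
    return None
```

Raising `threshold` moves the detected edge point toward brighter pixels; raising `smooth` averages over more rows and suppresses noise at the cost of blurring fine detail.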
  • In this way, the contour extraction unit 1022 obtains contour points 19 for each of the line profiles 17, 18 above and below the reference point 16.
  • The first contour information display unit 1023 can display the group of contour points 19 obtained in this way, as shown in the previously described Figure 7, in which multiple contour points 19 roughly follow the shape of the pillar 11.
  • In this embodiment, the two parameters used in the contour extraction process (smooth and threshold) are adjusted to optimal values by applying the technology of this embodiment.
  • The shape analysis software has default setting values for smooth and threshold. The contour extraction unit 1022 therefore first executes the contour extraction process using the default setting values of the two parameters and extracts an initial contour line (first contour information) formed by multiple contour points 19.
  • Next, the user uses the mouse to fit the initial contour line 20 to the shape of the pillar 11, thereby creating the correct contour line 21 (second contour information) shown in Figure 13, which will be described later.
  • The shaping of the initial contour line 20 is achieved, for example, by moving each of the contour points 19 that make up the initial contour line 20. In this embodiment, the movement of each contour point 19 is restricted to a certain range.
  • Figure 12 is a diagram showing an example of the range in which the movement of a contour point is restricted.
  • The display in Figure 12 appears, for example, when the edit button 1251C is pressed and a contour point 19 is selected.
  • The double-headed arrows AW1 and AW2 indicate the ranges in which movement of a contour point 19 is restricted: AW1 indicates the range in which a contour point 19 obtained from a horizontal line profile 17 can be moved, and AW2 indicates the range for a contour point 19 obtained from a radial line profile 18.
  • The double-headed arrow AW1 extends horizontally, centered on the contour point 191, which is one of the multiple contour points 19 located below the reference point 16. One end of the double-headed arrow AW1 extends a distance L1 toward the line extending vertically from the reference point 16, and the other end extends a distance L1 in the opposite direction. The contour point 191, which lies below and to the left of the reference point 16, cannot move up or down, and cannot be moved horizontally from the position shown in Figure 12 past the reference point 16 to its right side.
  • the double-headed arrow AW2 extends in the radial direction from the reference point 16, centered on the contour point 192.
  • Contour point 192 is one of multiple contour points 19 located above the reference point 16.
  • One end of the double-headed arrow AW2 extends a distance L2 to the reference point 16.
  • the other end of the double-headed arrow AW2 extends a distance L2 from the reference point 16 on the opposite side to the one end.
  • Contour point 192, located above the reference point 16, can only move along the straight line through the reference point 16 and the position of contour point 192 shown in Figure 12; contour point 192 cannot be moved below the reference point 16.
  • the double arrows AW1, AW2 and portions NG1, NG2 allow the user to visually confirm the range in which contour points 191, 192 can be moved. Furthermore, by limiting the range in which contour points 191, 192 can be moved, it is possible to prevent the user from inputting the wrong contour points 191, 192.
  • inputting the wrong contour points 191, 192 means, for example, moving contour points 191, 192 to positions that are clearly unrelated to the contour of a structure such as pillar 11. Because it is possible to prevent inputting such wrong movement positions, the contour extraction unit 1022 can properly execute the contour extraction process.
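The movement restriction described above can be sketched in code. The following is a minimal, hypothetical illustration (function and variable names are not from the embodiment): a contour point obtained from a horizontal line profile may only move horizontally, within a distance L1 of its current position, and may not cross to the far side of the reference point 16.

```python
def clamp_horizontal(point, reference, max_dist):
    """Build a move function for a contour point from a horizontal line
    profile: motion is horizontal only, limited to max_dist (L1) on each
    side, and may not cross the vertical line through the reference point."""
    x, y = point
    rx, _ = reference
    lo, hi = x - max_dist, x + max_dist
    if x < rx:          # point is left of the reference: it must stay left
        hi = min(hi, rx)
    else:               # point is right of the reference: it must stay right
        lo = max(lo, rx)

    def move(new_x, new_y):
        # new_y is ignored: a horizontal-profile point may only move horizontally
        return (min(max(new_x, lo), hi), y)

    return move

move = clamp_horizontal(point=(40.0, 80.0), reference=(60.0, 50.0), max_dist=10.0)
print(move(100.0, 75.0))  # (50.0, 80.0): clamped at the reference point's vertical line
```

An attempted drag outside the double-headed arrow is simply clamped to the nearest allowed position, which is one way a GUI could prevent the wrong contour points from being input.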
  • the user uses the mouse to input a correct contour line within the limited range illustrated by the double arrows AW1 and AW2 in Figure 12.
  • the user modifies the initial contour line 20, for example, to fit the shape of the pillar 11.
  • the correct contour line is stored, for example, as information indicating the positions of multiple contour points 19 after each contour point 19 has been moved.
  • Figure 13 is a diagram showing an example of a correct contour line 21. In Figure 13, the correct contour line 21 is displayed, fitting the shape of the pillar 11.
  • the second contour information receiving unit 1024 stores the multiple contour points 19 that form the received correct contour line 21, for example, in a specified memory.
  • the parameter value acquisition unit 1025 executes a parameter optimization algorithm. More specifically, the parameter value acquisition unit 1025 changes the smooth and threshold values from the initial settings, repeats the process of calculating the loss, and ultimately acquires the smooth value and threshold setting values that minimize the loss.
  • the smooth value and threshold value that minimize the loss are the values with which the contour extraction unit 1022 extracts a contour whose displayed difference from the correct contour 21 is smallest.
  • FIG. 14 is a diagram showing an example of a corrected contour 22 (third contour information) obtained based on the correct contour 21.
  • Figure 14 displays the corrected contour 22 that follows the shape of the pillar 11.
  • the procedure for setting a series of parameter values in the processing of the parameter value acquisition unit 1025 differs depending on the optimization algorithm used.
  • the optimization algorithm may be, for example, a gradient method represented by the steepest descent method, a method using random numbers such as simulated annealing, or a statistical method such as Bayesian optimization.
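Any derivative-free optimizer can fill this role. As a minimal, hypothetical sketch (not the embodiment's actual implementation), a plain random search over the smooth and threshold parameters already illustrates the loop of proposing parameter values, evaluating the loss, and keeping the best setting:

```python
import random

def optimize(loss_fn, bounds, n_iter=200, seed=0):
    """Minimal random-search sketch: sample parameter values uniformly
    within bounds and keep the pair with the smallest loss.  This stands
    in for the gradient, annealing, or Bayesian methods mentioned above."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_iter):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        loss = loss_fn(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# toy loss with a known minimum at smooth = 3, threshold = 0.5;
# in the embodiment, loss_fn would run contour extraction and compare
# the result against the correct contour 21
toy = lambda p: (p["smooth"] - 3) ** 2 + (p["threshold"] - 0.5) ** 2
params, loss = optimize(toy, {"smooth": (1, 9), "threshold": (0.0, 1.0)})
print(params, loss)
```

Gradient, annealing, and Bayesian methods differ only in how the next candidate parameter values are proposed; the evaluate-and-keep-the-best loop is the same.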
  • the initial contour 20 calculated by the algorithm in the processing of the contour extraction unit 1022 described above is a point group made up of contour points 19 extracted on a specific line profile 17, as shown in FIG. 7.
  • the loss is evaluated as the sum of squares of the distances between corresponding contour points on the two contours, L2 = Σ_i [(x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2] … (3), and as the maximum distance between corresponding contour points, L1M = max_i √((x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2) … (4)
  • x i (t) and y i (t) are the image coordinates of the i-th contour point at a certain time t during execution of the optimization algorithm
  • x i R and y i R are the image coordinates of the i-th contour point of the correct contour 21.
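Interpreting equation (3) as the sum of squared point distances and equation (4) as the maximum point distance, the two losses can be computed as follows (a sketch; the array names are illustrative):

```python
import numpy as np

def losses(contour_t, contour_ref):
    """contour_t, contour_ref: (N, 2) arrays of (x, y) image coordinates
    of corresponding contour points at time t and on the correct contour."""
    d2 = np.sum((contour_t - contour_ref) ** 2, axis=1)  # per-point squared distances
    l2 = float(np.sum(d2))          # sum of squared distances, eq. (3)
    l1m = float(np.sqrt(d2.max()))  # maximum point distance, eq. (4)
    return l2, l1m

a = np.array([[0.0, 0.0], [3.0, 4.0]])   # contour points at time t
b = np.array([[1.0, 0.0], [0.0, 0.0]])   # correct contour points
print(losses(a, b))  # (26.0, 5.0)
```

L2 penalizes the overall deviation, while L1M bounds the single worst contour point.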
  • FIG. 15 is a diagram showing an example of the display of the display unit 1251E while an optimization algorithm is running.
  • a loss calculation display G15 that displays the loss calculation is displayed.
  • the loss calculation display G15 displays the calculation results of the loss for the parameters.
  • the loss calculation display G15 displays a graph showing the calculated loss values.
  • the graph of the calculated loss values does not have to be displayed.
  • instead of drawing a smooth curve such as the solid line in FIG. 15, the loss values calculated at each time t may be plotted as individual points, such as A' in FIG. 15.
  • the parameter value acquisition unit 1025 calculates the loss from the initial contour 20 (consisting of A and the like shown in the figure) and the correct contour 21 at a certain time t during execution of the optimization algorithm, using L2 shown in the above-mentioned equation (3) and L1M shown in equation (4).
  • the indicator A is one of the contour points 19 forming the initial contour 20.
  • the loss is displayed, for example, as a square indicator A' on a graph showing the loss results in the loss calculation indicator G15.
  • the parameter value acquisition unit 1025 updates the parameter values according to the optimization algorithm, and similarly calculates the loss for the contour line (consisting of B, etc., as shown) detected one step later at time t+1.
  • Point B is one of the contour points 19 that form the contour line detected at time t+1.
  • the loss is displayed, for example, as point B' in the loss calculation display G15.
  • the parameter value acquisition unit 1025 repeats this loss calculation. By repeating it a sufficient number of times, the parameter value acquisition unit 1025 can obtain the parameter values that minimize the loss. For example, in the loss calculation display G15, the parameter values that minimize the loss are displayed as point O'. The contour extracted by the contour extraction unit 1022 using those parameter values becomes the corrected contour 22, which is closer to the correct contour 21.
  • the parameter value acquisition unit 1025 creates, for the initial contour 20 and the correct contour 21, binary arrays in which the pixels constituting the contour in the sample cross-sectional image G10 are set to 1 and all other pixels to 0, and calculates the loss L_R as the negated sum of the element-wise products of the two arrays: L_R = −Σ_{x,y} I_B(x, y; t)·I_B^R(x, y)
  • I B (x, y; t) is an element value corresponding to the (x, y) image coordinates in the binary array created from the initial contour 20 at a certain time t during execution of the optimization algorithm
  • I B R (x, y) is an element value corresponding to the (x, y) image coordinates in the binary array created from the correct contour 21.
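Reading the loss as the negated sum of element-wise products of the two binary arrays, a sketch of L_R is as follows (names are illustrative): the more contour pixels the two arrays share, the lower (better) the loss.

```python
import numpy as np

def overlap_loss(mask_t, mask_ref):
    """Binary arrays with 1 on contour pixels and 0 elsewhere.
    Shared contour pixels decrease the loss, so minimizing L_R
    maximizes the pixel overlap of the two contours."""
    return -float(np.sum(mask_t * mask_ref))

a = np.zeros((4, 4), dtype=int); a[1, 1:3] = 1   # contour at time t
b = np.zeros((4, 4), dtype=int); b[1, 2:4] = 1   # correct contour
print(overlap_loss(a, b))  # -1.0: the two contours share one pixel
```

Unlike the point-distance losses, this overlap loss needs no point-to-point correspondence between the two contours.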
  • the controller 102 can acquire parameter setting values that minimize the difference from the correct contour line 21 assumed and expected by the user; in other words, that extract a corrected contour line 22 that is closest to the correct contour line 21.
  • This technology for acquiring parameter setting values can be applied to any type of observation image and any shape of contour line captured by a charged particle beam device such as SEM 100.
  • the controller 102 can also display the corrected contour line 22 extracted using the acquired parameter values on the display unit 1251E. This allows the user to visually see how close the corrected contour line 22 is to the correct contour line 21 assumed and expected.
  • FIG. 16 is a diagram showing an example of a sample cross-sectional image (observation image) G20.
  • when measuring the distance AW3 from a contour point 193 of one pillar 11 to a contour point 194 of another pillar 11, and when the positions of the two contour points to be measured are approximately known, the contour extraction unit 1022 does not need to extract all contour points; it is sufficient for the contour extraction unit 1022 to extract one contour point 193, 194 for each pillar 11 within a specified small region of the sample cross-sectional image G20.
  • the simplest algorithm for the contour extraction unit 1022 to extract contour points 19 is to apply smoothing filter processing to the pixels of the small region.
  • smoothing filter processing is a filter processing for blurring and smoothing an image.
  • the contour extraction unit 1022 can smooth the image of the small region.
  • the contour extraction unit 1022 detects the pixel with the highest pixel value as the contour point 193 (or contour point 194).
  • the image I_S(x, y) obtained by applying the smoothing filter processing to the image I(x, y) is expressed as I_S(x, y) = (1/K^2) Σ_{|i| ≤ (K−1)/2, |j| ≤ (K−1)/2} I(x + i, y + j). That is, I_S(x, y) is obtained by averaging the pixel values of the K × K pixel region centered on each pixel of the original image I(x, y). K is a positive odd number, a parameter called the kernel size.
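A direct (unoptimized) sketch of this K × K mean filter, followed by taking the brightest smoothed pixel of the small region as the detected contour point, might look like this (names are illustrative; edge pixels are averaged over the part of the kernel that stays inside the image):

```python
import numpy as np

def box_smooth(img, k):
    """K x K mean filter (k a positive odd number, the kernel size)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    r = k // 2
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = win.mean()
    return out

img = np.zeros((5, 5))
img[1:4, 1:4] = 9.0                  # a small bright structure
sm = box_smooth(img, k=3)
# the pixel with the highest smoothed value is taken as the contour point
cy, cx = (int(v) for v in np.unravel_index(np.argmax(sm), sm.shape))
print((cy, cx))  # (2, 2): the center of the bright structure
```

Smoothing first makes the brightest-pixel pick robust against single-pixel noise, which is why the kernel size K is the parameter worth optimizing.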
  • the parameter value acquisition unit 1025 can adjust this parameter value using the same optimization algorithm as in the first embodiment.
  • Figure 17 is a diagram showing an example of contour points detected by the contour point detection algorithm.
  • Figure 17 shows initial contour points 231, 232 (first contour information) for each pillar 11.
  • the user can modify the positions of the initial contour points 231, 232 extracted by the contour point detection algorithm by mouse operation as described in the first embodiment.
  • the range in which the initial contour points 231, 232 can be moved may be limited.
  • the initial contour points 231, 232 are designated as correct contour points 241, 242 (second contour information).
  • the parameter value acquisition unit 1025 takes as the loss the sum of squares of the distances between the initial contour points 231, 232 and the correct contour points 241, 242 over all small regions, and by minimizing it can obtain the optimal parameter value, i.e., the kernel size, for detecting contour points that are closest to the correct contour points 241, 242.
  • the contour extraction unit 1022 extracts the contour
  • the third contour information display unit 1026 displays the corrected contour points that are the extraction results on the display unit 1251E.
  • Figure 18 shows an example of the display of corrected contour points 251, 252 (third contour information).
  • corrected contour points 251, 252 are displayed near correct contour points 241, 242.
  • FIG. 19 is a diagram showing an example of the display of the display unit 1251E while the optimization algorithm is running.
  • a loss calculation display G25 that displays the loss calculation is displayed.
  • the loss calculation display G25 displays the calculation results of the loss for the parameters.
  • the loss calculation display G25 displays a graph showing the calculated loss values.
  • the parameter value that minimizes the loss is displayed as point O' on the graph.
  • the processing order of this optimization algorithm is the same as that explained using FIG. 15 in the first embodiment.
  • the graph of the calculated loss values does not have to be displayed; the loss values may instead be plotted as individual points, as explained using FIG. 15.
  • the controller 102 when the controller 102 measures the distance from a contour point 193 of one pillar 11 to a contour point 194 of another pillar 11, and when the positions of the two contour points to be measured are nearly determined, it can obtain parameter values with the least loss without extracting all contour points. This reduces the image processing load on the controller 102.
  • FIG. 20 is a diagram showing an example of a top-view image (observation image) G30 of the sample 130.
  • the sample 130 is a semiconductor device.
  • FIG. 21 is a diagram showing an example of initial contour points 261 and 262 (first contour information) and correct contour points 271 and 272 (second contour information) extracted from the top-view image G30.
  • FIG. 22 is a diagram showing an example of corrected contour points 281 and 282 (third contour information) that minimize the display difference from the correct contour points.
  • the contour extraction unit 1022 specifies the positions of two points to be measured, detects the pixel with the highest brightness after smoothing filter processing, and extracts initial contour points 261 and 262.
  • the first contour information display unit 1023 displays the initial contour points 261 and 262 in the top-view image G30, as shown in FIG. 21.
  • contour extraction unit 1022 extracts corrected contour points 281, 282.
  • third contour information display unit 1026 displays corrected contour points 281, 282 in top-view image G30. This allows the user to visually recognize corrected contour points 281, 282 that are closest to correct contour points 271, 272.
  • <Particle sample image> FIG. 23 is a diagram showing an example of a particle sample image (observation image) G40 obtained when observing and analyzing a particulate sample using the SEM 100.
  • a region extraction algorithm can be used to extract the boundaries, i.e., contours, of individual particles in particle sample image G40 of such sample 130.
  • in FIG. 24, multiple circular particles are displayed in pixel area PA1 of the particle sample image G40.
  • the outline extraction unit 1022 performs particle region extraction processing on the particle sample image to be analyzed.
  • the particle region extraction processing for a given image involves the following image processing:
    Step ST1: Rough separation of particle regions (white) and background regions (black) by binarization.
    Step ST2: Noise removal by opening/closing.
    Step ST3: Extraction of "certain background regions" by expansion.
    Step ST4: Extraction of "certain particle regions" by distance transformation and binarization.
  • the contour extraction unit 1022 performs image processing of particle images using steps ST1 to ST4. Furthermore, step ST4 is followed by extraction processing of "areas where it is unclear whether they are particles or background," labeling processing of each area, and watershed processing; however, these are image processes that do not require parameter value setting, and therefore will not be discussed in detail in this embodiment.
  • Figure 24 is a diagram showing an example of a particle sample image G40 after the separation process of step ST1.
  • Figure 25 is a diagram showing an example of a particle sample image G40 after the noise removal process of step ST2.
  • Figure 26 is a diagram showing an example of a particle sample image G40 after the extraction process of step ST3.
  • Figure 27 is a diagram showing an example of a particle sample image G40 after the extraction process of step ST4.
  • the parameter setting values that the contour extraction unit 1022 should set when executing the image processing of the particle sample image G40 are as follows:
    In the separation process of step ST1: the threshold of the binarization process (parameter P1).
    In the noise removal process of step ST2: the kernel size and number of repetitions of the opening/closing process (parameter P2).
    In the extraction process of step ST3: the kernel size and number of repetitions of the expansion process (parameter P3).
    In the extraction process of step ST4: the threshold of the distance transformation/binarization process (parameter P4).
    The relationship between each image process and its setting value is explained below.
  • the particle sample image G40 contains a pixel area PA2 that appears to be a particle region, as well as multiple black spot noises N1 and white spot noises N2.
  • the black spot noise N1 is black spot-shaped noise in the white region.
  • the white spot noise N2 is white spot-shaped noise in the black region.
  • the particle sample image G40 obtained by the binarization process shown in Fig. 24 contains sesame-like noise N.
  • the opening process/closing process is performed to remove this noise N.
  • the opening process/closing process is a combination of expansion/contraction processes, so it will be explained first.
  • Both the expansion process and the contraction process are processes performed on a binarized image.
  • the expansion process performed on the binarized image I_B(x, y) with a kernel size K_D and one repetition yields an image I_(D,1)(x, y): if any pixel in the K_D × K_D region centered on (x, y) has the value 1, then I_(D,1)(x, y) = 1; otherwise it is 0.
  • K_D is a positive odd number.
  • an image I_(D,n)(x, y) obtained by performing the expansion processing on the binary image I_B(x, y) with a kernel size K_D repeated n times is expressed as I_(D,n) = (D ∘ D ∘ … ∘ D)(I_B), with D applied n times, where ∘ represents the composition of the processes: (f ∘ g)(x) = f(g(x))
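The composed expansion process can be sketched as follows (a pixel becomes 1 when any pixel in its K_D × K_D neighbourhood is 1, applied n times; the implementation details here are illustrative, not the embodiment's code):

```python
import numpy as np

def dilate(mask, k):
    """One expansion pass on a binary image: a pixel becomes 1 if any
    pixel in the k x k neighbourhood (k a positive odd number) is 1."""
    h, w = mask.shape
    r = k // 2
    out = np.zeros_like(mask)
    for y in range(h):
        for x in range(w):
            win = mask[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = 1 if win.any() else 0
    return out

def dilate_n(mask, k, n):
    """n-fold composition D∘D∘…∘D of the one-pass expansion."""
    for _ in range(n):
        mask = dilate(mask, k)
    return mask

m = np.zeros((7, 7), dtype=int)
m[3, 3] = 1
print(int(dilate_n(m, k=3, n=2).sum()))  # 25: the single pixel grows to a 5x5 block
```

Each pass grows white regions by the kernel radius, so the repetition count n controls how far the "certain background region" boundary is pushed out.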
  • the erosion process is the inverse operation of the expansion process.
  • the image I_(E,1)(x, y) obtained by performing the erosion process on the binary image I_B(x, y) with a kernel size K_E and one repetition is given as follows: if any pixel in the K_E × K_E region centered on (x, y) has the value 0, then I_(E,1)(x, y) = 0; otherwise it is 1 (this is the contraction process).
  • K E is a positive odd number, just like K D.
  • Dilation processing can be used to remove black spot noise N1 from the white area, but performing dilation processing also expands the white area itself. Therefore, by performing erosion processing after dilation processing, the contour extraction unit 1022 can remove black spot noise N1 from the white area without changing the white area.
  • This process of performing dilation followed by erosion is called opening processing.
  • the image I_(O,1)(x, y) obtained by performing the opening process on the binary image I_B(x, y) with a kernel size K_O and one repetition is calculated as I_(O,1) = (E ∘ D)(I_B), that is, the expansion process followed by the contraction process, each with kernel size K_O, where K_O is a positive odd number.
  • the closing process is the reverse process of the opening process.
  • the contour extraction unit 1022 can remove white spot noise from black areas without changing the black areas. That is, the image I_(C,1)(x, y) obtained by performing the closing process on the binarized image I_B(x, y) with a kernel size K_C and one repetition is expressed as I_(C,1) = (D ∘ E)(I_B), that is, the contraction process followed by the expansion process, each with kernel size K_C, where K_C is a positive odd number.
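Built from the expansion and contraction passes, these processes can be sketched as follows. Note that the naming here follows this document's description (opening = dilation followed by erosion); many references, including OpenCV, use the opposite convention, so the function names below are labeled accordingly and are illustrative only.

```python
import numpy as np

def _morph(mask, k, op):
    """Apply a k x k neighbourhood test to each pixel of a binary image."""
    h, w = mask.shape
    r = k // 2
    out = np.zeros_like(mask)
    for y in range(h):
        for x in range(w):
            win = mask[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = 1 if op(win) else 0
    return out

def dilate(mask, k):   # expansion: 1 if any neighbour is 1
    return _morph(mask, k, lambda w: w.any())

def erode(mask, k):    # contraction: 0 if any neighbour is 0
    return _morph(mask, k, lambda w: w.all())

def open_doc(mask, k):
    """Opening in this document's convention: dilation then erosion,
    removing black spot noise inside a white region."""
    return erode(dilate(mask, k), k)

m = np.ones((7, 7), dtype=int)
m[3, 3] = 0                          # black spot noise N1 in a white area
print(int(open_doc(m, k=3).sum()))   # 49: the spot is removed, the area unchanged
```

Swapping the order of the two passes gives the closing process of the document, which removes white spot noise N2 from black areas in the same way.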
  • the contour extraction unit 1022 may repeat both the opening process and the closing process.
  • noise is removed by performing the opening process (closing process) on the particle sample image shown in Figure 24 with a kernel size K_O (K_C) and a repetition count N_O (N_C), and a dilation process is then performed with a kernel size K_D and a repetition count N_D.
  • a reliable particle region is considered to be a pixel region located near the center of the "likely particle region (white)" PA3 shown in Fig. 25, that is, a pixel region that is sufficiently far from the "reliable background region” (black).
  • an image transformation process called distance transformation is performed.
  • the white area PA4 that appears to be a particle area is displayed with stepped brightness as shown in FIG. 26, with the brighter the area, the farther it is from the "certain background area” (black). The brighter the area, the more likely it is to be a "certain particle area”.
  • the distance map image shown in FIG. 26 is binarized using a threshold specified by the user, to obtain the binarized image shown in FIG. 27.
  • the threshold binarization process here is similar to the previously described equation (7), but the specific value of the threshold parameter I_T is different, and the user specifies an initial value suitable for the image of the particle region. This specification is performed, for example, using the input device 124.
  • the white particle region PA5 in the particle sample image G40 shown in FIG. 27 is a region that is determined to be sufficiently far from the "certain background region,” and is therefore considered to be the "certain particle region.”
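Steps ST3 to ST4 can be illustrated with a simple two-pass (chamfer-style) city-block distance transform followed by threshold binarization. This is a sketch under the stated simplifications, not the embodiment's implementation:

```python
import numpy as np

def distance_transform(mask):
    """Two-pass city-block distance from each white (1) pixel to the
    nearest black (0) pixel of a binary image."""
    h, w = mask.shape
    inf = h + w
    d = np.where(mask == 1, inf, 0).astype(float)
    for y in range(h):                      # forward pass: top/left neighbours
        for x in range(w):
            if d[y, x]:
                if y > 0: d[y, x] = min(d[y, x], d[y - 1, x] + 1)
                if x > 0: d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass: bottom/right neighbours
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1: d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

mask = np.zeros((7, 7), dtype=int)
mask[1:6, 1:6] = 1                   # a "likely particle region" (white)
dist = distance_transform(mask)
sure = (dist > 1.5).astype(int)      # binarize with a threshold I_T (here 1.5)
print(int(sure.sum()))  # 9: only the 3x3 core remains as the "certain particle region"
```

Pixels far from the "certain background region" get large distance values, so raising or lowering the threshold I_T (parameter P4) directly shrinks or grows the extracted "certain particle region".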
  • FIG. 28 is a diagram showing an example of a particle sample image G40 after image processing that does not require the setting of parameter values.
  • the particle sample image G40 shown in Figure 28 displays multiple particle regions PA6.
  • the contour extraction unit 1022 uses an optimization algorithm for each of the parameters P1 to P4 to determine their initial values so as to minimize the difference between the diameter value input by the user and the average diameter of the particle regions determined by the processing of steps ST1 to ST4 and the subsequent processing.
  • using the initial values of parameters P1 to P4 determined in this way, or the initial settings of parameters P1 to P4 held as constants by the software, the contour extraction unit 1022 performs steps ST1 to ST4 and the subsequent image processing.
  • the first contour information display unit 1023 displays the particle area and initial contour line obtained based on the results of the image processing on the display unit 1251E of the GUI 1251.
  • the user modifies the initial contour displayed on the display unit 1251E using the mouse operation described in the first embodiment to input the correct contour.
  • the second contour information receiving unit 1024 receives the input correct contour
  • the second contour information receiving unit 1024 stores the correct contour in the memory of the overall control unit 120, for example.
  • Figure 29 is a diagram showing an example of a particle sample image G40.
  • Figure 29 shows a top view image of one particle.
  • multiple initial contour points 29 and multiple correct contour points 30 are displayed for one particle.
  • the multiple initial contour points 29 are displayed by the first contour information display unit 1023.
  • the multiple correct contour points 30 are contour points that the user assumes or expects and inputs by operating a mouse or the like.
  • the positions of the multiple correct contour points 30 are each accepted by the second contour information acceptance unit 1024.
  • the parameter value acquisition unit 1025 can obtain the values of the parameters P1 to P4 so as to minimize the difference between the initial contour point 29 and the correct contour point 30.
  • Figure 30 is a diagram showing an example of a particle sample image G40.
  • multiple corrected contour points 31 are displayed on the particle sample image G40 for one particle.
  • the corrected contour points 31 are contour points that minimize the difference between the initial contour points 29 and the correct contour points 30.
  • Figure 31 is a diagram showing an example of the display on the display unit 1251E while the optimization algorithm is running.
  • a loss calculation display G45 that displays the loss calculation is displayed.
  • the loss calculation display G45 displays the calculation results of the loss for the parameters.
  • the loss calculation display G45 displays a graph showing the calculated loss values.
  • the parameter value that minimizes the loss is displayed as point O' on the graph.
  • the processing order of this optimization algorithm is the same as that explained using Figure 15 in the first embodiment.
  • the graph of the calculated loss values does not have to be displayed; the loss values may instead be plotted as individual points, as explained using Figure 15.
  • the process of setting the initial values of the parameters P1 to P4 described above can also be applied to the contour extraction process of the contour extraction unit 1022 for the sample cross-sectional image G10 as shown in the first embodiment.
  • the contour extraction unit 1022 can obtain the pitch calculated from the initial contour extracted using that expected value.
  • the contour extraction unit 1022 can then obtain initial settings for the smooth and threshold parameters used in the contour extraction process so that the displayed difference between the pitch of the contour points extracted using the initial settings and the pitch obtained from the expected values is minimized. This allows the contour extraction unit 1022 to extract the initial contour so that it more accurately follows the structure of the pillars 11.
  • the processes by which the second contour information receiving unit 1024 receives the correct contour and the parameter value acquisition unit 1025 optimizes the parameter values P1 to P4, etc., are the same as those described in the first embodiment.
  • SEM: scanning electron microscope (a charged particle beam device)

Abstract

The present invention provides a technology for acquiring parameter values according to which a contour line close to a contour line that a user anticipates and expects is extracted. A controller 102 comprises: an image display unit 1021 that displays an observation image; a contour extraction unit 1022 that extracts first contour information for dividing the structure of a sample from the observation image on the basis of a preset first parameter value; a first contour information display unit 1023 that displays the first contour information extracted by using the first parameter value on the displayed observation image; a second contour information accepting unit 1024 that accepts, on the observation image, input of second contour information indicating the contour of the structure included in the observation image; and a parameter value acquisition unit 1025 that acquires a second parameter value, different from the first parameter value, for extracting third contour information for which the difference from the second contour information in terms of display is a minimum.

Description

Image processing system, image processing method, and display device

This disclosure relates to technologies such as image processing systems, image processing methods, and display devices.

Many of the observation images captured using charged particle beam devices are used for shape and structural analysis of samples. In order to perform these numerical processes, image processing to extract the contours of structures is almost essential. For this reason, a method for accurately extracting contours is important.

In order to accommodate a wide variety of images, algorithms for extracting contours within observed images can control their operation using the numerical values of variables called parameters. For example, one typical parameter is the threshold. Contour extraction algorithms often include a process for selecting pixels that will become part of the contour from among pixels in the observed image that show large changes in brightness. The threshold is used in the process of selecting pixels. Even when extracting contours from the same image, changing the threshold value will change the extracted contour. Conversely, to accurately extract contours within an image, the parameters included in the algorithm, including the threshold, must be set to optimal values.

If the optimization of the contour extraction algorithm, or of the parameter values associated with this algorithm, is performed automatically by computer calculation, improved accuracy and efficiency can be expected compared to manual optimization. To achieve this, in many cases, numerical data that serves as the basis for calculating the optimal parameter values is accepted as input separate from the observed image, and an algorithm that extracts the desired contour line is selected, or parameter values are calculated. Patent Documents 1 and 2 can be cited as examples of prior art.

Japanese Patent Application Laid-Open No. 2008-116207
Japanese Patent Application Laid-Open No. 2012-21832

In research and development and quality assurance settings across a variety of industrial fields, transmission electron microscopes (TEM), scanning transmission electron microscopes (STEM), scanning electron microscopes (SEM), and other instruments are often used to capture numerous observation images, and shape analysis, such as length and angle measurements, is performed on all of these images. These shape analyses require the extraction of the contours of structures within the observation images. For this reason, users must configure the contour extraction algorithm to suit the observation images, optimizing the parameter values in particular, before conducting shape analysis.

However, for a given observation image, there are few cases where there is a clear relationship between the parameter values and the extracted contours, and no systematic method for adjusting parameter values exists. As a result, users must search for the optimal parameter values for the contour extraction algorithm by trial and error for each observation image. As mentioned above, when there are a large number of observation images, the amount of work increases in proportion to the number of observation images, and this increased workload represents a non-negligible cost for manufacturers. Furthermore, for semiconductor device manufacturers, for example, as the structures of semiconductor devices become finer and more complex, the shapes of the contours to be extracted also become more complex, making it more difficult to adjust to the optimal parameter values.

In order to solve this problem, technologies related to the automatic optimization of contour extraction algorithms have been proposed, such as those in Patent Documents 1 and 2. It should be noted here that the method described in Patent Document 1 is only applicable when the manufacturing process of the sample is known. Therefore, it is difficult to apply it to naturally occurring samples, such as biological samples, which do not have a manufacturing process in the first place. Furthermore, the method described in Patent Document 2 automatically determines parameter values that minimize the influence of noise in the observed image on contour extraction, or the roughness of the contour, but does not guarantee the extraction of contours that the user envisions, including rough contours. There is a need for a method that can automatically obtain algorithm parameter values that can extract contours that meet the user's expectations and assumptions.

The purpose of this disclosure is to provide technology that can obtain parameter values that extract contours that are closer to the contours that the user envisions and expects.

A representative embodiment of the present disclosure has the following configuration. An image processing system in one embodiment processes an observation image of a sample obtained by a charged particle beam device. The image processing system includes an image display unit that displays the observation image, a contour extraction unit that extracts first contour information related to the structural boundaries of the sample from the observation image based on a first set value of a preset parameter, a first contour information display unit that displays the first contour information extracted by the contour extraction unit on the observation image, a second contour information receiving unit that receives input of second contour information related to boundaries visible in the observation image, and a parameter value acquisition unit that acquires a second set value different from the first set value so as to minimize the difference between the first contour information and the second contour information in the observation image.

 一実施形態の荷電粒子ビーム装置による試料の観察画像を処理する画像処理システムが実行する画像処理方法であって、前記画像処理システムが、前記観察画像を表示し、予め設定されたパラメータの第1設定値に基づいて、前記観察画像から前記試料の構造上の境界に関する第1輪郭情報を抽出し、抽出された前記第1輪郭情報を前記観察画像上に表示し、前記観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付け、前記観察画像における前記第1輪郭情報と前記第2輪郭情報との差異が最小化するように前記第1設定値と異なる第2設定値を取得する、画像処理方法。 An image processing method executed by an image processing system that processes an observation image of a sample obtained by a charged particle beam device according to one embodiment, wherein the image processing system displays the observation image, extracts first contour information relating to the structural boundaries of the sample from the observation image based on a first setting value of a preset parameter, displays the extracted first contour information on the observation image, accepts input of second contour information relating to boundaries visible in the observation image, and obtains a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observation image.

 一実施の形態の表示装置は、荷電粒子ビーム装置による試料の観察画像を処理する画像処理システムに用いられる。表示装置は、予め設定されたパラメータの第1設定値に基づいて、前記観察画像から抽出された前記試料の構造上の境界に関する第1輪郭情報を前記観察画像上に表示し、前記観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付け、前記観察画像における前記第1輪郭情報と前記第2輪郭情報との差異が最小化するように前記第1設定値と異なる第2設定値が取得され、取得された前記第2設定値に基づいて抽出された第3輪郭情報を前記観察画像上に表示する、表示装置。 In one embodiment, the display device is used in an image processing system that processes observation images of a sample obtained by a charged particle beam device. The display device displays first contour information relating to the structural boundaries of the sample extracted from the observation image based on a first setting value of a preset parameter on the observation image, accepts input of second contour information relating to boundaries visible in the observation image, acquires a second setting value different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observation image, and displays third contour information extracted based on the acquired second setting value on the observation image.

 本開示のうち代表的な実施の形態によれば、ユーザが想定、期待する輪郭線に近づけた輪郭線が抽出されるパラメータ値を取得する技術を提供できる。上記した以外の課題、構成および効果等については、発明を実施するための形態において示される。 A representative embodiment of the present disclosure provides technology for acquiring parameter values that extract a contour line that is closer to the contour line that the user envisions and expects. Issues, configurations, effects, etc. other than those described above are described in the detailed description of the invention.

第1の実施の形態の荷電粒子ビーム装置の構成の一例を示す図である。FIG. 1 is a diagram illustrating an example of the configuration of a charged particle beam apparatus according to a first embodiment.
第1の実施の形態のコントローラの機能ブロックの一例を示す図である。FIG. 2 illustrates an example of functional blocks of a controller according to the first embodiment.
第1の実施の形態の機能ブロックの処理の流れを示すフローチャートである。FIG. 3 is a flowchart showing a flow of processing of the functional blocks according to the first embodiment.
第1の実施の形態の試料断面画像の一例を示す図である。FIG. 4 is a diagram illustrating an example of a cross-sectional image of a sample according to the first embodiment.
第1の実施の形態のユーザインタフェースの一例を示す図である。FIG. 5 illustrates an example of a user interface according to the first embodiment.
第1の実施の形態の基準線の一例を示す図である。FIG. 6 illustrates an example of a reference line according to the first embodiment.
第1の実施の形態のラインプロファイル、および輪郭線の一例を示す図である。FIG. 7 is a diagram illustrating an example of a line profile and a contour line according to the first embodiment.
第1の実施の形態の水平方向のラインプロファイルの取得例を説明するための図である。FIG. 8 is a diagram for explaining an example of acquiring a horizontal line profile according to the first embodiment.
第1の実施の形態の水平方向のラインプロファイルの一例を示す図である。FIG. 9 illustrates an example of a line profile in the horizontal direction according to the first embodiment.
第1の実施の形態の動径方向のラインプロファイルの一例を示す図である。FIG. 10 is a diagram illustrating an example of a line profile in the radial direction according to the first embodiment.
第1の実施の形態の2つのパラメータの初期設定値を用いて算出された初期輪郭線の一例を示す図である。FIG. 11 is a diagram illustrating an example of an initial contour calculated using initial setting values of two parameters according to the first embodiment.
第1の実施の形態の輪郭点の移動が制限される範囲の一例を示す図である。FIG. 12 illustrates an example of a range in which movement of a contour point is restricted according to the first embodiment.
第1の実施の形態の正解輪郭線の一例を示す図である。FIG. 13 is a diagram illustrating an example of a correct contour line according to the first embodiment.
第1の実施の形態の正解輪郭線に基づいて修正された修正輪郭線の一例を示す図である。FIG. 14 is a diagram illustrating an example of a corrected contour line corrected based on a correct contour line according to the first embodiment.
第1の実施の形態の最適化アルゴリズム実行中の表示の一例を示す図である。FIG. 15 illustrates an example of a display during execution of an optimization algorithm according to the first embodiment.
第2の実施の形態の試料断面画像の一例を示す図である。FIG. 16 is a diagram illustrating an example of a cross-sectional image of a sample according to the second embodiment.
第2の実施の形態の輪郭点検出アルゴリズムによって検出された輪郭点の一例を示す図である。FIG. 17 is a diagram illustrating an example of contour points detected by a contour point detection algorithm according to the second embodiment.
第2の実施の形態の修正輪郭点の表示の一例を示す図である。FIG. 18 is a diagram illustrating an example of display of corrected contour points according to the second embodiment.
第2の実施の形態の最適化アルゴリズム実行中の表示部の表示の一例を示す図である。FIG. 19 illustrates an example of a display on a display unit during execution of an optimization algorithm according to the second embodiment.
第2の実施の形態のトップビュー画像の一例を示す図である。FIG. 20 illustrates an example of a top-view image according to the second embodiment.
第2の実施の形態のトップビュー画像から抽出した初期輪郭点、および正解輪郭点の一例を示す図である。FIG. 21 is a diagram showing an example of initial contour points and correct contour points extracted from a top-view image according to the second embodiment.
第2の実施の形態の正解輪郭点に最も近い修正輪郭点の一例を示す図である。FIG. 22 is a diagram illustrating an example of a corrected contour point that is closest to a correct contour point according to the second embodiment.
第3の実施の形態の粒子状試料の観察・解析を行う場合に得られる粒子試料画像の一例を示す図である。FIG. 23 is a diagram showing an example of a particle sample image obtained when observing and analyzing a particulate sample according to the third embodiment.
第3の実施の形態の分離処理後の粒子試料画像の一例を示す図である。FIG. 24 is a diagram showing an example of a particle sample image after separation processing according to the third embodiment.
第3の実施の形態のノイズ除去処理後の粒子試料画像の一例を示す図である。FIG. 25 is a diagram showing an example of a particle sample image after noise removal processing according to the third embodiment.
第3の実施の形態の「確実な背景領域」の抽出処理後の粒子試料画像の一例を示す図である。FIG. 26 is a diagram showing an example of a particle sample image after extraction processing of a "certain background region" according to the third embodiment.
第3の実施の形態の「確実な粒子領域」の抽出処理後の粒子試料画像の一例を示す図である。FIG. 27 is a diagram showing an example of a particle sample image after extraction processing of a "certain particle region" according to the third embodiment.
第3の実施の形態のパラメータ値の設定が不要な画像処理後の粒子試料画像の一例を示す図である。FIG. 28 is a diagram showing an example of a particle sample image after image processing in which parameter value setting is not required according to the third embodiment.
第3の実施の形態の粒子試料画像の一例を示す図である。FIG. 29 is a diagram showing an example of a particle sample image according to the third embodiment.
第3の実施の形態の粒子試料画像の一例を示す図である。FIG. 30 is a diagram showing an example of a particle sample image according to the third embodiment.
第3の実施の形態の最適化アルゴリズム実行中の表示部の表示の一例を示す図である。FIG. 31 illustrates an example of a display on a display unit during execution of an optimization algorithm according to the third embodiment.

 以下、図面を参照しながら本開示の実施の形態を詳細に説明する。図面において、同一部には原則として同一符号を付し、繰り返しの説明を省略する。図面において、構成要素の表現は、発明の理解を容易にするために、実際の位置、大きさ、形状、範囲等を表していない場合がある。 Embodiments of the present disclosure will be described in detail below with reference to the drawings. In the drawings, identical parts are generally designated by the same reference numerals, and repeated explanations will be omitted. In the drawings, the depiction of components may not represent their actual position, size, shape, range, etc., in order to facilitate understanding of the invention.

 説明上、プログラムによる処理について説明する場合に、プログラムや機能や処理部等を主体として説明する場合があるが、それらについてのハードウェアとしての主体は、プロセッサ、あるいはそのプロセッサ等で構成されるコントローラ、装置、コンピュータシステム、システム等である。コンピュータシステムは、プロセッサによって、適宜にメモリや通信インタフェース等の資源を用いながら、メモリ上に読み出されたプログラムに従った処理を実行する。これにより、所定の機能や処理部等が実現される。プロセッサは、例えばCPU/MPUやGPU等の半導体デバイス等で構成される。処理は、ソフトウェアプログラム処理に限らず、専用回路でも実装可能である。専用回路は、FPGA、ASIC、CPLD等が適用可能である。 For the purpose of explanation, when describing processing by a program, the program, function, processing unit, etc. may be described as the main focus, but the main hardware component of these is the processor, or a controller, device, computer system, system, etc. that is made up of that processor, etc. A computer system uses the processor to execute processing in accordance with a program read into memory, appropriately using resources such as memory and communication interfaces. This realizes specified functions, processing units, etc. A processor is made up of semiconductor devices such as a CPU/MPU or GPU, for example. Processing is not limited to software program processing, and can also be implemented using dedicated circuits. Dedicated circuits such as FPGAs, ASICs, and CPLDs can be used.

 プログラムは、対象コンピュータシステムに予めデータとしてインストールされていてもよいし、プログラムソースから対象コンピュータシステムにデータとして配布されてもよい。プログラムソースは、通信網上のプログラム配布サーバでもよいし、非一過性のコンピュータ読み取り可能な記憶媒体、例えばメモリカードやディスクでもよい。プログラムは、複数のモジュールから構成されてもよい。コンピュータシステムは、複数台の装置によって構成されてもよい。コンピュータシステムは、クライアント・サーバシステム、クラウドコンピューティングシステム、IoTシステム等で構成されてもよい。各種のデータや情報は、例えばテーブルやリスト等の構造で構成されるが、これに限定されない。識別情報、識別子、ID、名前、番号等の表現は互いに置換可能である。 The program may be pre-installed as data on the target computer system, or may be distributed as data to the target computer system from a program source. The program source may be a program distribution server on a communications network, or a non-transitory computer-readable storage medium such as a memory card or disk. The program may be composed of multiple modules. The computer system may be composed of multiple devices. The computer system may be composed of a client-server system, a cloud computing system, an IoT system, etc. Various data and information may be composed of structures such as tables and lists, for example, but are not limited to these. Expressions such as identification information, identifiers, IDs, names, and numbers are interchangeable.

 以下では、本実施の形態の技術として、例えば、半導体デバイスである試料の断面画像に対する輪郭線抽出における本技術の適用ケースを述べる(第1の実施の形態)。その後、応用的な適用例として、輪郭線の微小要素である輪郭点に対しても本技術が同様に適用できることを示す(第2の実施の形態)。最後に、輪郭線が観察画像内の領域の境界線になっている場合への本技術の適用例を示す(第3の実施の形態)。 Below, as an example of the technology of this embodiment, we will describe an application case of this technology in extracting contour lines from a cross-sectional image of a sample that is a semiconductor device (first embodiment). Then, as an applied example, we will show that this technology can also be applied to contour points, which are minute elements of a contour line (second embodiment). Finally, we will show an example of the application of this technology when a contour line forms the boundary line of an area within an observed image (third embodiment).

 (第1の実施の形態)
 荷電粒子ビーム装置による試料、例えば、半導体デバイスの観察としては、走査型電子顕微鏡、例えば、測長用走査型電子顕微鏡(CD-SEM)を用いてウェハ表面を上から見る、いわゆるトップビュー観察・計測が有名である。一方、半導体デバイスは立体構造物であり、ウェハ面を横から見る観察、すなわち断面観察・計測も、デバイス開発や品質保証の工程において重要な位置を占める。このため、実施の形態として、半導体デバイスの断面を観察・計測するための画像処理の技術を主として取り上げている。
(First embodiment)
A well-known method of observing samples, such as semiconductor devices, using a charged particle beam device is so-called top-view observation and measurement, in which the wafer surface is viewed from above using a scanning electron microscope, such as a critical dimension scanning electron microscope (CD-SEM). However, semiconductor devices are three-dimensional structures, and so observation of the wafer surface from the side, i.e., cross-sectional observation and measurement, also plays an important role in the device development and quality assurance processes. For this reason, the embodiments mainly focus on image processing techniques for observing and measuring the cross sections of semiconductor devices.

 <走査型電子顕微鏡の構成>
 図1は、荷電粒子ビーム装置(「SEM」と称する。)100の構成の一例を示す図である。SEM100は、大別して、SEMの本体101と、本体101に対し接続されるコントローラ102とを有する。本体101は、更に、電子光学カラム(以下、「カラム」と称する。)とカラム下部に設けられた試料室とにより構成される。コントローラ102は、本体101による撮像などを制御するシステムである。コントローラ102は、全体制御部120、信号処理部121、記憶部122、通信インタフェース123等を備えている。また、コントローラ102には、入力デバイス124や出力デバイス125が外部接続されている。
<Configuration of scanning electron microscope>
FIG. 1 is a diagram showing an example of the configuration of a charged particle beam device (hereinafter referred to as "SEM") 100. The SEM 100 roughly comprises an SEM main body 101 and a controller 102 connected to the main body 101. The main body 101 further comprises an electron optical column (hereinafter referred to as "column") and a sample chamber provided below the column. The controller 102 is a system that controls imaging by the main body 101, etc. The controller 102 comprises an overall control unit 120, a signal processing unit 121, a storage unit 122, a communication interface 123, etc. An input device 124 and an output device 125 are externally connected to the controller 102.

 本体101のカラムは、構成要素として、電子銃111、集束レンズ112、偏向レンズ113、および対物レンズ114等を備える。電子銃111は、荷電粒子ビームである電子ビームb1を出射する。集束レンズ112は、電子ビームb1を集束する。偏向レンズ113は、電子ビームb1の軌道を偏向させる。対物レンズ114は、電子ビームb1の集束する高さを制御する。試料室は、ウェハやクーポン(ウェハの割断片)等の試料が格納される部屋である。試料室は、ステージ115、および検出器116を備える。ステージ115上には、試料130が配置される。 The column of the main body 101 comprises, as its components, an electron gun 111, a focusing lens 112, a deflection lens 113, and an objective lens 114. The electron gun 111 emits an electron beam b1, which is a beam of charged particles. The focusing lens 112 focuses the electron beam b1. The deflection lens 113 deflects the trajectory of the electron beam b1. The objective lens 114 controls the focusing height of the electron beam b1. The sample chamber is a room where samples such as wafers and coupons (broken pieces of wafers) are stored. The sample chamber comprises a stage 115 and a detector 116. A sample 130 is placed on the stage 115.

 ステージ115は、対象の試料130である半導体デバイスを載置する試料台である。ステージ115は、X方向やY方向のみならずZ方向への移動機能やXY、YX軸或いはZ軸に対する回転機能も備えている。このため、ステージ115は、撮像した視野を正対画像に対する水平方向および垂直方向、或いは視野内での回転方向に移動可能である。これにより、撮像の視野が設定できる。 The stage 115 is a sample stage on which the target sample 130, a semiconductor device, is placed. The stage 115 has the ability to move not only in the X and Y directions but also in the Z direction, and to rotate about the XY, YX, or Z axes. As a result, the stage 115 can move the captured field of view in the horizontal and vertical directions relative to the front-on image, or in a rotational direction within the field of view. This allows the field of view for imaging to be set.

 検出器116は、電子ビームb1が照射された試料130から発生した二次電子や後方散乱電子等の粒子b2を電気信号として検出する。検出器116は、電気信号である検出信号を出力する。 The detector 116 detects particles b2, such as secondary electrons and backscattered electrons, generated from the sample 130 irradiated with the electron beam b1 as an electrical signal. The detector 116 outputs a detection signal, which is an electrical signal.

 全体制御部120は、コントローラ102、および本体101の動作を制御する。全体制御部120は、各部に駆動制御などの指示を与える。全体制御部120等の各部は、コンピュータまたは専用回路などで実装できる。信号処理部121は、検出器116からの検出信号を入力し、アナログ/デジタル変換等の処理を行って画像信号を生成し、記憶部122に画像等のデータとして保存する。記憶部122は、不揮発性記憶装置等で実装できる。全体制御部120は、画像に関する撮影情報等も記憶部122に関連付けて保存する。通信インタフェース123は、通信網(図示省略。)や外部コンピュータ(図示省略。)に対する通信インタフェースが実装された装置である。 The overall control unit 120 controls the operation of the controller 102 and the main body 101. The overall control unit 120 issues instructions such as drive control to each unit. Each unit, including the overall control unit 120, can be implemented as a computer or dedicated circuit. The signal processing unit 121 receives the detection signal from the detector 116, performs processing such as analog/digital conversion to generate an image signal, and stores it as image data in the memory unit 122. The memory unit 122 can be implemented as a non-volatile storage device, etc. The overall control unit 120 also associates and stores shooting information related to the image in the memory unit 122. The communication interface 123 is a device that has a communication interface for a communication network (not shown) or an external computer (not shown).

 全体制御部120は、例えば、入力デバイス124からの要求等に応じて、記憶部122に記憶されている画像や撮影情報等のデータを、出力デバイス125に出力する。全体制御部120は、通信インタフェース123を介して、外部コンピュータからの要求を受信し、受信した要求に応じて、記憶部122に記憶されている画像や撮影情報等のデータを、外部コンピュータに送信してもよい。 The overall control unit 120 outputs data such as images and shooting information stored in the storage unit 122 to the output device 125 in response to a request from the input device 124, for example. The overall control unit 120 may receive requests from an external computer via the communication interface 123, and in response to the received request, transmit data such as images and shooting information stored in the storage unit 122 to the external computer.

 <機能ブロック>
 図2は、コントローラ102の機能ブロックの一例を示す図である。図2に示すように、コントローラ102は、画像表示部1021、輪郭抽出部1022、第1輪郭情報表示部1023、第2輪郭情報受付部1024、パラメータ値取得部1025、および第3輪郭情報表示部1026を有する。全体制御部120が、例えば、記憶部122に記憶されたプログラムを読み出して、読み出したプログラムを実行することにより、コントローラ102は、画像表示部1021、輪郭抽出部1022、第1輪郭情報表示部1023、第2輪郭情報受付部1024、パラメータ値取得部1025、および第3輪郭情報表示部1026の機能を実現する。これらの機能を実現することにより、コントローラ102は、SEM100による試料130の観察画像における輪郭線を画像処理する画像処理システムとして動作する。本実施の形態では、コントローラ102は、SEM100と接続される場合で説明するが、これに限るものではない。例えば、コントローラ102がSEMとは独立した構成になっていてもよい。この場合、コントローラ102は、外部のSEMなどで取得した画像を読み込む画像読み込み機能を有し、この画像読み込み機能により読み込んだ画像に対して、図2に示す機能を実現するように構成してもよい。
<Function block>
FIG. 2 is a diagram showing an example of functional blocks of the controller 102. As shown in FIG. 2, the controller 102 includes an image display unit 1021, a contour extraction unit 1022, a first contour information display unit 1023, a second contour information receiving unit 1024, a parameter value acquisition unit 1025, and a third contour information display unit 1026. The overall control unit 120 reads a program stored in the storage unit 122, for example, and executes the read program, thereby causing the controller 102 to realize the functions of the image display unit 1021, the contour extraction unit 1022, the first contour information display unit 1023, the second contour information receiving unit 1024, the parameter value acquisition unit 1025, and the third contour information display unit 1026. By realizing these functions, the controller 102 operates as an image processing system that performs image processing of contours in an observation image of the sample 130 obtained by the SEM 100. In this embodiment, the controller 102 will be described as being connected to the SEM 100, but the present invention is not limited to this. For example, the controller 102 may be configured independent of the SEM. In this case, the controller 102 may have an image reading function for reading an image acquired by an external SEM or the like, and may be configured to implement the functions shown in FIG. 2 for the image read by this image reading function.

 画像表示部1021は、試料を荷電粒子ビーム装置で撮像した観察画像を表示する。例えば、画像表示部1021は、試料130をSEM100で撮像した試料断面画像G10(後述する。)を表示する。 The image display unit 1021 displays an observation image of the sample captured by the charged particle beam device. For example, the image display unit 1021 displays a sample cross-section image G10 (described below) of the sample 130 captured by the SEM 100.

 輪郭抽出部1022は、予め設定されたパラメータの第1設定値に基づいて、観察画像から試料の構造上の境界に関する第1輪郭情報を抽出する。輪郭抽出部1022には、複数のパラメータの設定値が含まれる。輪郭抽出部1022は、本実施の形態では、複数のパラメータの設定値に基づいて、第1輪郭情報を観察画像から抽出する。例えば、輪郭抽出部1022は、後述するように、予め設定されたパラメータ、例えば、スムース、スレッショルドの初期設定値(第1設定値)に基づいて、試料断面画像G10から試料130の構造上の境界に関する初期輪郭線20(第1輪郭情報)を抽出する。パラメータは、スムース、スレッショルドのように複数種類ある。 The contour extraction unit 1022 extracts first contour information relating to the structural boundary of the sample from the observation image based on first setting values of preset parameters. The contour extraction unit 1022 includes setting values for multiple parameters. In this embodiment, the contour extraction unit 1022 extracts first contour information from the observation image based on setting values for multiple parameters. For example, as described below, the contour extraction unit 1022 extracts an initial contour line 20 (first contour information) relating to the structural boundary of the sample 130 from the sample cross-sectional image G10 based on preset parameters, such as initial setting values (first setting values) of smooth and threshold. There are multiple types of parameters, such as smooth and threshold.

 第1輪郭情報表示部1023は、輪郭抽出部1022で抽出された第1輪郭情報を観察画像上に表示する。例えば、第1輪郭情報表示部1023は、後述するように、パラメータ値、例えば、スムース、およびスレッショルドの初期設定値を用いて抽出された初期輪郭線20を画像表示部1021に表示された試料断面画像G10上に表示する。 The first contour information display unit 1023 displays the first contour information extracted by the contour extraction unit 1022 on the observation image. For example, as described below, the first contour information display unit 1023 displays the initial contour line 20 extracted using parameter values, such as the initial settings for smooth and threshold, on the sample cross-section image G10 displayed on the image display unit 1021.

 第2輪郭情報受付部1024は、観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付ける。例えば、第2輪郭情報受付部1024は、後述するように、画像表示部1021に表示された試料断面画像G10において、視認可能なピラー11の境界に関する正解輪郭線21の入力を受け付ける。 The second contour information receiving unit 1024 receives input of second contour information relating to the boundary visible in the observation image. For example, the second contour information receiving unit 1024 receives input of the correct contour line 21 relating to the boundary of the pillar 11 visible in the sample cross-sectional image G10 (described below) displayed on the image display unit 1021, as described below.

 パラメータ値取得部1025は、観察画像における第1輪郭情報と第2輪郭情報との差異が最小化するように第1設定値と異なる第2設定値を取得する。例えば、パラメータ値取得部1025は、後述するように、初期輪郭線20と正解輪郭線21との差異が最小化するようにパラメータの初期設定値と異なるパラメータの設定値(第2設定値)を取得する。 The parameter value acquisition unit 1025 acquires a second setting value that is different from the first setting value so as to minimize the difference between the first contour information and the second contour information in the observed image. For example, as described below, the parameter value acquisition unit 1025 acquires a parameter setting value (second setting value) that is different from the initial parameter setting value so as to minimize the difference between the initial contour line 20 and the correct contour line 21.
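As an illustrative sketch only, the behaviour of the parameter value acquisition unit 1025 can be pictured as a search over candidate (smooth, threshold) values that minimizes a contour difference. The disclosure does not prescribe a particular optimizer or distance measure; the exhaustive grid search, the mean nearest-point distance, and the function names below are all assumptions made for this example.

```python
import itertools

import numpy as np

def contour_difference(c1, c2):
    """Mean distance from each point of contour c1 to the nearest
    point of contour c2 (a simple, illustrative dissimilarity)."""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    d = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=2)
    return d.min(axis=1).mean()

def acquire_parameter_values(extract, image, correct_contour,
                             smooth_values, threshold_values):
    """Return the (smooth, threshold) pair whose extracted contour is
    closest to the user-supplied correct contour.  `extract` is any
    callable (image, smooth, threshold) -> contour point array."""
    best = None
    for s, t in itertools.product(smooth_values, threshold_values):
        diff = contour_difference(extract(image, s, t), correct_contour)
        if best is None or diff < best[0]:
            best = (diff, s, t)
    return best[1], best[2]  # the second set values
```

In practice the candidate grid and the distance metric would follow whichever optimization algorithm is actually chosen for the embodiment.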

 第3輪郭情報表示部1026は、パラメータ値取得部1025により取得された第2設定値に基づいて輪郭抽出部により抽出された第3輪郭情報を観察画像上に表示する。例えば、第3輪郭情報表示部1026は、後述するように、パラメータ値取得部1025により取得されたパラメータ値、スムース、およびスレッショルドの設定値に基づいて輪郭抽出部1022により抽出された修正輪郭線22を表示する。 The third contour information display unit 1026 displays on the observed image the third contour information extracted by the contour extraction unit based on the second setting value acquired by the parameter value acquisition unit 1025. For example, as described below, the third contour information display unit 1026 displays the corrected contour line 22 extracted by the contour extraction unit 1022 based on the parameter value, smooth, and threshold setting values acquired by the parameter value acquisition unit 1025.

 <画像処理の流れ>
 図3は、機能ブロックの処理の流れを示すフローチャートである。
 図3に示すように、本実施の形態の画像処理は、
 ステップST110:画像表示部1021による画像表示処理
 ステップST120:輪郭抽出部1022による輪郭抽出処理
 ステップST130:第1輪郭情報表示部1023による第1輪郭情報表示処理
 ステップST140:第2輪郭情報受付部1024による第2輪郭情報受付処理
 ステップST150:パラメータ値取得部1025によるパラメータ値取得処理
 ステップST160:第3輪郭情報表示部1026による第3輪郭情報表示処理
 の順で実行される。
<Image processing flow>
FIG. 3 is a flowchart showing the flow of processing by the functional blocks.
As shown in FIG. 3, the image processing of this embodiment is as follows:
The steps are executed in the following order: Step ST110: Image display processing by image display unit 1021; Step ST120: Contour extraction processing by contour extraction unit 1022; Step ST130: First contour information display processing by first contour information display unit 1023; Step ST140: Second contour information reception processing by second contour information reception unit 1024; Step ST150: Parameter value acquisition processing by parameter value acquisition unit 1025; Step ST160: Third contour information display processing by third contour information display unit 1026.
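The six steps ST110 to ST160 above can be sketched as a single driver routine. This is a hypothetical outline only: the `system` object and its method names are placeholders, not an API defined by the disclosure.

```python
def run_contour_workflow(system, image, initial_params):
    """Illustrative driver for steps ST110 to ST160; `system` supplies
    the placeholder operations named below."""
    system.display_image(image)                                        # ST110
    first_contour = system.extract_contour(image, initial_params)      # ST120
    system.show_contour(first_contour)                                 # ST130
    correct_contour = system.receive_correct_contour()                 # ST140
    second_params = system.acquire_params(image, correct_contour)      # ST150
    system.show_contour(system.extract_contour(image, second_params))  # ST160
    return second_params
```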

 以下、画像処理システムであるコントローラ102の実行する処理について、より詳細に説明する。
 画像表示部1021は、試料(半導体デバイス)130の断面を、SEMの本体101を動作させて、観察し、解析目的に対して十分な像質をもつ試料断面画像を取得する。図4は、試料断面画像G10の一例を示す図である。本実施の形態では、試料断面画像(観察画像)G10は、SEM100により取得されるが、TEMやSTEMにより取得されてもよい。
The processing executed by the controller 102, which is an image processing system, will be described in more detail below.
The image display unit 1021 operates the SEM main body 101 to observe the cross section of the sample (semiconductor device) 130 and acquire a sample cross-sectional image with sufficient image quality for the purpose of analysis. Fig. 4 is a diagram showing an example of the sample cross-sectional image G10. In this embodiment, the sample cross-sectional image (observation image) G10 is acquired by the SEM 100, but it may also be acquired by a TEM or STEM.

 図4に示すように、試料断面画像G10には、突起構造(以下「ピラー」と称する。)11、および2つのピラー間の谷構造(以下「トレンチ」と称する。)12が含まれる。なお、試料断面画像G10においては、符号、および引き出し線を付す領域の背景として白色領域を設けているが、符号、および引き出し線の背景の白色領域は画像ではない。以下でも画像表示中の符号、および引き出し線に関しては、同様に白色領域を設けて符号、および引き出し線を付すこととする。 As shown in Figure 4, the sample cross-sectional image G10 includes a protrusion structure (hereinafter referred to as a "pillar") 11 and a valley structure (hereinafter referred to as a "trench") 12 between two pillars. Note that in the sample cross-sectional image G10, a white area is provided as the background for the areas where symbols and lead lines are added, but the white area behind the symbols and lead lines is not an image. In the following, with regard to symbols and lead lines in image displays, a white area will be provided and the symbols and lead lines will be added.

 試料断面画像G10の画像ファイル形式は特定の画像ファイル形式を想定しない。しかし、輪郭抽出部1022の輪郭抽出処理では画像データに含まれる高調成分が重要になる。このため、画像ファイル形式は、Tagged Image File Format(TIFF形式)、Microsoft Windows Bitmap Image(BMP形式)などの非圧縮形式の画像ファイル形式が望ましい。 No specific image file format is assumed for the image file format of the sample cross-section image G10. However, the harmonic components contained in the image data are important in the contour extraction process of the contour extraction unit 1022. For this reason, it is desirable for the image file format to be an uncompressed image file format such as Tagged Image File Format (TIFF format) or Microsoft Windows Bitmap Image (BMP format).

 次に、画像表示部1021は、図4に示す試料断面画像G10を形状解析ソフトウェアに読込む。このために形状解析ソフトウェアは、SEM100が出力する画像形式を読込むためのユーザインタフェース(UI)を有する。図5は、ユーザインタフェースの一例を示す図である。本実施の形態では、ユーザインタフェースとして、グラフィカルユーザインタフェース(GUI:表示装置)1251を用いる。 Next, the image display unit 1021 loads the sample cross-sectional image G10 shown in Figure 4 into the shape analysis software. To this end, the shape analysis software has a user interface (UI) for loading the image format output by the SEM 100. Figure 5 is a diagram showing an example of the user interface. In this embodiment, a graphical user interface (GUI: display device) 1251 is used as the user interface.

 図5に示すように、GUI1251は、インポートイメージボタン1251A、ディテクトボタン1251B、エディットボタン1251C、および設定調整ボタン1251Dのボタンを有する。インポートイメージボタン1251Aは、試料断面画像G10を形状解析ソフトウェアに読み込ませ、観察画像を表示する指示をするボタンである。ディテクトボタン1251Bは、試料断面画像G10から構造上の単位、言い換えれば、構造上の境界の輪郭線を抽出する指示をするボタンである。エディットボタン1251Cは、例えば、第2輪郭情報受付部1024に輪郭線の編集の受け付けを指示するボタンである。設定調整ボタン1251Dは、例えば、パラメータ値取得部1025にパラメータ値の設定値の調整を指示するボタンである。また、GUI1251は、表示部1251Eを有する。表示部1251Eには、画像表示部1021の制御に基づいて、例えば、形状解析ソフトウェアに読み込ませた試料断面画像G10が表示される。 As shown in FIG. 5, GUI 1251 has buttons for import image button 1251A, detect button 1251B, edit button 1251C, and setting adjustment button 1251D. Import image button 1251A is a button that instructs the shape analysis software to load the sample cross-sectional image G10 and display the observed image. Detect button 1251B is a button that instructs the software to extract structural units, in other words, the contour lines of structural boundaries, from the sample cross-sectional image G10. Edit button 1251C is a button that instructs, for example, the second contour information receiving unit 1024 to accept editing of the contour lines. Setting adjustment button 1251D is a button that instructs, for example, the parameter value acquisition unit 1025 to adjust the setting values of parameter values. GUI 1251 also has a display unit 1251E. The display unit 1251E displays, for example, a sample cross-sectional image G10 loaded into shape analysis software under the control of the image display unit 1021.

 試料断面画像G10に対する解析は、本実施の形態では、図4に示されるピラー11、およびトレンチ12を構造上の単位(以下、「単位構造」と称する。)として行われる。以下、これらの単位構造まわりの輪郭抽出に特化したアルゴリズムによる輪郭抽出処理を説明する。輪郭抽出処理は、輪郭抽出部1022により実行される。 In this embodiment, the analysis of the sample cross-sectional image G10 is performed using the pillars 11 and trenches 12 shown in Figure 4 as structural units (hereinafter referred to as "unit structures"). Below, we will explain the contour extraction process using an algorithm specialized for extracting contours around these unit structures. The contour extraction process is performed by the contour extraction unit 1022.

 輪郭抽出部1022は、図4に示す試料断面画像G10に対して、画像水平方向に基準線を設定する。図6は、基準線13の一例を示す図である。輪郭抽出部1022は、基準線13に対し、ピラー11に対しては基準線13の上方の上方部分14、トレンチ12に対しては基準線13の下方の下方部分15において、単位構造の輪郭線を抽出する。 The contour extraction unit 1022 sets a reference line in the horizontal direction of the image for the sample cross-sectional image G10 shown in Figure 4. Figure 6 is a diagram showing an example of the reference line 13. The contour extraction unit 1022 extracts the contour lines of the unit structures in the upper portion 14 above the reference line 13 for the pillars 11, and in the lower portion 15 below the reference line 13 for the trenches 12.

 次に、輪郭抽出部1022は、図6に示すように、輪郭抽出をしたいピラー11、または、トレンチ12の先端部付近に基準点16をそれぞれ設定する。基準点16の個数は何個でもよい。基準線13、および基準点16を設定することにより、輪郭抽出部1022は、後述するラインプロファイル17,18を適切に設定することができ、試料断面画像G10から適切なピラー11、トレンチ12といった構造の輪郭を抽出することが可能になる。 Next, as shown in FIG. 6, the contour extraction unit 1022 sets reference points 16 near the tip of the pillar 11 or trench 12 for which contour extraction is desired. Any number of reference points 16 can be used. By setting the reference lines 13 and reference points 16, the contour extraction unit 1022 can appropriately set line profiles 17 and 18, which will be described later, and can appropriately extract the contours of structures such as pillars 11 and trenches 12 from the sample cross-sectional image G10.

 以下、ピラー11まわりの輪郭抽出の処理について説明を続ける。
 輪郭抽出部1022は、基準線13、および基準点16を設定した後、基準点16より下方であれば画像水平方向、および基準点16より上方であれば基準点16まわりの動径方向に沿って、一定幅での画素値のラインプロファイルを取得する。
The process of extracting the contour around the pillar 11 will be explained below.
After setting the reference line 13 and the reference point 16, the contour extraction unit 1022 acquires a line profile of pixel values at a certain width along the horizontal direction of the image if the line is below the reference point 16, and along the radial direction around the reference point 16 if the line is above the reference point 16.

 図7は、ラインプロファイル、および輪郭線の一例を示す図である。図7には、画像水平方向のラインプロファイル17、動径方向のラインプロファイル18が示されている。例えば、輪郭抽出部1022は、基準点16の下方では、1ピクセル刻みで水平方向のラインプロファイル17を取得し、基準点16の上方では1度刻みで動径方向のラインプロファイル18を取得する。また、複数の輪郭点19により輪郭線が形成されている。 Figure 7 shows an example of a line profile and a contour line. Figure 7 shows a line profile 17 in the horizontal direction of the image and a line profile 18 in the radial direction. For example, the contour extraction unit 1022 acquires horizontal line profiles 17 in one-pixel increments below the reference point 16, and acquires radial line profiles 18 in one-degree increments above the reference point 16. In addition, a contour line is formed by multiple contour points 19.
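A minimal sketch of the two sampling schemes described above, assuming nearest-neighbour sampling, angles measured from the +x axis, and image rows growing downward; these conventions and the helper names are assumptions for illustration, not details fixed by the disclosure.

```python
import numpy as np

def horizontal_profile(image, y0):
    """Raw horizontal line profile: the pixel values along row y0."""
    return image[y0, :]

def radial_profile(image, cx, cy, angle_deg, length):
    """Pixel values sampled along a ray of `length` pixels starting at
    the reference point (cx, cy), at `angle_deg` degrees from the +x
    axis.  Rows grow downward, so "upward" means decreasing y; samples
    falling outside the image are clamped to its border."""
    theta = np.deg2rad(angle_deg)
    r = np.arange(length)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy - r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]
```

Calling `radial_profile` once per 1-degree step above the reference point, and `horizontal_profile` once per row below it, yields the family of profiles shown in FIG. 7.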

 ここで、輪郭抽出部1022が、基準点16の下方における、ある水平線上のラインプロファイル17を取得する処理について説明する。図8は、水平方向のラインプロファイル17の取得例を説明するための図である。 Here, we will explain the process by which the contour extraction unit 1022 acquires a line profile 17 on a horizontal line below the reference point 16. Figure 8 is a diagram illustrating an example of acquiring a horizontal line profile 17.

 図8における(x,y)座標系は、試料断面画像G10のピクセル座標に対応する。ピクセル座標(x,y)における画素値をI(x,y)と表す。水平線分y=y_0(x_i<x<x_t)上でラインプロファイル17を取得する場合で説明する。この場合、輪郭抽出部1022は、ノイズ除去のため、同水平線分に対し垂直な方向に関し、幅2y_sピクセルの範囲で画素値の平均をとって、平均値をプロファイル値とする。すなわち、水平線分y=y_0(x_i<x<x_t)上の点(x,y_0)におけるプロファイル値をI_P(x)とすると、I_P(x)は、

  I_P(x) = (1/(2y_s+1)) Σ_{j=−y_s}^{+y_s} I(x, y_0+j)  …(1)

により計算される。この平均計算の範囲を決めるピクセル数y_sはパラメータとして設定する必要がある。以下、ピクセル数y_sを「スムース」と称する。スムースは、画像のノイズを除去するためのパラメータである。スムースの値を設定することにより、輪郭抽出部1022は、例えば、試料断面画像G10から所望のノイズを除去することが可能になる。
The (x, y) coordinate system in Figure 8 corresponds to the pixel coordinates of the sample cross-sectional image G10. The pixel value at pixel coordinates (x, y) is represented as I(x, y). A case will be described in which a line profile 17 is acquired on a horizontal line segment y = y_0 (x_i < x < x_t). In this case, in order to remove noise, the contour extraction unit 1022 takes the average of the pixel values over a range of width 2y_s pixels in the direction perpendicular to the horizontal line segment, and uses the average as the profile value. In other words, denoting the profile value at a point (x, y_0) on the horizontal line segment y = y_0 (x_i < x < x_t) by I_P(x), I_P(x) is calculated by

  I_P(x) = (1/(2y_s+1)) Σ_{j=−y_s}^{+y_s} I(x, y_0+j)  …(1)

The number of pixels y_s, which determines the range of this averaging, must be set as a parameter. Hereinafter, the number of pixels y_s is referred to as "smooth." Smooth is a parameter for removing noise from an image. By setting the value of smooth, the contour extraction unit 1022 can, for example, remove unwanted noise from the sample cross-sectional image G10.
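As an illustrative sketch only (not the embodiment's actual implementation; the function and argument names are hypothetical), the smoothed horizontal line profile described above can be computed in Python with NumPy:

```python
import numpy as np

def line_profile(img, y0, x_lo, x_hi, y_s):
    """Profile values I_P(x) on the horizontal segment y = y0, x_lo <= x < x_hi:
    for each x, average the pixel values of the 2*y_s + 1 rows
    y0 - y_s .. y0 + y_s, i.e. a band perpendicular to the segment.
    y_s is the "smooth" parameter that suppresses noise."""
    band = img[y0 - y_s : y0 + y_s + 1, x_lo:x_hi].astype(float)
    return band.mean(axis=0)
```

A larger `y_s` averages more rows and suppresses more noise, at the cost of blurring vertical detail along the contour.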

 ここで、図9は、水平方向のラインプロファイル17の一例を示す図である。このラインプロファイル17において、輪郭抽出部1022は、プロファイル値の最大値I_P,MAXおよび最小値I_P,MINを取得し、最小値を0%、最大値を100%としたとき、T%(0<T<100)に相当するプロファイル値をとる点をエッジ点として検出する。すなわちエッジ点をx_eとしたとき、I_P(x_e)は、以下の式を満たす:

  I_P(x_e) = I_P,MIN + (T/100)×(I_P,MAX − I_P,MIN)  …(2)
 スムースと同様に、エッジ点x_eの検出に用いる割合Tもパラメータとして設定する必要がある。このパラメータTを、以下、「スレッショルド」と称する。スレッショルドは、画像の領域を区別する閾値を定めるパラメータである。スレッショルドの値を設定することにより、輪郭抽出部1022は、例えば、試料断面画像G10からピラー11の領域と、ピラー11以外の領域とを区別することが可能になる。輪郭抽出部1022は、以上の過程を基準点16の下方のすべての水平線分y=y_0で繰り返すことにより、基準点16の下方のラインプロファイル17のエッジ点x_eを得ることができる。
Figure 9 is a diagram showing an example of a horizontal line profile 17. In this line profile 17, the contour extraction unit 1022 acquires the maximum value I_P,MAX and the minimum value I_P,MIN of the profile, and, taking the minimum as 0% and the maximum as 100%, detects as an edge point the point whose profile value corresponds to T% (0 < T < 100). In other words, when the edge point is x_e, I_P(x_e) satisfies the following formula:

  I_P(x_e) = I_P,MIN + (T/100)×(I_P,MAX − I_P,MIN)  …(2)

As with smooth, the ratio T used to detect the edge point x_e also needs to be set as a parameter; this parameter T is hereinafter referred to as the "threshold." The threshold is a parameter that determines the level for distinguishing between image regions. By setting the threshold value, the contour extraction unit 1022 can distinguish, for example, the pillar 11 region from the regions other than the pillar 11 in the sample cross-sectional image G10. The contour extraction unit 1022 can obtain the edge points x_e of the line profiles 17 below the reference point 16 by repeating the above process for all horizontal line segments y = y_0 below the reference point 16.
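The T% edge detection on a single profile can be sketched as follows (a hypothetical helper; the embodiment's detection may differ, for example in scan direction or sub-pixel interpolation):

```python
def edge_point(profile, T):
    """Return the first index x_e whose profile value reaches the level
    lying T% of the way between the profile minimum (0%) and maximum
    (100%). Returns None only if nothing reaches the level, which cannot
    happen for 0 < T < 100 since the maximum itself does."""
    p_min, p_max = min(profile), max(profile)
    level = p_min + (T / 100.0) * (p_max - p_min)
    for x, v in enumerate(profile):
        if v >= level:
            return x
    return None
```

Raising the threshold T pushes the detected edge toward the bright (structure) side of the transition; lowering it pushes the edge toward the dark side.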

 基準点16の上方におけるエッジ点x_eの検出についても、輪郭抽出部1022は、基準点16の下方における処理と同様の処理で行う。図10は、動径方向のラインプロファイル18の一例を示す図である。 The contour extraction unit 1022 detects edge points x_e above the reference point 16 in the same manner as in the processing below the reference point 16. Fig. 10 is a diagram showing an example of a line profile 18 in the radial direction.

 図10に示すように、基準点16の上方の場合、輪郭抽出部1022は、ラインプロファイル18を取得する(基準点16を中心とした)動径線分を中央として、2θ_s degの範囲で画素値平均をとることでノイズ除去を行う。この場合のスムースは、θ_sそのものであってもよいし、θ_sの定数倍であってもよい。 As shown in Fig. 10, in the case of the area above the reference point 16, the contour extraction unit 1022 performs noise removal by averaging the pixel values over a range of 2θ_s degrees centred on the radial line segment (from the reference point 16) along which the line profile 18 is acquired. The smooth in this case may be θ_s itself or a constant multiple of θ_s.
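A radial profile with angular averaging can be sketched as follows (illustrative only; the radial sampling step, the number of averaged rays, and nearest-pixel rounding are assumptions, not the embodiment's actual implementation):

```python
import math
import numpy as np

def radial_profile(img, cx, cy, angle_deg, r_max, theta_s, n_rays=5):
    """Profile along the radial direction angle_deg from the reference
    point (cx, cy), noise-reduced by averaging n_rays rays spread over
    the 2*theta_s-degree range centred on that direction."""
    angles = np.linspace(angle_deg - theta_s, angle_deg + theta_s, n_rays)
    prof = []
    for r in range(1, r_max):
        acc, n = 0.0, 0
        for a in angles:
            x = int(round(cx + r * math.cos(math.radians(a))))
            y = int(round(cy - r * math.sin(math.radians(a))))  # image y grows downward
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                acc += float(img[y, x])
                n += 1
        prof.append(acc / n if n else 0.0)
    return prof
```

The same T% edge detection used for horizontal profiles can then be applied to the returned list.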

 以上の過程から、輪郭抽出部1022は、基準点16の上方および下方において、各ラインプロファイル17,18の輪郭点19を得ることができる。第1輪郭情報表示部1023は、このように得た輪郭点19の群を、既述の図7に示すように表示することができる。図7には、ピラー11の形状に概ね沿うように複数の輪郭点19が表示されている。 Through the above process, the contour extraction unit 1022 can obtain contour points 19 of each line profile 17, 18 above and below the reference point 16. The first contour information display unit 1023 can display the group of contour points 19 obtained in this way, as shown in Figure 7, which has already been described. In Figure 7, multiple contour points 19 are displayed so as to roughly follow the shape of the pillar 11.

 輪郭抽出処理に用いられる2つのパラメータ(スムース、およびスレッショルド)が、本実施の形態の技術の適用によって最適値に調整される。形状解析ソフトウェアでは、スムース、およびスレッショルドの初期設定値がそれぞれ決められている。このため、まず、輪郭抽出部1022は、2つのパラメータの初期設定値で輪郭抽出処理を実行し、複数の輪郭点19により形成される初期輪郭線(第1輪郭情報)を抽出する。 The two parameters (smooth and threshold) used in the contour extraction process are adjusted to optimal values by applying the technology of this embodiment. The shape analysis software has default settings for smooth and threshold. Therefore, the contour extraction unit 1022 first executes the contour extraction process using the default settings for the two parameters, and extracts an initial contour line (first contour information) formed by multiple contour points 19.

 次に、第1輪郭情報表示部1023は、抽出された初期輪郭線を、図5に示すGUI1251の表示部1251Eに表示された試料断面画像G10の上に表示する。図11は、2つのパラメータの初期設定値を用いて算出された初期輪郭線20の一例を示す図である。図11の表示は、例えば、ディテクトボタン1251Bが入力されると表示される。 Next, the first contour information display unit 1023 displays the extracted initial contour on the sample cross-section image G10 displayed on the display unit 1251E of the GUI 1251 shown in FIG. 5. FIG. 11 is a diagram showing an example of an initial contour 20 calculated using the initial setting values of two parameters. The display in FIG. 11 is displayed, for example, when the Detect button 1251B is pressed.

 図11に示すように、表示部1251Eに表示された初期輪郭線20が、ユーザが視認可能なピラー11の形状とは異なっている場合、言い換えれば、ユーザの想定・期待する輪郭線とは異なっている場合、ユーザは、入力デバイス124の操作によって初期輪郭線20を修正する。入力デバイス124は、例えば、マウスである。ユーザは、マウスのポインタをペン先として、表示部1251Eに表示された初期輪郭線20を押し出すような操作を行うことにより、表示された初期輪郭線20を所望の輪郭線に整形することができる。ここでは、初期輪郭線20がピラー11の形状からずれているため、ユーザは、マウスを用いて、初期輪郭線20をピラー11の形状に沿うように整形し、後述する図13に表示する正解輪郭線21(第2輪郭情報)を作成する。 As shown in FIG. 11, if the initial contour line 20 displayed on the display unit 1251E differs from the shape of the pillar 11 visible to the user, in other words, if it differs from the contour line the user envisions or expects, the user modifies the initial contour line 20 by operating the input device 124. The input device 124 is, for example, a mouse. The user uses the mouse pointer as a pen tip to push the initial contour line 20 displayed on the display unit 1251E into shape, thereby shaping the displayed initial contour line 20 into the desired contour line. In this case, since the initial contour line 20 deviates from the shape of the pillar 11, the user uses the mouse to shape the initial contour line 20 so that it follows the shape of the pillar 11, thereby creating the correct contour line 21 (second contour information) to be displayed in FIG. 13, which will be described later.

 初期輪郭線20の整形は、例えば、初期輪郭線20を構成する各輪郭点19の移動によって実現される。各輪郭点19の移動は、一定の範囲に制限される。図12は、輪郭点の移動が制限される範囲の一例を示す図である。図12の表示は、例えば、エディットボタン1251Cが入力され、輪郭点19が選択されることにより、表示される。図12においては、両矢印AW1、AW2により輪郭点19の移動が制限される範囲をそれぞれ示している。両矢印AW1は、水平方向のラインプロファイル17から得られた輪郭点19が移動できる範囲を示している。両矢印AW2は、動径方向のラインプロファイル18から得られた輪郭点19が移動できる範囲を示している。 The shaping of the initial contour line 20 is achieved, for example, by moving each of the contour points 19 that make up the initial contour line 20. The movement of each contour point 19 is restricted to a certain range. Figure 12 is a diagram showing an example of the range in which the movement of a contour point is restricted. The display in Figure 12 is displayed, for example, by pressing edit button 1251C and selecting a contour point 19. In Figure 12, the double-headed arrows AW1 and AW2 indicate the range in which the movement of the contour point 19 is restricted. The double-headed arrow AW1 indicates the range in which the contour point 19 obtained from the horizontal line profile 17 can be moved. The double-headed arrow AW2 indicates the range in which the contour point 19 obtained from the radial line profile 18 can be moved.

 両矢印AW1は、輪郭点191を中心に基準点16から水平方向に延在している。輪郭点191は、基準点16より下側に位置している複数の輪郭点19のうちの1つである。両矢印AW1の一端は、基準点16から垂直方向に延出される線まで距離L1だけ延在している。両矢印AW1の他端は、基準点16から前記一端と反対側に距離L1の位置まで延在している。基準点16の下方で基準点16から見て左側の輪郭点191は、上下には移動できず、輪郭点191の図12に示す位置から水平移動させて基準点16より右側に移動させることができない。 The double-headed arrow AW1 extends horizontally from the reference point 16, with the contour point 191 at its center. Contour point 191 is one of multiple contour points 19 located below the reference point 16. One end of the double-headed arrow AW1 extends a distance L1 to a line extending vertically from the reference point 16. The other end of the double-headed arrow AW1 extends a distance L1 from the reference point 16 on the opposite side to the one end. Contour point 191, below the reference point 16 and to the left of the reference point 16 as viewed from the reference point 16, cannot move up or down, and cannot be moved horizontally from the position of contour point 191 shown in Figure 12 to the right of the reference point 16.

 両矢印AW2は、輪郭点192を中心に基準点16から動径方向に延在している。輪郭点192は、基準点16より上側に位置している複数の輪郭点19のうちの1つである。両矢印AW2の一端は、基準点16まで距離L2だけ延在している。両矢印AW2の他端は、基準点16から前記一端と反対側に距離L2の位置まで延在している。基準点16の上方の輪郭点192は、基準点16と図12に示す輪郭点192の位置とを結ぶ直線上のみを移動可能で、図12に示す輪郭点192の位置から、基準点16を超えた下側に輪郭点192を移動させることができない。 The double-headed arrow AW2 extends in the radial direction from the reference point 16, centered on the contour point 192. Contour point 192 is one of the multiple contour points 19 located above the reference point 16. One end of the double-headed arrow AW2 extends a distance L2 to the reference point 16. The other end of the double-headed arrow AW2 extends a distance L2 from the reference point 16 on the side opposite to the one end. Contour point 192 above the reference point 16 can move only along the straight line connecting the reference point 16 and the position of contour point 192 shown in FIG. 12; contour point 192 cannot be moved from the position shown in FIG. 12 to below and beyond the reference point 16.

 両矢印AW1,AW2、および部分NG1,NG2により、ユーザは、輪郭点191,192を移動させることができる範囲を視認することができる。また、輪郭点191,192を移動させることができる範囲を制限することにより、ユーザによる誤った輪郭点191,192の入力を防止することができる。ここで、誤った輪郭点191,192の入力とは、例えば、明らかにピラー11などの構造の輪郭に無関係な位置へ輪郭点191,192が移動される場合である。このような誤った移動位置への入力を防止することができるため、輪郭抽出部1022は、輪郭抽出の処理を正常に実行することができる。 The double-headed arrows AW1, AW2 and the portions NG1, NG2 allow the user to visually confirm the range in which the contour points 191, 192 can be moved. Furthermore, by limiting the range in which the contour points 191, 192 can be moved, erroneous input of the contour points 191, 192 by the user can be prevented. Here, erroneous input of the contour points 191, 192 means, for example, moving the contour points 191, 192 to positions that are clearly unrelated to the contour of a structure such as the pillar 11. Because such erroneous movement positions can be prevented, the contour extraction unit 1022 can execute the contour extraction process properly.
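One plausible reading of the horizontal movement constraint for a below-reference contour point can be sketched as follows (hypothetical; the exact rule of the embodiment may differ):

```python
def clamp_horizontal(x_new, x_old, x_ref, L1):
    """Clamp the dragged x of a contour point lying below the reference
    point: it may move at most L1 pixels horizontally from its original
    position x_old, and may not cross the vertical line x = x_ref through
    the reference point. The y coordinate stays fixed by the caller."""
    lo, hi = x_old - L1, x_old + L1
    if x_old <= x_ref:          # left-side point: never to the right of x_ref
        hi = min(hi, x_ref)
    else:                       # right-side point: never to the left of x_ref
        lo = max(lo, x_ref)
    return max(lo, min(hi, x_new))
```

An analogous clamp for the radial points would project the dragged position onto the ray through the reference point before limiting the distance.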

 ユーザがマウスを用いて、図12において両矢印AW1,AW2で例示する制限範囲内で正解輪郭線を入力する。ユーザにより、初期輪郭線20が、例えば、ピラー11の形状に沿うように修正される。正解輪郭線は、例えば、複数の輪郭点19がそれぞれ移動された後の輪郭点19の位置を示す情報として保持される。図13は、正解輪郭線21の一例を示す図である。図13においては、ピラー11の形状に沿った正解輪郭線21が表示されている。第2輪郭情報受付部1024は、受け付けた正解輪郭線21を形成する複数の輪郭点19を、例えば、所定のメモリに保持する。 The user uses the mouse to input a correct contour line within the limited range illustrated by the double arrows AW1 and AW2 in Figure 12. The user modifies the initial contour line 20, for example, to fit the shape of the pillar 11. The correct contour line is stored, for example, as information indicating the positions of multiple contour points 19 after each contour point 19 has been moved. Figure 13 is a diagram showing an example of a correct contour line 21. In Figure 13, the correct contour line 21 is displayed, fitting the shape of the pillar 11. The second contour information receiving unit 1024 stores the multiple contour points 19 that form the received correct contour line 21, for example, in a specified memory.

 次に、ユーザにより設定調整ボタン1251Dが入力されると、パラメータ値取得部1025は、パラメータの最適化アルゴリズムを実行する。より詳細には、パラメータ値取得部1025は、スムース、およびスレッショルドの値を初期設定値から変更し、損失を計算する処理を繰り返し、最終的に損失最小となるスムースの値とスレッショルドの値を取得する。ここで、損失最小となるスムースの値、およびスレッショルドの値は、輪郭抽出部1022が輪郭を抽出する場合に、抽出される輪郭線が正解輪郭線21に最も近づくような値になる。 Next, when the user presses the setting adjustment button 1251D, the parameter value acquisition unit 1025 executes the parameter optimization algorithm. More specifically, the parameter value acquisition unit 1025 changes the smooth and threshold values from their initial settings and repeats the process of calculating the loss, ultimately acquiring the smooth and threshold values that minimize the loss. Here, the loss-minimizing smooth and threshold values are the values with which, when the contour extraction unit 1022 extracts a contour, the extracted contour line comes closest to the correct contour line 21.

 このようにパラメータ値取得部1025によりパラメータ値が取得された後、輪郭抽出部1022が再度、輪郭線の抽出を行うことにより、第3輪郭情報表示部1026が修正輪郭線を表示する。図14は、正解輪郭線21に基づいて修正された修正輪郭線22(第3輪郭情報)の一例を示す図である。図14には、ピラー11の形状に沿った修正輪郭線22が表示されている。 After the parameter values are acquired by the parameter value acquisition unit 1025 in this way, the contour extraction unit 1022 extracts the contour line again, and the third contour information display unit 1026 displays the corrected contour line. Figure 14 is a diagram showing an example of a corrected contour line 22 (third contour information) corrected based on the correct contour line 21. Figure 14 displays the corrected contour line 22 that follows the shape of the pillar 11.

 パラメータ値取得部1025の処理における一連のパラメータ値の設定の手順は、用いる最適化アルゴリズムによって異なる。最適化アルゴリズムは、例えば、最急降下法に代表される勾配法、シミュレーテッドアニーリングなどの乱数を用いた方法、または、ベイズ最適化などの統計的手法などを用いることができる。ここで、既述の輪郭抽出部1022の処理におけるアルゴリズムで算出される初期輪郭線20は、図7に示すように、特定のラインプロファイル17上で抽出された輪郭点19からなる点群であった。このため、損失は、2つの輪郭線間で対応する輪郭点間の距離の2乗和:

  L_2 = Σ_i {(x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2}  …(3)

および、対応する輪郭点間の距離の最大値:

  L_M = max_i √{(x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2}  …(4)

などを用いて求めることができる。ここで、x_i(t),y_i(t)は最適化アルゴリズム実行中のある時刻tでの、i番目の輪郭点の画像内座標であり、x_i^R,y_i^Rは正解輪郭線21のi番目の輪郭点の画像内座標である。
The procedure for setting the series of parameter values in the processing of the parameter value acquisition unit 1025 differs depending on the optimization algorithm used. The optimization algorithm may be, for example, a gradient method typified by steepest descent, a method using random numbers such as simulated annealing, or a statistical method such as Bayesian optimization. Here, the initial contour line 20 calculated by the algorithm in the processing of the contour extraction unit 1022 described above is, as shown in FIG. 7, a point group made up of contour points 19 extracted on specific line profiles 17. Therefore, the loss can be obtained using, for example, the sum of squared distances between corresponding contour points of the two contour lines:

  L_2 = Σ_i {(x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2}  …(3)

and the maximum distance between corresponding contour points:

  L_M = max_i √{(x_i(t) − x_i^R)^2 + (y_i(t) − y_i^R)^2}  …(4)

Here, x_i(t) and y_i(t) are the in-image coordinates of the i-th contour point at a certain time t during execution of the optimization algorithm, and x_i^R and y_i^R are the in-image coordinates of the i-th contour point of the correct contour line 21.
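The two point-to-point losses described here can be sketched as follows (hypothetical helper names; the point lists are assumed to be index-matched, as the text requires):

```python
import math

def loss_L2(pts, ref_pts):
    """Sum of squared distances between corresponding contour points of
    the detected contour and the correct contour."""
    return sum((x - xr) ** 2 + (y - yr) ** 2
               for (x, y), (xr, yr) in zip(pts, ref_pts))

def loss_LM(pts, ref_pts):
    """Maximum distance between corresponding contour points."""
    return max(math.hypot(x - xr, y - yr)
               for (x, y), (xr, yr) in zip(pts, ref_pts))
```

loss_L2 penalizes overall deviation, while loss_LM penalizes the single worst point; either (or a combination) can drive the optimizer.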

 図15は、最適化アルゴリズム実行中の表示部1251Eの表示の一例を示す図である。図15においては、試料断面画像G10に加え、損失計算を表示する損失計算表示G15が表示される。損失計算表示G15には、パラメータに対する損失の計算結果が表示される。損失計算表示G15には、損失結果の計算結果を示すグラフが表示される。損失結果の計算結果を示すグラフは、表示されなくてもよい。また、最適化アルゴリズムによっては、全ての損失が計算されない場合、図15の実線に示すような滑らかな曲線は描画されず、各時刻tで計算した損失値が、図15のA´などで示したように点描されるケースもある。 FIG. 15 is a diagram showing an example of the display of the display unit 1251E while an optimization algorithm is running. In FIG. 15, in addition to the sample cross-section image G10, a loss calculation display G15 that displays the loss calculation is displayed. The loss calculation display G15 displays the calculation results of the loss for the parameters. The loss calculation display G15 displays a graph showing the calculation results of the loss result. The graph showing the calculation results of the loss result does not have to be displayed. Also, depending on the optimization algorithm, if not all losses are calculated, a smooth curve such as the solid line in FIG. 15 is not drawn, and the loss values calculated at each time t may be dotted as shown by A' in FIG. 15, etc.

 図15に示すように、最適化アルゴリズム実行中のある時刻tにおける初期輪郭線20(図示のAなどからなる)と正解輪郭線21とから、パラメータ値取得部1025は、既述の式(3)で示すL_2、および式(4)で示すL_Mなどを用いて損失を計算する。表示Aは、初期輪郭線20を形成する輪郭点19の1つである。損失は、例えば、損失計算表示G15において、損失結果を示すグラフ上に四角形の表示A´で表示される。 As shown in FIG. 15, from the initial contour line 20 (consisting of A and the like in the figure) and the correct contour line 21 at a certain time t during execution of the optimization algorithm, the parameter value acquisition unit 1025 calculates the loss using, for example, L_2 shown in the above-mentioned equation (3) and L_M shown in equation (4). The indicator A is one of the contour points 19 forming the initial contour line 20. The loss is displayed, for example, as a square indicator A' on the graph showing the loss results in the loss calculation display G15.

 パラメータ値取得部1025は、最適化アルゴリズムに従ってパラメータ値を更新し、1ステップ進んだ時刻t+1で検出した輪郭線(図示のBなどからなる)に対しても、同様に損失を計算する。点Bは、時刻t+1で検出された輪郭線を形成する輪郭点19の1つである。損失は、例えば、損失計算表示G15に点B´で表示される。 The parameter value acquisition unit 1025 updates the parameter values according to the optimization algorithm, and similarly calculates the loss for the contour line (consisting of B and the like in the figure) detected at time t+1, one step later. Point B is one of the contour points 19 forming the contour line detected at time t+1. The loss is displayed, for example, as point B' in the loss calculation display G15.

 パラメータ値取得部1025は、パラメータ値を更新した後、損失を算出する処理を繰り返す。十分な回数損失を算出する処理を繰り返すことで、パラメータ値取得部1025は、損失が最小となるパラメータ値を得ることができる。例えば、損失計算表示G15では、損失が最小になるパラメータ値は点O´で表示される。損失が最小となるパラメータ値を用いて、輪郭抽出部1022が抽出した輪郭線が、正解輪郭線21に近づけた修正輪郭線22になる。 After updating the parameter values, the parameter value acquisition unit 1025 repeats the process of calculating the loss. By repeating this process a sufficient number of times, the parameter value acquisition unit 1025 can obtain the parameter values that minimize the loss. For example, in the loss calculation display G15, the parameter value that minimizes the loss is displayed as point O'. The contour line extracted by the contour extraction unit 1022 using the loss-minimizing parameter values becomes the corrected contour line 22, brought close to the correct contour line 21.
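The text leaves the optimizer open (gradient methods, simulated annealing, Bayesian optimization); as the simplest illustrative stand-in, an exhaustive grid search over the two parameters can be sketched as follows (all names are hypothetical):

```python
def optimize_parameters(extract_contour, correct_contour, loss,
                        smooth_values, threshold_values):
    """Try every (smooth, threshold) pair, extract a contour with it,
    and keep the pair whose contour minimizes the loss against the
    correct contour entered by the user."""
    best_params, best_loss = None, float("inf")
    for s in smooth_values:
        for t in threshold_values:
            l = loss(extract_contour(s, t), correct_contour)
            if l < best_loss:
                best_params, best_loss = (s, t), l
    return best_params, best_loss
```

A real implementation would replace the double loop with the chosen optimizer while keeping the same loss-evaluation interface.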

 以上のように説明した場合以外でも、SEM100のような荷電粒子ビーム装置から得られた試料断面画像G10上の輪郭抽出において、ほぼ同様の手順でパラメータ値の自動最適化が可能である。ただし、既述した輪郭抽出処理で用いたアルゴリズム以外の輪郭線抽出アルゴリズムに対しては、与えられた2つの輪郭線間で輪郭点の対応付けができない場合も生じる。この場合、式(3)で示したL_2、および式(4)で示したL_Mの損失計算ができない。したがって、輪郭点19の対応付けができない場合、パラメータ値取得部1025は、試料断面画像G10において輪郭線を構成する画素の画素値を1、そのほかの画素値を0とした二値配列を、初期輪郭線20、および正解輪郭線21それぞれに関して作成し、作成した配列の要素積の総和にマイナスをつけた損失L_Rを

  L_R = − Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} I_B(x,y;t)・I_B^R(x,y)  …(5)

により計算することができる。ここで、(x,y)は、試料断面画像G10における画像内座標、W,Hはそれぞれ画像の幅、高さ(ともにピクセル数)であり、I_B(x,y;t)は最適化アルゴリズム実行中のある時刻tでの、初期輪郭線20から作成した二値配列の(x,y)画像内座標に対応する要素値、I_B^R(x,y)は正解輪郭線21から作成した二値配列の(x,y)画像内座標に対応する要素値である。
Even in cases other than those described above, automatic optimization of parameter values is possible by substantially the same procedure for contour extraction on a sample cross-sectional image G10 obtained from a charged particle beam device such as the SEM 100. However, for contour extraction algorithms other than the algorithm used in the contour extraction process described above, there may be cases where contour points cannot be matched between the two given contour lines. In such cases, the losses L_2 shown in equation (3) and L_M shown in equation (4) cannot be calculated. Therefore, when the contour points 19 cannot be matched, the parameter value acquisition unit 1025 creates, for each of the initial contour line 20 and the correct contour line 21, a binary array in which the pixels constituting the contour line in the sample cross-sectional image G10 have the value 1 and all other pixels have the value 0, and calculates a loss L_R as minus the sum of the element-wise products of the two arrays:

  L_R = − Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} I_B(x,y;t)・I_B^R(x,y)  …(5)

where (x, y) are in-image coordinates in the sample cross-sectional image G10, W and H are the width and height of the image (both in pixels), I_B(x, y; t) is the element value corresponding to the in-image coordinates (x, y) of the binary array created from the initial contour line 20 at a certain time t during execution of the optimization algorithm, and I_B^R(x, y) is the element value corresponding to the in-image coordinates (x, y) of the binary array created from the correct contour line 21.
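The binary-array loss can be sketched as follows (hypothetical helper; for brevity only the given contour points are rasterized, whereas a real implementation would rasterize the full polyline between them):

```python
import numpy as np

def loss_LR(contour_pts, ref_pts, shape):
    """Rasterize each contour into a binary array (contour pixels = 1,
    all others = 0) and return minus the sum of the element-wise
    product, so that more overlap with the correct contour gives a
    lower (more negative) loss."""
    def rasterize(pts):
        b = np.zeros(shape, dtype=int)
        for x, y in pts:
            b[y, x] = 1
        return b
    return -int((rasterize(contour_pts) * rasterize(ref_pts)).sum())
```

Because only exact pixel overlap counts, this loss is coarser than the point-distance losses but needs no point correspondence.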

 本実施の形態によると、コントローラ102は、ユーザが想定、期待する正解輪郭線21との差異が最小化するように、言い換えれば、正解輪郭線21に最も近づけた修正輪郭線22が抽出されるパラメータの設定値を取得することができる。このようにパラメータの設定値を取得する技術は、SEM100などの荷電粒子ビーム装置で撮像される、あらゆる種類の観察画像、あらゆる形状の輪郭線に対しても適用可能である。また、コントローラ102は、取得したパラメータ値を用いて抽出した修正輪郭線22を表示部1251Eに表示することができる。これにより、ユーザは、想定、期待する正解輪郭線21に修正輪郭線22がどこまで近づいているかを視認することができる。 According to this embodiment, the controller 102 can acquire parameter setting values that minimize the difference from the correct contour line 21 assumed and expected by the user; in other words, that extract a corrected contour line 22 that is closest to the correct contour line 21. This technology for acquiring parameter setting values can be applied to any type of observation image and any shape of contour line captured by a charged particle beam device such as SEM 100. The controller 102 can also display the corrected contour line 22 extracted using the acquired parameter values on the display unit 1251E. This allows the user to visually see how close the corrected contour line 22 is to the correct contour line 21 assumed and expected.

 (第2の実施の形態)
 第1の実施の形態で説明した試料130の試料断面画像G10での輪郭抽出処理において、全ての輪郭を抽出する必要がない場合がある。第2の実施の形態は、このような場合についての画像処理システムの処理について説明する。
Second Embodiment
In the contour extraction process for the sample cross-sectional image G10 of the sample 130 described in the first embodiment, there are cases where it is not necessary to extract all contours. In the second embodiment, processing by the image processing system for such cases will be described.

 <試料断面画像の点間計測での適用例>
 図16は、試料断面画像(観察画像)G20の一例を示す図である。図16に示すように、あるピラー11の輪郭点193から他のピラー11の輪郭点194までの距離AW3の測長を行う場合、かつ、測長する2点の輪郭点の位置がほぼ決まっている場合には、輪郭抽出部1022は、すべての輪郭点を抽出する必要がない。かわりに、輪郭抽出部1022は、試料断面画像G20内の指定された小領域内で、各ピラー11において1点の輪郭点193,194をそれぞれ抽出すれば十分である。
<Example of application to point-to-point measurement of sample cross-sectional images>
16 is a diagram showing an example of a sample cross-sectional image (observation image) G20. As shown in FIG. 16, when measuring the distance AW3 from a contour point 193 of one pillar 11 to a contour point 194 of another pillar 11, and when the positions of the two contour points to be measured are approximately determined, the contour extraction unit 1022 does not need to extract all contour points. Instead, it is sufficient for the contour extraction unit 1022 to extract one contour point 193, 194 for each pillar 11 within a specified small region in the sample cross-sectional image G20.

 この場合に、輪郭抽出部1022が輪郭点19を抽出する輪郭点検出アルゴリズムとして最も簡単なアルゴリズムは、小領域の画素に対して平滑化フィルタ処理を施す処理である。ここで、平滑化フィルタ処理は、画像をぼかしてなめらかにするためのフィルタ処理である。輪郭抽出部1022は、小領域に対して平滑化フィルタ処理を施すことにより、小領域の画像をなめらかにすることができる。このフィルタ処理の後、輪郭抽出部1022は、もっとも画素値の高い画素を輪郭点193(あるいは、輪郭点194)として検出する。画像I(x,y)に対して、平滑化フィルタ処理を適用して得られる画像I_S(x,y)は、

  I_S(x,y) = (1/K^2) Σ_{i=−(K−1)/2}^{(K−1)/2} Σ_{j=−(K−1)/2}^{(K−1)/2} I(x+i, y+j)  …(6)

により計算される。すなわちI_S(x,y)は、もとの画像I(x,y)の各ピクセルに対し、ピクセルを中心とする面積K×Kの画素領域で画素値の平均をとった画像である。ここで用いられるKは正の奇数であり、カーネルサイズと呼ばれるパラメータである。このパラメータ値を第1の実施の形態と同様の最適化アルゴリズムで、パラメータ値取得部1025が調整することができる。
In this case, the simplest contour point detection algorithm by which the contour extraction unit 1022 extracts contour points 19 is to apply a smoothing filter to the pixels of the small region. Here, the smoothing filter is a filter for blurring and smoothing an image. By applying the smoothing filter to the small region, the contour extraction unit 1022 can smooth the image of the small region. After this filtering, the contour extraction unit 1022 detects the pixel with the highest pixel value as the contour point 193 (or contour point 194). The image I_S(x, y) obtained by applying the smoothing filter to the image I(x, y) is calculated by

  I_S(x,y) = (1/K^2) Σ_{i=−(K−1)/2}^{(K−1)/2} Σ_{j=−(K−1)/2}^{(K−1)/2} I(x+i, y+j)  …(6)

That is, I_S(x, y) is an image obtained by averaging, for each pixel of the original image I(x, y), the pixel values over a K × K pixel region centred on that pixel. K used here is a positive odd number, a parameter called the kernel size. The parameter value acquisition unit 1025 can adjust this parameter value with the same optimization algorithm as in the first embodiment.
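The K × K mean filter followed by brightest-pixel detection can be sketched as follows (illustrative only; handling the image border by edge replication is an assumption, since the definition above only covers interior pixels):

```python
import numpy as np

def smooth_and_peak(img, K):
    """Mean-filter img with a K x K kernel (K a positive odd number, the
    'kernel size'), then return (x, y) of the brightest smoothed pixel,
    which serves as the detected contour point in the small region."""
    assert K > 0 and K % 2 == 1
    r = K // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += padded[r + dy : r + dy + img.shape[0],
                          r + dx : r + dx + img.shape[1]]
    smoothed = acc / (K * K)
    y, x = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    return (int(x), int(y)), smoothed
```

The optimizer then varies K so that the returned peak lands as close as possible to the user-corrected contour point.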

 図17は、輪郭点検出アルゴリズムによって検出された輪郭点の一例を示す図である。図17には、各ピラー11において、初期輪郭点231,232(第1輪郭情報)が示されている。輪郭点検出アルゴリズムによって抽出された初期輪郭点231,232は、それぞれ、第1の実施の形態で述べたようなマウス操作によって、ユーザが初期輪郭点231,232の位置を修正することができる。このとき、初期輪郭点231,232の移動できる範囲を制限してもよい。 Figure 17 is a diagram showing an example of contour points detected by the contour point detection algorithm. Figure 17 shows initial contour points 231, 232 (first contour information) for each pillar 11. The user can modify the positions of the initial contour points 231, 232 extracted by the contour point detection algorithm by mouse operation as described in the first embodiment. In this case, the range in which the initial contour points 231, 232 can be moved may be limited.

 ユーザにより位置が修正された初期輪郭点231,232を正解輪郭点241,242(第2輪郭情報)とする。パラメータ値取得部1025は、すべての小領域に関して、検出された輪郭点と正解輪郭点241,242との距離の二乗和を損失として最小化し、正解輪郭点241,242に最も近い修正輪郭点が検出される最適なパラメータ値、つまり、カーネルサイズの値を得ることができる。このように得たパラメータの設定値に基づいて、輪郭抽出部1022が輪郭を抽出し、その抽出結果である修正輪郭点を第3輪郭情報表示部1026が表示部1251Eに表示する。 The initial contour points 231, 232 whose positions have been corrected by the user are designated as the correct contour points 241, 242 (second contour information). For all the small regions, the parameter value acquisition unit 1025 minimizes, as the loss, the sum of squared distances between the detected contour points and the correct contour points 241, 242, and can thereby obtain the optimal parameter value, i.e., the kernel size, with which corrected contour points closest to the correct contour points 241, 242 are detected. Based on the parameter setting value obtained in this way, the contour extraction unit 1022 extracts the contour points, and the third contour information display unit 1026 displays the corrected contour points that are the extraction results on the display unit 1251E.

 図18は、修正輪郭点251,252(第3輪郭情報)の表示の一例を示す図である。図18においては、正解輪郭点241,242の近傍に修正輪郭点251,252が表示されている。 Figure 18 shows an example of the display of corrected contour points 251, 252 (third contour information). In Figure 18, the corrected contour points 251, 252 are displayed near the correct contour points 241, 242.

 図19は、最適化アルゴリズム実行中の表示部1251Eの表示の一例を示す図である。図19においては、試料断面画像G20に加え、損失計算を表示する損失計算表示G25が表示される。損失計算表示G25には、パラメータに対する損失の計算結果が表示される。損失計算表示G25には、損失結果の計算結果を示すグラフが表示される。グラフには、損失が最小になるパラメータ値は点O´で表示される。この最適化アルゴリズムの処理順序などは第1の実施の形態での図15を用いた説明と同様である。損失結果の計算結果を示すグラフは、表示されなくてもよく、点描されることもあるのは、図15を用いて説明した場合と同様である。 FIG. 19 is a diagram showing an example of the display of the display unit 1251E while the optimization algorithm is running. In FIG. 19, in addition to the sample cross-section image G20, a loss calculation display G25 that displays the loss calculation is displayed. The loss calculation display G25 displays the calculation results of the loss for the parameters. The loss calculation display G25 displays a graph showing the calculation results of the loss result. The parameter value that minimizes the loss is displayed as point O' on the graph. The processing order of this optimization algorithm is the same as that explained using FIG. 15 in the first embodiment. The graph showing the calculation results of the loss result does not have to be displayed, and may be dotted, as explained using FIG. 15.

 以上説明したように、本実施の形態によると、コントローラ102は、あるピラー11の輪郭点193から他のピラー11の輪郭点194までの測長を行う場合、かつ、測長する2点の輪郭点の位置がほぼ決まっている場合には、全ての輪郭点を抽出しなくても損失が最も少ないパラメータ値を取得できる。このため、コントローラ102の画像処理の負荷を減らすことができる。 As explained above, according to this embodiment, when the controller 102 measures the distance from a contour point 193 of one pillar 11 to a contour point 194 of another pillar 11, and when the positions of the two contour points to be measured are nearly determined, it can obtain parameter values with the least loss without extracting all contour points. This reduces the image processing load on the controller 102.

 <トップビュー画像での画像処理>
 以上説明した修正輪郭点251,252の抽出処理は、試料断面画像G20に対してのみならず、試料130のトップビュー画像に対しても有効である。図20は、試料130のトップビュー画像(観察画像)G30の一例を示す図である。試料130は、半導体デバイスである。図21は、トップビュー画像G30から抽出した初期輪郭点261,262(第1輪郭情報)、および正解輪郭点271,272(第2輪郭情報)の一例を示す図である。図22は、正解輪郭点との表示上の差異が最小となる修正輪郭点281,282(第3輪郭情報)の一例を示す図である。
<Image processing using top-view images>
The extraction process for corrected contour points 251 and 252 described above is effective not only for the sample cross-sectional image G20, but also for a top-view image of the sample 130. FIG. 20 is a diagram showing an example of a top-view image (observation image) G30 of the sample 130. The sample 130 is a semiconductor device. FIG. 21 is a diagram showing an example of initial contour points 261 and 262 (first contour information) and correct contour points 271 and 272 (second contour information) extracted from the top-view image G30. FIG. 22 is a diagram showing an example of corrected contour points 281 and 282 (third contour information) that minimize the display difference from the correct contour points.

 画像表示部1021の処理により与えられる、図20に示すトップビュー画像G30においても、輪郭抽出部1022は、測長したい2点の位置を指定し、平滑化フィルタ処理後の最大輝度画素を検出して、初期輪郭点261,262を抽出する。第1輪郭情報表示部1023は、図21に示すように、初期輪郭点261,262をトップビュー画像G30に表示する。 In the top-view image G30 shown in FIG. 20, which is provided by processing by the image display unit 1021, the contour extraction unit 1022 specifies the positions of two points to be measured, detects the pixel with the highest brightness after smoothing filter processing, and extracts initial contour points 261 and 262. The first contour information display unit 1023 displays the initial contour points 261 and 262 in the top-view image G30, as shown in FIG. 21.

 ユーザは、マウス等を操作して、初期輪郭点261,262の位置を修正して、正解輪郭点271,272を入力する。これにより、第2輪郭情報受付部1024は、正解輪郭点271,272の位置を受け付ける。次に、パラメータ値取得部1025は、最適化アルゴリズムを実行することにより、正解輪郭点271,272との差異が最小化するパラメータの設定値を取得する。このように取得した設定値に基づいて、輪郭抽出部1022が修正輪郭点281,282の抽出を行う。第3輪郭情報表示部1026は、図22に示すように、修正輪郭点281,282をトップビュー画像G30に表示する。これにより、ユーザは、正解輪郭点271,272に最も近づけた修正輪郭点281,282を視認することができる。 The user operates a mouse or the like to correct the positions of the initial contour points 261, 262 and input the correct contour points 271, 272. As a result, the second contour information receiving unit 1024 receives the positions of the correct contour points 271, 272. Next, the parameter value acquisition unit 1025 executes the optimization algorithm to obtain the parameter setting values that minimize the difference from the correct contour points 271, 272. Based on the setting values thus obtained, the contour extraction unit 1022 extracts the corrected contour points 281, 282. As shown in FIG. 22, the third contour information display unit 1026 displays the corrected contour points 281, 282 on the top-view image G30. This allows the user to visually recognize the corrected contour points 281, 282 that are closest to the correct contour points 271, 272.

 (第3の実施の形態)
 第3の実施の形態は、輪郭線の幾何学的特徴量の期待値が予め与えられた場合のコントローラ102の画像処理について説明する。
(Third embodiment)
In the third embodiment, image processing by the controller 102 when expected values of the geometric feature amounts of the contour lines are given in advance will be described.

 <粒子試料画像>
 図23は、SEM100を用いた粒子状試料の観察・解析を行う場合に得られる粒子試料画像(観察画像)G40の一例を示す図である。このような試料130の粒子試料画像G40内の個々の粒子の境界、すなわち輪郭の抽出には、領域抽出アルゴリズムを用いることができる。図24に示すように、粒子試料画像G40には、複数の円形状の粒子が画素領域PA1に表示されている。
<Particle sample image>
23 is a diagram showing an example of a particle sample image (observation image) G40 obtained when observing and analyzing a particulate sample using SEM 100. A region extraction algorithm can be used to extract the boundaries, i.e., contours, of individual particles in particle sample image G40 of such sample 130. As shown in FIG. 24, in particle sample image G40, multiple circular particles are displayed in pixel area PA1.

 <粒子領域の抽出処理>
 本実施の形態では、既述の輪郭抽出処理に先立ち、輪郭抽出部1022は、解析対象となる粒子試料画像に対して、粒子領域の抽出処理を行う。ここでは、粒子領域が背景領域よりも明るい画像を考える。多くの場合、与えられた画像に対する粒子領域抽出処理は、以下の画像処理を伴う:
 ステップST1:二値化処理による粒子領域(白)と背景領域(黒)の粗い分離処理
 ステップST2:オープニング処理/クロージング処理によるノイズ除去処理
 ステップST3:膨張処理による「確実な背景領域」の抽出処理
 ステップST4:距離変換と二値化処理による「確実な粒子領域」の抽出処理
<Particle region extraction process>
In this embodiment, prior to the contour extraction process described above, the contour extraction unit 1022 performs particle region extraction processing on the particle sample image to be analyzed. Here, we consider an image in which the particle regions are brighter than the background region. In many cases, particle region extraction for a given image involves the following image processing:
Step ST1: Rough separation of particle regions (white) and background regions (black) by binarization
Step ST2: Noise removal by opening/closing
Step ST3: Extraction of "certain background regions" by dilation
Step ST4: Extraction of "certain particle regions" by distance transformation and binarization

 本実施の形態では、輪郭抽出部1022は、ステップST1~ST4の処理を用いて粒子画像の画像処理を行う。また、ステップST4の処理の後には、「粒子/背景かが曖昧な領域」の抽出処理、各領域のラベリング処理、およびWatershed処理が続くが、これらはパラメータ値設定が不要な画像処理であるため、本実施の形態では、詳しく言及しない。 In this embodiment, the contour extraction unit 1022 performs image processing of particle images using steps ST1 to ST4. Furthermore, step ST4 is followed by extraction processing of "areas where it is unclear whether they are particles or background," labeling processing of each area, and watershed processing; however, these are image processes that do not require parameter value setting, and therefore will not be discussed in detail in this embodiment.

 図24は、ステップST1の分離処理後の粒子試料画像G40の一例を示す図である。図25は、ステップST2のノイズ除去処理後の粒子試料画像G40の一例を示す図である。図26は、ステップST3の抽出処理後の粒子試料画像G40の一例を示す図である。図27は、ステップST4の抽出処理後の粒子試料画像G40の一例を示す図である。 Figure 24 is a diagram showing an example of a particle sample image G40 after the separation process of step ST1. Figure 25 is a diagram showing an example of a particle sample image G40 after the noise removal process of step ST2. Figure 26 is a diagram showing an example of a particle sample image G40 after the extraction process of step ST3. Figure 27 is a diagram showing an example of a particle sample image G40 after the extraction process of step ST4.

 既述のステップST1~ST4のそれぞれ処理において、輪郭抽出部1022が粒子試料画像G40の画像処理実行時に設定すべきパラメータの設定値は、それぞれ以下の通りである。
 ステップST1の分離処理においては、二値化処理のスレッショルド(パラメータP1)である。ステップST2のノイズ除去処理においては、オープニング処理/クロージング処理のカーネルサイズ、および繰り返し回数(パラメータP2)である。ステップST3の抽出処理においては、膨張処理のカーネルサイズ、および繰り返し回数(パラメータP3)である。ステップST4の抽出処理においては、距離変換/二値化処理のスレッショルド(パラメータP4)である。以下、それぞれの画像処理と設定値との関係を説明する。
In the processes of steps ST1 to ST4 described above, the parameter setting values that the contour extraction unit 1022 should set when executing image processing of the particle sample image G40 are as follows:
In the separation process of step ST1, it is the threshold of the binarization process (parameter P1). In the noise removal process of step ST2, it is the kernel size and number of repetitions of the opening process/closing process (parameter P2). In the extraction process of step ST3, it is the kernel size and number of repetitions of the expansion process (parameter P3). In the extraction process of step ST4, it is the threshold of the distance transformation/binarization process (parameter P4). The relationship between each image process and the setting value will be explained below.

 <二値化処理>
 画像I(x,y)に対してスレッショルドI_Tで二値化処理を行った画像I_B(x,y)は、
 I_B(x,y) = I_MAX (I(x,y) ≧ I_T のとき)、 0 (それ以外のとき) …(7)
 により計算される。ここで、I_MAXは、もとの画像I(x,y)のビット深度における最大画素値であり、例えば、ビット深度が8であれば、I_MAX=2^8−1=255である。二値化処理により、粒子領域らしき画素領域は白(I_B(x,y)=I_MAX)、背景領域らしき画素領域は黒(I_B(x,y)=0)になり、図24に示す画像が得られる。図24に示すように、粒子試料画像G40には、粒子領域らしき画素領域PA2、および黒点ノイズN1,白点ノイズN2が複数含まれる。黒点ノイズN1は、白色領域におけるゴマ状のノイズである。白点ノイズN2は、黒色領域におけるゴマ状のノイズである。
<Binarization processing>
The image I_B(x, y) obtained by binarizing the image I(x, y) with a threshold I_T is calculated as
 I_B(x, y) = I_MAX (if I(x, y) ≥ I_T), 0 (otherwise), …(7)
where I_MAX is the maximum pixel value at the bit depth of the original image I(x, y); for example, at a bit depth of 8, I_MAX = 2^8 − 1 = 255. Through the binarization process, pixel areas that appear to be particle regions become white (I_B(x, y) = I_MAX) and pixel areas that appear to be background regions become black (I_B(x, y) = 0), resulting in the image shown in FIG. 24. As shown in FIG. 24, the particle sample image G40 contains pixel areas PA2 that appear to be particle regions, together with multiple black spot noises N1 and white spot noises N2. The black spot noise N1 is sesame-like black noise in the white regions; the white spot noise N2 is sesame-like white noise in the black regions.
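
The thresholding above can be sketched in a few lines of Python. This is an illustrative stand-in, not the patent's implementation; the `binarize` helper, the ≥ comparison, and the 2-D-list image representation are assumptions made for the example.

```python
# Binarization sketch for step ST1: pixels at or above the threshold
# become I_MAX (white, a likely particle region), all others 0 (black).
I_MAX = 255  # maximum pixel value at bit depth 8: 2**8 - 1

def binarize(image, threshold):
    """Return the binary image I_B for a 2-D list of pixel values."""
    return [[I_MAX if px >= threshold else 0 for px in row]
            for row in image]

sample = [
    [ 12, 180,  20],
    [200, 220, 190],
    [ 15, 210,  25],
]
binary = binarize(sample, threshold=128)
# binary[1][1] == 255 (bright pixel -> white); binary[0][0] == 0
```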

 <膨張/収縮処理>
 図24に示す二値化処理によって得られた粒子試料画像G40には、ゴマ状のノイズNが含まれる。このノイズNを除去する目的で行うのがオープニング処理/クロージング処理である。オープニング処理/クロージング処理は、膨張/収縮処理の組み合わせであるため、まず、その説明を行う。
<Expansion/Contraction Processing>
The particle sample image G40 obtained by the binarization process shown in Fig. 24 contains sesame-like noise N. The opening process/closing process is performed to remove this noise N. The opening process/closing process is a combination of expansion/contraction processes, so it will be explained first.

 膨張処理/収縮処理は、ともに二値化画像に対して実行される処理である。まず膨張処理だが、二値化画像I_B(x,y)に対してカーネルサイズK_D、繰り返し回数1回で膨張処理を行った画像I^(D,1)(x,y)は、
 I^(D,1)(x,y) = I_MAX ((x,y)を中心とするK_D×K_Dの画素領域内に画素値I_MAXのピクセルが1つでもあるとき)、 0 (それ以外のとき)
 により計算される。すなわち、もとの画像I_B(x,y)の各ピクセルに対し、ピクセルを中心とする面積K_D×K_Dの画素領域内に1つでも画素値I_MAX(白)のピクセルがあれば、I^(D,1)(x,y)=I_MAXとする処理が膨張処理である。なお、K_Dは正の奇数である。この処理により、図24に示すように、白色領域内にあるゴマ状の黒点ノイズN1(黒)は白に置き換えられ、除去される。以後、表記の簡単のため、
 I^(D,1) = D_{K_D}[I_B]
 と表す。すなわち、たとえば二値化画像I_B(x,y)に対してカーネルサイズK_D、繰り返し回数n回で膨張処理を行った画像I^(D,n)(x,y)を、
 I^(D,n) = D_{K_D} ∘ … ∘ D_{K_D}[I_B] (n回の合成)
 と表す。なお、∘は処理の合成を表し、以下の意味である:
 (f ∘ g)[I] = f(g[I])
Both the dilation process and the erosion process are performed on a binarized image. First, dilation: the image I^(D,1)(x, y) obtained by dilating the binarized image I_B(x, y) with a kernel size K_D and one repetition is calculated as
 I^(D,1)(x, y) = I_MAX (if any pixel in the K_D × K_D pixel region centered on (x, y) has value I_MAX), 0 (otherwise).
In other words, for each pixel of the original image I_B(x, y), if there is at least one pixel with pixel value I_MAX (white) within the pixel region of area K_D × K_D centered on that pixel, the dilation process sets I^(D,1)(x, y) = I_MAX. Note that K_D is a positive odd number. With this process, as shown in FIG. 24, the sesame-like black spot noise N1 (black) in the white regions is replaced with white and removed. Hereinafter, for simplicity of notation, this is written
 I^(D,1) = D_{K_D}[I_B].
For example, the image I^(D,n)(x, y) obtained by dilating the binarized image I_B(x, y) with kernel size K_D and n repetitions is written
 I^(D,n) = D_{K_D} ∘ … ∘ D_{K_D}[I_B] (n applications),
where ∘ denotes the composition of processes, in the sense that
 (f ∘ g)[I] = f(g[I]).

 収縮処理は膨張処理の逆操作である。二値化画像I_B(x,y)に対してカーネルサイズK_E、繰り返し回数1回で収縮処理を行った画像I^(E,1)(x,y)は、
 I^(E,1)(x,y) = 0 ((x,y)を中心とするK_E×K_Eの画素領域内に画素値0のピクセルが1つでもあるとき)、 I_MAX (それ以外のとき)
 により計算される。すなわち、もとの画像I_B(x,y)の各ピクセルに対し、ピクセルを中心とする面積K_E×K_Eの画素領域内に1つでも画素値0(黒)のピクセルがあれば、I^(E,1)(x,y)=0とする処理が収縮処理である。なお、K_EはK_Dと同様、正の奇数である。この処理により、図24に示すように、黒色領域内にあるゴマ状の白点ノイズN2(白)は黒に置き換えられ、除去される。
The erosion process is the inverse operation of the dilation process. The image I^(E,1)(x, y) obtained by eroding the binarized image I_B(x, y) with a kernel size K_E and one repetition is calculated as
 I^(E,1)(x, y) = 0 (if any pixel in the K_E × K_E pixel region centered on (x, y) has value 0), I_MAX (otherwise).
In other words, for each pixel of the original image I_B(x, y), if there is even one pixel with pixel value 0 (black) within the pixel region of area K_E × K_E centered on that pixel, the erosion process sets I^(E,1)(x, y) = 0. Note that K_E, like K_D, is a positive odd number. With this process, as shown in FIG. 24, the sesame-like white spot noise N2 (white) in the black regions is replaced with black and removed.
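
As a concrete illustration, dilation and erosion can be sketched in pure Python. The helpers below are hypothetical (2-D lists instead of real image buffers), and the kernel window is truncated at the image border, an assumption the text does not address.

```python
# Sketch of binary dilation/erosion with a K x K square kernel (K odd).
I_MAX = 255

def _window(image, x, y, k):
    # Pixel values in the K x K neighborhood of (x, y), truncated at
    # the image border.
    h, w, r = len(image), len(image[0]), k // 2
    return [image[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]

def dilate(image, k, n=1):
    """n rounds of dilation: a pixel becomes white (I_MAX) if any pixel
    in its K x K window is white, so black specks in white regions vanish."""
    for _ in range(n):
        image = [[I_MAX if I_MAX in _window(image, x, y, k) else 0
                  for x in range(len(image[0]))]
                 for y in range(len(image))]
    return image

def erode(image, k, n=1):
    """n rounds of erosion (the inverse): a pixel becomes black (0) if
    any pixel in its K x K window is black, so white specks vanish."""
    for _ in range(n):
        image = [[0 if 0 in _window(image, x, y, k) else I_MAX
                  for x in range(len(image[0]))]
                 for y in range(len(image))]
    return image
```

For example, dilating a white image that contains a single black pixel with a 3 × 3 kernel removes the black speck, matching the noise-removal behavior described for N1.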

 <オープニング/クロージング処理>
 白色領域内の黒点ノイズN1を除去するために膨張処理が使用できるが、膨張処理を行うと白色領域そのものも拡がってしまう。そこで、膨張処理の後、収縮処理を実行することで、輪郭抽出部1022は、白色領域については変化させずに白色領域から黒点ノイズN1を除去することができる。この膨張から収縮の順で行う処理をオープニング処理という。すなわち、二値化画像I_B(x,y)に対してカーネルサイズK_O、繰り返し回数1回でオープニング処理を行った画像I^(O,1)(x,y)は、カーネルサイズK_Oの膨張処理と、それに続く同じカーネルサイズの収縮処理により計算される。K_Oは正の奇数である。
<Opening/closing process>
Dilation can be used to remove the black spot noise N1 from white regions, but dilation also expands the white regions themselves. Therefore, by performing erosion after dilation, the contour extraction unit 1022 can remove the black spot noise N1 from a white region without changing the white region itself. This dilation-then-erosion sequence is called the opening process. That is, the image I^(O,1)(x, y) obtained by applying the opening process to the binarized image I_B(x, y) with a kernel size K_O and one repetition is computed as dilation with kernel size K_O followed by erosion with the same kernel size, where K_O is a positive odd number.

 オープニング処理の逆順処理がクロージング処理である。クロージング処理では、輪郭抽出部1022は、黒色領域については変化させずに黒色領域から白点ノイズを除去することができる。すなわち、二値化画像I_B(x,y)に対してカーネルサイズK_C、繰り返し回数1回でクロージング処理を行った画像I^(C,1)(x,y)は、カーネルサイズK_Cの収縮処理と、それに続く同じカーネルサイズの膨張処理により計算される。K_Cは正の奇数である。なお、輪郭抽出部1022は、オープニング処理およびクロージング処理を、ともに繰り返し行ってよい。
The closing process is the reverse of the opening process. In the closing process, the contour extraction unit 1022 can remove white spot noise from black regions without changing the black regions themselves. That is, the image I^(C,1)(x, y) obtained by applying the closing process to the binarized image I_B(x, y) with a kernel size K_C and one repetition is computed as erosion with kernel size K_C followed by dilation with the same kernel size, where K_C is a positive odd number. The contour extraction unit 1022 may repeat both the opening process and the closing process.
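
Opening and closing can then be sketched as compositions of the two basic operations. Note that the document defines opening as dilation followed by erosion; much of the image-processing literature attaches that ordering to the name closing instead, so the sketch below simply follows the document's naming. All helpers are hypothetical and kept self-contained.

```python
# Opening/closing sketch, following the document's ordering conventions.
I_MAX = 255

def _morph(image, k, keep):
    # Shared core: a pixel takes the value `keep` if any pixel of its
    # K x K window (truncated at the border) equals `keep`.
    h, w, r = len(image), len(image[0]), k // 2
    def hit(x, y):
        return any(image[j][i] == keep
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1)))
    return [[keep if hit(x, y) else I_MAX - keep
             for x in range(w)] for y in range(h)]

def dilate(image, k):   # white spreads
    return _morph(image, k, I_MAX)

def erode(image, k):    # black spreads
    return _morph(image, k, 0)

def opening(image, k, n=1):
    """Per the document: dilation then erosion, repeated n times;
    removes black spot noise N1 without enlarging white regions."""
    for _ in range(n):
        image = erode(dilate(image, k), k)
    return image

def closing(image, k, n=1):
    """The reverse ordering: erosion then dilation; removes white spot
    noise N2 without shrinking black regions."""
    for _ in range(n):
        image = dilate(erode(image, k), k)
    return image
```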

 粒子画像の解析では、確実な背景領域を見出すために、図24に示す粒子試料画像に対してカーネルサイズK_O(K_C)、繰り返し回数N_O(N_C)でオープニング処理(クロージング処理)を施すことでノイズ除去を行った後、さらにカーネルサイズK_D、繰り返し回数N_Dで膨張処理を行う。これにより粒子領域らしい領域を含むより大きな領域が白となり、それ以外の黒の領域を「確実な背景領域」とすることができる。 In analyzing particle images, in order to find the certain background region, noise is removed by applying the opening process (closing process) to the particle sample image shown in FIG. 24 with kernel size K_O (K_C) and repetition count N_O (N_C), and then dilation is performed with kernel size K_D and repetition count N_D. As a result, a larger region that includes the likely particle regions becomes white, and the remaining black region can be taken as the "certain background region."

 <距離変換>
 確実な粒子領域は、図25に示す「粒子領域らしい領域(白)」PA3の中央近傍、すなわち「確実な背景領域」(黒)から十分離れた画素領域であると考えられる。そのような領域を同定するため、距離変換と呼ばれる画像変換処理を行う。
<Distance transformation>
A reliable particle region is considered to be a pixel region located near the center of the "likely particle region (white)" PA3 shown in Fig. 25, that is, a pixel region that is sufficiently far from the "reliable background region" (black). In order to identify such a region, an image transformation process called distance transformation is performed.

 距離変換は二値化画像に対して実行される処理である。二値化画像I_B(x,y)に対して距離変換を施した距離マップD(x,y;I_B)は、
 D(x,y;I_B) = min{ dist((x,y),(x′,y′)) : I_B(x′,y′)=0 }
 により計算される。すなわち、もとの画像I_B(x,y)の各ピクセルに対し、ピクセルから最も近い黒ピクセル(I_B(x′,y′)=0)までの距離をD(x,y;I_B)とする。距離マップを画像化すると、粒子領域らしき白色領域PA4は、図26に示すように明るさが段階状に表示され、明るい領域ほど「確実な背景領域」(黒)から離れている。明るい領域ほど、「確実な粒子領域」である可能性が高い。
Distance transformation is a process performed on a binarized image. The distance map D(x, y; I_B) obtained by applying the distance transformation to the binarized image I_B(x, y) is calculated as
 D(x, y; I_B) = min{ dist((x, y), (x′, y′)) : I_B(x′, y′) = 0 },
that is, for each pixel of the original image I_B(x, y), D(x, y; I_B) is the distance from that pixel to the nearest black pixel (I_B(x′, y′) = 0). When the distance map is visualized, the white area PA4 that appears to be a particle region is displayed with stepped brightness as shown in FIG. 26; brighter areas lie farther from the "certain background region" (black), and the brighter an area is, the more likely it is to be a "certain particle region."
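
A brute-force sketch of the distance map and its re-binarization follows. It is illustrative only: the metric is assumed Euclidean (the text does not specify one), real implementations use fast distance-transform algorithms, and the helper names are hypothetical.

```python
import math

def distance_transform(binary):
    """Distance map D(x, y; I_B): for each white pixel, the Euclidean
    distance to the nearest black (0) pixel; black pixels map to 0.
    Assumes the image contains at least one black pixel."""
    h, w = len(binary), len(binary[0])
    blacks = [(i, j) for j in range(h) for i in range(w)
              if binary[j][i] == 0]
    return [[0.0 if binary[j][i] == 0 else
             min(math.hypot(i - bi, j - bj) for bi, bj in blacks)
             for i in range(w)] for j in range(h)]

def certain_particles(binary, threshold, i_max=255):
    """Re-binarize the distance map: pixels far enough from the
    'certain background region' become the 'certain particle region'."""
    return [[i_max if d >= threshold else 0 for d in row]
            for row in distance_transform(binary)]
```

On a 5 × 5 image whose border is black and interior white, only the center pixel is at distance 2 from the background, so `certain_particles(image, 2)` keeps just the center as a certain particle pixel.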

 図26に示す距離マップ画像に対し、ユーザが指定したスレッショルドで二値化を行い、図27に示す二値化画像を得る。ここでのスレッショルドで二値化する処理は既述の式(7)と同様であるが、パラメータであるスレッショルドI_Tの具体的値は異なり、粒子領域の画像に適した初期値をユーザが指定して設定する。この指定は、例えば、入力デバイス124を用いて行われる。図27に示す粒子試料画像G40内の白色の粒子領域PA5は、「確実な背景領域」から十分離れていると判断された領域であり、「確実な粒子領域」とされる。 The distance map image shown in FIG. 26 is binarized with a threshold specified by the user to obtain the binarized image shown in FIG. 27. The binarization here is the same as the previously described equation (7), but the specific value of the parameter threshold I_T differs, and the user specifies an initial value suited to the image of the particle regions. This specification is performed, for example, using the input device 124. The white particle region PA5 in the particle sample image G40 shown in FIG. 27 is a region determined to be sufficiently far from the "certain background region" and is taken as the "certain particle region."

 以上が粒子解析に伴う画像処理の説明である。パラメータの値の設定が必要なこれらの画像処理の後、輪郭抽出部1022は、パラメータの値の設定が不要な画像処理を行うことにより、粒子領域、およびその輪郭線が図28に示すように得られる。図28は、パラメータ値の設定が不要な画像処理後の粒子試料画像G40の一例を示す図である。図28に示す粒子試料画像G40には、複数の粒子領域PA6が表示されている。 The above is an explanation of the image processing involved in particle analysis. After these image processing steps that require the setting of parameter values, the contour extraction unit 1022 performs image processing that does not require the setting of parameter values, thereby obtaining particle regions and their contour lines as shown in Figure 28. Figure 28 is a diagram showing an example of a particle sample image G40 after image processing that does not require the setting of parameter values. The particle sample image G40 shown in Figure 28 displays multiple particle regions PA6.

 既述のステップST1~ST4の画像処理を行う場合、粒子の実寸(平均直径や面積など)が事前に分かっていれば、ユーザはその値を幾何学的特徴量としてソフトウェアに入力できる。幾何学的特徴量として粒子の平均直径を入力した場合、パラメータP1~P4に示した各パラメータに対する最適化アルゴリズムによって、輪郭抽出部1022は、ユーザの入力した直径の値と、ステップST1~ST4の処理およびそれに続く処理によって定まる粒子領域の平均直径との差が最小となるようにパラメータP1~P4の初期値を定めればよい。 When performing the image processing of steps ST1 to ST4 described above, if the actual particle dimensions (average diameter, area, etc.) are known in advance, the user can input these values into the software as geometric features. If the average particle diameter is input as a geometric feature, the contour extraction unit 1022 uses an optimization algorithm for each parameter shown in parameters P1 to P4 to determine the initial values of parameters P1 to P4 so as to minimize the difference between the diameter value input by the user and the average diameter of the particle region determined by the processing of steps ST1 to ST4 and the subsequent processing.

 このように定めたパラメータP1~P4の初期値、またはソフトウェアが定数として持っているパラメータP1~P4の初期設定値を用いて、輪郭抽出部1022は、ステップST1~ST4の処理およびそれに続く画像処理を行う。次に、第1輪郭情報表示部1023は、画像処理の結果に基づいて得られた粒子領域および初期輪郭線をGUI1251の表示部1251Eに表示する。 Using the initial values of parameters P1 to P4 determined in this way, or the initial settings of parameters P1 to P4 held as constants by the software, the contour extraction unit 1022 performs steps ST1 to ST4 and the subsequent image processing. Next, the first contour information display unit 1023 displays the particle area and initial contour line obtained based on the results of the image processing on the display unit 1251E of the GUI 1251.

 次に、ユーザは、第1の実施の形態で述べたマウス操作により、表示部1251Eに表示された初期輪郭線を修正して正解輪郭線を入力する。第2輪郭情報受付部1024が入力された正解輪郭線を受け付けると、第2輪郭情報受付部1024は、例えば、全体制御部120のメモリに正解輪郭線を保持する。 Next, the user modifies the initial contour displayed on the display unit 1251E using the mouse operation described in the first embodiment to input the correct contour. When the second contour information receiving unit 1024 receives the input correct contour, the second contour information receiving unit 1024 stores the correct contour in the memory of the overall control unit 120, for example.

 図29は、粒子試料画像G40の一例を示す図である。図29は、1つの粒子のトップビュー画像を示している。図29においては、1つの粒子に関して、複数の初期輪郭点29、および複数の正解輪郭点30が表示されている。複数の初期輪郭点29は、第1輪郭情報表示部1023により表示されている。複数の正解輪郭点30は、ユーザがマウス等を操作して入力したユーザが想定・期待する輪郭点である。複数の正解輪郭点30の位置はそれぞれ第2輪郭情報受付部1024により受け付けられる。 Figure 29 is a diagram showing an example of a particle sample image G40. Figure 29 shows a top view image of one particle. In Figure 29, multiple initial contour points 29 and multiple correct contour points 30 are displayed for one particle. The multiple initial contour points 29 are displayed by the first contour information display unit 1023. The multiple correct contour points 30 are contour points that the user assumes or expects and inputs by operating a mouse or the like. The positions of the multiple correct contour points 30 are each accepted by the second contour information acceptance unit 1024.

 パラメータP1~P4の各パラメータ値を動作させる最適化アルゴリズムによって、パラメータ値取得部1025は、初期輪郭点29と正解輪郭点30との差異が最小化するようにパラメータP1~P4の各パラメータの値を得ることができる。 By using an optimization algorithm that operates on the values of the parameters P1 to P4, the parameter value acquisition unit 1025 can obtain the values of the parameters P1 to P4 so as to minimize the difference between the initial contour point 29 and the correct contour point 30.
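
The optimization step can be illustrated with a deliberately simplified stand-in. Here `extract_contour` is a hypothetical one-parameter extractor on a synthetic 1-D profile (not the real ST1–ST4 pipeline), and the "optimization algorithm" is a plain grid search over candidate thresholds that minimizes a squared-distance loss against the user-corrected points.

```python
def extract_contour(profile, threshold):
    """Hypothetical stand-in for the extraction pipeline: indices where
    a 1-D intensity profile crosses the threshold."""
    points = []
    for i in range(1, len(profile)):
        lo, hi = sorted((profile[i - 1], profile[i]))
        if lo < threshold <= hi:
            points.append(i)
    return points

def loss(extracted, correct):
    """Sum of squared distances from each correct contour point to its
    nearest extracted point (worst case when nothing is extracted)."""
    if not extracted:
        return float("inf")
    return sum(min((p - q) ** 2 for q in extracted) for p in correct)

def best_setting(profile, correct, candidates):
    """Grid search: the candidate threshold whose extracted contour
    points lie closest to the user-corrected ones."""
    return min(candidates,
               key=lambda t: loss(extract_contour(profile, t), correct))

profile = [0, 0, 50, 200, 200, 60, 0, 0]  # synthetic edge profile
correct = [3, 5]                          # user-corrected contour points
# best_setting(profile, correct, [20, 60, 100]) -> 100
```

Any 1-D or gradient-free optimizer could replace the grid search; the point is only that the loss is evaluated by re-running the extraction at each candidate setting.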

 図30は、粒子試料画像G40の一例を示す図である。図30においては、1つの粒子に関して、複数の修正輪郭点31が粒子試料画像G40上に表示されている。修正輪郭点31は、初期輪郭点29と正解輪郭点30との差異が最小化する輪郭点である。 Figure 30 is a diagram showing an example of a particle sample image G40. In Figure 30, multiple corrected contour points 31 are displayed on the particle sample image G40 for one particle. The corrected contour points 31 are contour points that minimize the difference between the initial contour points 29 and the correct contour points 30.

 図31は、最適化アルゴリズム実行中の表示部1251Eの表示の一例を示す図である。図31においては、粒子試料画像G40に加え、損失計算を表示する損失計算表示G45が表示される。損失計算表示G45には、パラメータに対する損失の計算結果が表示される。損失計算表示G45には、損失結果の計算結果を示すグラフが表示される。グラフには、損失が最小になるパラメータ値が点O´で表示される。この最適化アルゴリズムの処理順序などは第1の実施の形態での図15を用いた説明と同様である。損失結果の計算結果を示すグラフは、表示されなくてもよく、点描されることもあるのは、図15を用いて説明した場合と同様である。 Figure 31 is a diagram showing an example of the display on the display unit 1251E while the optimization algorithm is running. In Figure 31, in addition to the particle sample image G40, a loss calculation display G45 that displays the loss calculation is displayed. The loss calculation display G45 displays the calculation results of the loss for the parameters. The loss calculation display G45 displays a graph showing the calculation results of the loss result. The parameter value that minimizes the loss is displayed as point O' on the graph. The processing order of this optimization algorithm is the same as that explained using Figure 15 in the first embodiment. The graph showing the calculation results of the loss result does not have to be displayed, and may be dotted, as explained using Figure 15.

 既述のパラメータP1~P4の各パラメータ値の初期値の設定を行う処理は、第1の実施の形態に示したような試料断面画像G10に対する輪郭抽出部1022の輪郭線抽出処理においても適用可能である。例えば、既述の図4に示す横並びのピラー11の周期幅(ピッチと称される。)の期待値が事前にわかっていれば、輪郭抽出部1022は、その期待値で抽出した初期輪郭線によって計算されるピッチを得ることができる。そして、輪郭抽出部1022は、輪郭抽出処理で用いるスムース、およびスレッショルドのそれぞれの初期設定値から抽出される輪郭点のピッチが期待値から得られたピッチと表示上の差異が最小となるように、初期設定値を取得することができる。これにより、輪郭抽出部1022は、ピラー11の構造により正確に沿うように、初期輪郭線を抽出することができる。第1輪郭情報表示部1023が初期輪郭線を表示した後の、第2輪郭情報受付部1024による正解輪郭線の受け付け、パラメータ値取得部1025によるパラメータP1~P4の各パラメータ値の最適化などに関しては、第1の実施の形態で説明した処理と同様である。 The process of setting the initial values of the parameters P1 to P4 described above can also be applied to the contour extraction process of the contour extraction unit 1022 for the sample cross-sectional image G10 as shown in the first embodiment. For example, if the expected value of the periodic width (referred to as the pitch) of the horizontally arranged pillars 11 shown in Figure 4 described above is known in advance, the contour extraction unit 1022 can obtain the pitch calculated from the initial contour extracted using that expected value. The contour extraction unit 1022 can then obtain initial settings for the smooth and threshold parameters used in the contour extraction process so that the displayed difference between the pitch of the contour points extracted using the initial settings and the pitch obtained from the expected values is minimized. This allows the contour extraction unit 1022 to extract the initial contour so that it more accurately follows the structure of the pillars 11. After the first contour information display unit 1023 displays the initial contour, the second contour information receiving unit 1024 receives the correct contour, and the parameter value acquisition unit 1025 optimizes the parameter values P1 to P4, etc., are the same as the processing described in the first embodiment.
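
The pitch-based choice of initial values described above can be sketched in the same style, again with a hypothetical crossing detector standing in for the real contour extraction.

```python
def extract_contour(profile, threshold):
    # Hypothetical stand-in: indices where a 1-D intensity profile
    # crosses the threshold.
    points = []
    for i in range(1, len(profile)):
        lo, hi = sorted((profile[i - 1], profile[i]))
        if lo < threshold <= hi:
            points.append(i)
    return points

def mean_pitch(points):
    """Average spacing between consecutive contour points; infinite
    when fewer than two points were extracted."""
    gaps = [b - a for a, b in zip(points, points[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def threshold_from_pitch(profile, expected_pitch, candidates):
    """Pick the initial setting whose extracted contour points have a
    mean pitch closest to the user-supplied expected pitch."""
    return min(candidates,
               key=lambda t: abs(mean_pitch(extract_contour(profile, t))
                                 - expected_pitch))
```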

 以上、本開示の実施の形態について具体的に説明したが、前述の実施の形態に限定されず、要旨を逸脱しない範囲で種々変更可能である。各実施の形態は、必須構成要素を除き、構成要素の追加・削除・置換などが可能である。特に限定しない場合、各構成要素は、単数でも複数でもよい。各実施の形態や変形例を組み合わせた形態も可能である。前述の各構成、機能、処理部等は、それらの一部または全部を、例えば集積回路での設計等のハードウェアで実現してもよいし、プロセッサがプログラムを解釈して実行することによるソフトウェアで実現してもよい。各機能を実現するプログラム、テーブル、ファイル等のデータ・情報は、メモリ、ハードディスク、SSD等の記録装置、または、ICカード、SDカード、DVD等の記録媒体に置くことができる。 The above provides a specific description of the embodiments of the present disclosure, but the present disclosure is not limited to the above embodiments and various modifications are possible without departing from the spirit of the present disclosure. Except for essential components, each embodiment allows for the addition, deletion, or substitution of components. Unless otherwise specified, each component may be singular or plural. Combinations of the embodiments and variations are also possible. Some or all of the aforementioned configurations, functions, processing units, etc. may be realized by hardware, such as an integrated circuit design, or by software in which a processor interprets and executes a program. Data and information such as programs, tables, and files that realize each function may be stored in a storage device such as memory, hard disk, or SSD, or on a storage medium such as an IC card, SD card, or DVD.

11…突起構造(ピラー)、12…谷構造(トレンチ)、13…基準線、16…基準点、17,18…ラインプロファイル、19…輪郭点、20…初期輪郭線、21…正解輪郭線、22…修正輪郭線、29,231,232,261,262…初期輪郭点、30,241,242,271,272…正解輪郭点、31,251,252,281,282…修正輪郭点、100…荷電粒子ビーム装置(SEM)、101…本体、102…コントローラ、111…電子銃、112…集束レンズ、113…偏向レンズ、114…対物レンズ、115…ステージ、116…検出器、130…試料、1021…画像表示部、1022…輪郭抽出部、1023…第1輪郭情報表示部、1024…第2輪郭情報受付部、1025…パラメータ値取得部、1026…第3輪郭情報表示部、1251…グラフィカルユーザインタフェース、1251A…インポートイメージボタン、1251B…ディテクトボタン、1251C…エディットボタン、1251D…設定調整ボタン、AW1,AW2…両矢印、G10,G20…試料断面画像、G15,G25,G45…損失計算表示、G30…トップビュー画像、G40…粒子試料画像 11...protrusion structure (pillar), 12...valley structure (trench), 13...reference line, 16...reference point, 17, 18...line profiles, 19...contour point, 20...initial contour line, 21...correct contour line, 22...corrected contour line, 29, 231, 232, 261, 262...initial contour points, 30, 241, 242, 271, 272...correct contour points, 31, 251, 252, 281, 282...corrected contour points, 100...charged particle beam device (SEM), 101...main body, 102...controller, 111...electron gun, 112...focusing lens, 113...deflection lens, 114...objective lens, 115...stage, 116...detector, 130...sample, 1021...image display unit, 1022...contour extraction unit, 1023...first contour information display unit, 1024...second contour information receiving unit, 1025...parameter value acquisition unit, 1026...third contour information display unit, 1251...graphical user interface, 1251A...import image button, 1251B...detect button, 1251C...edit button, 1251D...setting adjustment button, AW1, AW2...double arrows, G10, G20...sample cross-sectional images, G15, G25, G45...loss calculation displays, G30...top-view image, G40...particle sample image

Claims (11)

 荷電粒子ビーム装置による試料の観察画像を処理する画像処理システムであって、
 前記観察画像を表示する画像表示部と、
 予め設定されたパラメータの第1設定値に基づいて、前記観察画像から前記試料の構造上の境界に関する第1輪郭情報を抽出する輪郭抽出部と、
 前記輪郭抽出部で抽出された第1輪郭情報を前記観察画像上に表示する第1輪郭情報表示部と、
 前記観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付ける第2輪郭情報受付部と、
 前記観察画像における前記第1輪郭情報と前記第2輪郭情報との差異が最小化するように前記第1設定値と異なる第2設定値を取得するパラメータ値取得部と、を備える、
 画像処理システム。
An image processing system for processing an observation image of a sample obtained by a charged particle beam device, comprising:
an image display unit that displays the observed image;
a contour extraction unit that extracts first contour information related to a structural boundary of the sample from the observation image based on a first set value of a preset parameter;
a first contour information display unit that displays the first contour information extracted by the contour extraction unit on the observation image;
a second contour information receiving unit that receives input of second contour information relating to a boundary visible in the observation image;
a parameter value acquisition unit that acquires a second setting value that is different from the first setting value so as to minimize a difference between the first contour information and the second contour information in the observation image,
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータ値取得部により取得された前記第2設定値に基づいて前記輪郭抽出部により抽出された第3輪郭情報を前記観察画像上に表示する第3輪郭情報表示部を、さらに備える、
 画像処理システム。
2. The image processing system according to claim 1,
a third contour information display unit that displays, on the observation image, third contour information extracted by the contour extraction unit based on the second setting value acquired by the parameter value acquisition unit.
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータは、複数種類あり、
 複数の前記パラメータの1つは、画像のノイズを除去するためのパラメータである、
 画像処理システム。
2. The image processing system according to claim 1,
There are multiple types of parameters,
one of the plurality of parameters is a parameter for removing noise from an image;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータは、複数種類あり、
 複数の前記パラメータの1つは、画像の領域を区別する閾値を定めるパラメータである、
 画像処理システム。
2. The image processing system according to claim 1,
There are multiple types of parameters,
one of the plurality of parameters is a parameter that defines a threshold for distinguishing regions of the image;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記輪郭抽出部は、前記画像表示部に表示された観察画像に基づいて、前記構造上の境界を規定するための基準線、および基準点の入力を受け付け、前記基準線、および前記基準点と、前記第1設定値に基づいて、前記観察画像から前記第1輪郭情報を抽出する、
 画像処理システム。
2. The image processing system according to claim 1,
the contour extraction unit receives input of a reference line and a reference point for defining the structural boundary based on the observation image displayed on the image display unit, and extracts the first contour information from the observation image based on the reference line and the reference point and the first setting value;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記第1輪郭情報が前記画像表示部に表示された前記観察画像上で移動され、
 前記第1輪郭情報が移動されることにより前記第2輪郭情報が整形され、
 前記第1輪郭情報が前記観察画像上で移動できる範囲が前記観察画像上で表示される、
 画像処理システム。
2. The image processing system according to claim 1,
the first contour information is moved on the observed image displayed on the image display unit,
the first contour information is moved to shape the second contour information;
a range within which the first contour information can be moved on the observation image is displayed on the observation image;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータは、複数種類あり、
 複数の前記パラメータの1つは、前記観察画像のうち選択された領域の画像を滑らかにする平滑化フィルタ処理のカーネルサイズである、
 画像処理システム。
2. The image processing system according to claim 1,
There are multiple types of parameters,
one of the plurality of parameters is a kernel size of a smoothing filter process that smoothes an image of a selected region of the observed image;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータは、複数種類あり、
 複数の前記パラメータの1つは、前記観察画像のうち白色領域については変化させずに前記白色領域から黒点ノイズを除去するオープニング処理のカーネルサイズである、
 画像処理システム。
2. The image processing system according to claim 1,
There are multiple types of parameters,
one of the plurality of parameters is a kernel size of an opening process that removes black dot noise from a white region of the observed image without changing the white region of the observed image;
Image processing system.
 請求項1に記載の画像処理システムであって、
 前記パラメータは、複数種類あり、
 複数の前記パラメータの1つは、前記観察画像のうち黒色領域については変化させずに前記黒色領域から白点ノイズを除去するクロージング処理のカーネルサイズである、
 画像処理システム。
2. The image processing system according to claim 1,
There are multiple types of parameters,
one of the plurality of parameters is a kernel size of a closing process that removes white dot noise from the black region of the observed image without changing the black region of the observed image;
Image processing system.
 荷電粒子ビーム装置による試料の観察画像を処理する画像処理システムが実行する画像処理方法であって、
 前記画像処理システムが、
 前記観察画像を表示し、
 予め設定されたパラメータの第1設定値に基づいて、前記観察画像から前記試料の構造上の境界に関する第1輪郭情報を抽出し、
 抽出された前記第1輪郭情報を前記観察画像上に表示し、
 前記観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付け、
 前記観察画像における前記第1輪郭情報と前記第2輪郭情報との差異が最小化するように前記第1設定値と異なる第2設定値を取得する、
 画像処理方法。
1. An image processing method executed by an image processing system that processes an observation image of a sample observed by a charged particle beam device, comprising:
the image processing system,
Displaying the observed image;
extracting first contour information relating to a structural boundary of the sample from the observed image based on a first set value of a preset parameter;
Displaying the extracted first contour information on the observed image;
accepting input of second contour information relating to a boundary visible in the observation image;
acquiring a second setting value different from the first setting value so as to minimize a difference between the first contour information and the second contour information in the observed image;
Image processing methods.
 荷電粒子ビーム装置による試料の観察画像を処理する画像処理システムに用いられる表示装置であって、
 前記表示装置は、
 予め設定されたパラメータの第1設定値に基づいて、前記観察画像から抽出された前記試料の構造上の境界に関する第1輪郭情報を前記観察画像上に表示し、
 前記観察画像において視認可能な境界に関する第2輪郭情報の入力を受け付け、
 前記観察画像における前記第1輪郭情報と前記第2輪郭情報との差異が最小化するように前記第1設定値と異なる第2設定値が取得され、取得された前記第2設定値に基づいて抽出された第3輪郭情報を前記観察画像上に表示する、
 表示装置。
A display device used in an image processing system that processes an observation image of a sample using a charged particle beam device,
The display device includes:
displaying, on the observed image, first contour information relating to a structural boundary of the sample extracted from the observed image based on a first set value of a preset parameter;
accepting input of second contour information relating to a boundary visible in the observation image;
a second setting value different from the first setting value is acquired so as to minimize a difference between the first contour information and the second contour information in the observation image, and third contour information extracted based on the acquired second setting value is displayed on the observation image.
Display device.
PCT/JP2024/025924 2024-07-19 2024-07-19 Image processing system, image processing method, and display device Pending WO2026018408A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/025924 WO2026018408A1 (en) 2024-07-19 2024-07-19 Image processing system, image processing method, and display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/025924 WO2026018408A1 (en) 2024-07-19 2024-07-19 Image processing system, image processing method, and display device

Publications (1)

Publication Number Publication Date
WO2026018408A1 true WO2026018408A1 (en) 2026-01-22

Family

ID=98437367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/025924 Pending WO2026018408A1 (en) 2024-07-19 2024-07-19 Image processing system, image processing method, and display device

Country Status (1)

Country Link
WO (1) WO2026018408A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006258516A (en) * 2005-03-16 2006-09-28 Hitachi High-Technologies Corp Shape measuring apparatus and shape measuring method
WO2013047047A1 (en) * 2011-09-28 2013-04-04 株式会社日立ハイテクノロジーズ Cross-sectional shape estimating method and cross-sectional shape estimated device
WO2016121265A1 (en) * 2015-01-26 2016-08-04 株式会社日立ハイテクノロジーズ Sample observation method and sample observation device
JP2019020292A (en) * 2017-07-19 2019-02-07 株式会社ニューフレアテクノロジー Pattern inspection apparatus and pattern inspection method

Similar Documents

Publication Publication Date Title
US7932493B2 (en) Method and system for observing a specimen using a scanning electron microscope
JP6240782B2 (en) Wafer shape analysis method and apparatus
JP5500871B2 (en) Template matching template creation method and template creation apparatus
KR101524421B1 (en) Defect observation method and defect observation device
JP5202071B2 (en) Charged particle microscope apparatus and image processing method using the same
JP6255448B2 (en) Apparatus condition setting method for charged particle beam apparatus and charged particle beam apparatus
JP5651428B2 (en) Pattern measuring method, pattern measuring apparatus, and program using the same
US10712152B2 (en) Overlay error measurement device and computer program
WO2016121265A1 (en) Sample observation method and sample observation device
KR101987726B1 (en) Electron-beam pattern inspection system
WO2017159360A1 (en) Evaluation method for charged particle beam, computer program for evaluating charged particle beam, and evaluation device for charged particle beam
JP5308766B2 (en) PATTERN SEARCH CONDITION DETERMINING METHOD AND PATTERN SEARCH CONDITION SETTING DEVICE
JP5341801B2 (en) Method and apparatus for visual inspection of semiconductor wafer
JP4262269B2 (en) Pattern matching method and apparatus
WO2026018408A1 (en) Image processing system, image processing method, and display device
JP6207893B2 (en) Template creation device for sample observation equipment
CN103201819B (en) Pattern judging device and unevenness judging method
JP2011179819A (en) Pattern measuring method and computer program
JP4262288B2 (en) Pattern matching method and apparatus
JP2017003305A (en) Defect image classification device
JP7735582B2 (en) Dimension measurement system, estimation system, and dimension measurement method
KR20230018315A (en) Method, apparatus, and program for determining condition related to captured image of charged particle beam apparatus
US12500061B2 (en) Observation system, observation method, and program
JP6078356B2 (en) Template matching condition setting device and charged particle beam device
JP2015203628A (en) Charged particle beam apparatus and coordinate correction method