
WO2018230476A1 - Device for estimating shape of figure pattern - Google Patents


Info

Publication number
WO2018230476A1
WO2018230476A1 (PCT/JP2018/022100)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pattern
original
evaluation point
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/022100
Other languages
French (fr)
Japanese (ja)
Inventor
剛哉 下村
洋平 大川
渡辺 智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dai Nippon Printing Co Ltd
Original Assignee
Dai Nippon Printing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018061898A (patent JP6508496B2)
Application filed by Dai Nippon Printing Co Ltd filed Critical Dai Nippon Printing Co Ltd
Publication of WO2018230476A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F: PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F1/00: Originals for photomechanical production of textured or patterned surfaces, e.g. masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
    • G03F1/36: Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F: PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F1/00: Originals for photomechanical production of textured or patterned surfaces, e.g. masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
    • G03F1/68: Preparation processes not covered by groups G03F1/20 - G03F1/50
    • G03F1/70: Adapting basic layout or design of masks to lithographic process requirements, e.g. second iteration correction of mask patterns for imaging
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • H10P50/242

Definitions

  • the present invention relates to a figure pattern shape estimation apparatus, and more particularly to an apparatus for estimating the shape of an actual figure pattern formed on a substrate through a lithography process.
  • a technique is used in which a fine pattern is formed on a physical substrate through a lithography process that involves drawing with light or an electron beam.
  • a fine mask pattern is designed using a computer, and the resist layer formed on the substrate is exposed based on the obtained mask pattern data. After development, etching is performed using the remaining resist layer as a mask to form a fine pattern on the substrate.
  • the lithography process includes steps of exposure, development, and etching, so that the actual graphic pattern finally formed on the substrate does not exactly match the original graphic pattern used in the exposure process.
  • when the resist layer is drawn with light or an electron beam, the exposure area actually formed on the resist layer is broadened by the proximity effect (PE: Proximity Effect).
  • in the etching process, an etching loading phenomenon occurs, so that the shape after development and the pattern after etching differ. It is known that the effect of the etching loading phenomenon varies depending on the area of the actual substrate surface exposed from the resist layer.
  • the proximity effect in the drawing process and the loading phenomenon in the etching process are both phenomena that cause a difference between the shape of the original graphic pattern and the shape of the actual graphic pattern, but the range (scale size) over which each phenomenon acts is different.
  • accordingly, a desired original figure pattern is designed on a computer, the lithography process using that original figure pattern is then simulated on the computer, and a procedure is performed to estimate the shape of the actual graphic pattern that will be formed on an actual substrate.
  • based on the estimation result, the shape (dimensions) of the original figure pattern is corrected as necessary, and the actual lithography process is performed using the corrected figure pattern obtained by this correction to manufacture an actual semiconductor device.
  • Patent Document 1 discloses a method in which a feature factor that characterizes the layout of the original graphic pattern and a control factor that affects the size of the resist pattern formed on the substrate by the lithography process are used as the input layer of a neural network.
  • Patent Document 2 discloses a method for improving the accuracy of simulation using two sets of neural networks, and Patent Document 3 discloses a method for improving the accuracy of simulation by setting various extraction parameters for appropriately extracting feature values from a photomask pattern.
  • an object of the present invention is to provide a graphic pattern shape estimation apparatus capable of accurately estimating the shape of an actual graphic pattern formed on an actual substrate by extracting accurate feature amounts from the original graphic pattern and performing an accurate simulation.
  • a first aspect of the present invention is a graphic pattern shape estimation apparatus that estimates the shape of an actual graphic pattern formed on an actual substrate by simulating a lithography process using the original graphic pattern.
  • An evaluation point setting unit for setting evaluation points on the original figure pattern;
  • a feature quantity extraction unit that extracts a feature quantity indicating features around the evaluation point for the original graphic pattern;
  • a bias estimation unit that estimates a process bias indicating the amount of deviation between the position of the evaluation point on the original graphic pattern and the position on the actual graphic pattern based on the feature amount;
  • the evaluation point setting unit sets an evaluation point at a predetermined position on the contour line based on the original graphic pattern including information on the contour line indicating the boundary between the inside and the outside of the figure,
  • the feature quantity extraction unit includes an original image creation unit that creates, based on the original graphic pattern, an original image composed of a collection of pixels each having a predetermined pixel value;
  • an image pyramid creation unit that performs image pyramid creation processing, including a reduction process that creates a reduced image by reducing the original image; and a feature amount calculation unit that calculates a feature amount for each hierarchical image based on the pixel value of the pixel corresponding to the position of the evaluation point.
  • the original image creation unit superimposes the original figure pattern on a mesh composed of a two-dimensional array of pixels, and determines the pixel value of each pixel based on the relationship between the position of that pixel and the position of the outlines of the figures constituting the original figure pattern.
  • a third aspect of the present invention is the figure pattern shape estimation apparatus according to the second aspect described above,
  • the original image creation unit recognizes the internal area and external area of each graphic based on the original graphic pattern, and creates, as the original image, an area density map in which the occupancy of the internal area in each pixel is used as the pixel value of that pixel.
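  • the area density map described above can be sketched as follows for axis-aligned rectangles. This is an illustrative helper (the name `area_density_map` and the brute-force per-pixel overlap computation are not taken from the patent): each pixel's value is the fraction of its area covered by figure interiors.

```python
import numpy as np

def area_density_map(rects, grid_shape, pixel_size):
    """Area density map: each pixel holds the fraction of its area covered
    by the interiors of the given axis-aligned rectangles.
    rects: list of (x0, y0, x1, y1) in the same units as pixel_size."""
    rows, cols = grid_shape
    m = np.zeros(grid_shape)
    s = pixel_size
    for (x0, y0, x1, y1) in rects:
        for r in range(rows):
            for c in range(cols):
                # pixel (r, c) spans [c*s, (c+1)*s] x [r*s, (r+1)*s]
                ox = max(0.0, min(x1, (c + 1) * s) - max(x0, c * s))
                oy = max(0.0, min(y1, (r + 1) * s) - max(y0, r * s))
                m[r, c] += (ox * oy) / (s * s)  # occupancy of this pixel
    return np.clip(m, 0.0, 1.0)

# A 2x2-unit square exactly covering one pixel of a 4x4 mesh (pixel size 2).
demo = area_density_map([(2.0, 2.0, 4.0, 4.0)], (4, 4), 2.0)
```

The edge length density map and dose density map of the following aspects differ only in what is accumulated per pixel (outline length, or occupancy times dose).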
  • the original image creation unit recognizes the outline of each figure based on the original figure pattern, and creates, as the original image, an edge length density map in which the length of the outline present in each pixel is used as the pixel value of that pixel.
  • the original image creation unit recognizes the internal area of each figure, further recognizes the dose amount for each figure, obtains for every figure present in each pixel the product of the occupancy of the internal area and the dose amount of that figure, and creates, as the original image, a dose density map in which the sum of these products is used as the pixel value of that pixel.
  • the image pyramid creation unit has a function of performing a filtering process using a predetermined image processing filter on the original image or the reduced image, and creates an image pyramid made up of a plurality of hierarchical images by executing the filtering process and the reduction process alternately.
  • the image pyramid creation unit takes the original image created by the original image creation unit as the first preparation image Q1, takes the image obtained by the filtering process on the k-th preparation image Qk (where k is a natural number) as the k-th hierarchical image Pk, takes the image obtained by the reduction process on the k-th hierarchical image Pk as the (k+1)-th preparation image Q(k+1), and, by executing the filtering process and the reduction process alternately until the n-th hierarchical image Pn is obtained, creates an image pyramid composed of n hierarchical images from the first hierarchical image P1 to the n-th hierarchical image Pn.
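  • the Q/P alternation above can be sketched as follows. The 3x3 binomial kernel, the 2x2 average pooling, and the function names are illustrative choices only; the patent leaves the filter and reduction factor open.

```python
import numpy as np

def gaussian_blur3(img):
    """Separable 3x3 binomial (Gaussian-like) filter with edge replication."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(img, 1, mode="edge")
    h = k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]
    return k[0] * h[:-2, :] + k[1] * h[1:-1, :] + k[2] * h[2:, :]

def build_pyramid(original, n):
    """Alternate filtering and 2x2 average-pooling reduction:
    Q1 = original; Pk = filter(Qk); Q(k+1) = reduce(Pk)."""
    layers, q = [], original.astype(float)
    for _ in range(n):
        p = gaussian_blur3(q)          # k-th hierarchical image Pk
        layers.append(p)
        h, w = p.shape
        q = p[:h - h % 2, :w - w % 2]  # trim odd edge before pooling
        q = q.reshape(q.shape[0] // 2, 2, q.shape[1] // 2, 2).mean(axis=(1, 3))
    return layers

pyr = build_pyramid(np.ones((8, 8)), 3)  # layers of 8x8, 4x4, 2x2
```

Because the kernel sums to 1, a constant image stays constant through every layer, which is a quick sanity check on any filter choice here.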
  • An eighth aspect of the present invention is the figure pattern shape estimation apparatus according to the sixth aspect described above,
  • the image pyramid creation unit takes the original image created by the original image creation unit as the first preparation image Q1, obtains the difference image Dk between the filtered image Pk, obtained by the filtering process on the k-th preparation image Qk (where k is a natural number), and the k-th preparation image Qk, takes that difference image as the k-th hierarchical image Dk, and takes the image obtained by the reduction process on the k-th filtered image Pk as the (k+1)-th preparation image Q(k+1); by executing the filtering process and the reduction process alternately until the n-th hierarchical image Dn is obtained, an image pyramid composed of n hierarchical images from the first hierarchical image D1 to the n-th hierarchical image Dn is created.
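  • the difference-image variant (Dk = Pk - Qk, similar in spirit to a Laplacian pyramid) can be sketched as follows; the 3x3 box filter and the names are illustrative stand-ins for whatever smoothing filter the implementation uses.

```python
import numpy as np

def box_blur3(img):
    """3x3 box filter with edge replication (stand-in for any smoothing filter)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def build_difference_pyramid(original, n):
    """Qk -> Pk = filter(Qk); hierarchical image Dk = Pk - Qk;
    Q(k+1) = 2x2 average reduction of Pk (image sizes assumed even)."""
    layers, q = [], original.astype(float)
    for _ in range(n):
        p = box_blur3(q)
        layers.append(p - q)  # difference image Dk
        q = p.reshape(p.shape[0] // 2, 2, p.shape[1] // 2, 2).mean(axis=(1, 3))
    return layers

# On a constant image the filter changes nothing, so every Dk is zero.
dpyr = build_difference_pyramid(np.ones((8, 8)), 2)
```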
  • a ninth aspect of the present invention is the figure pattern shape estimation apparatus according to the sixth aspect described above,
  • the image pyramid creation unit takes the original image created by the original image creation unit as the first preparation image Q1, takes the image obtained by filtering the k-th preparation image Qk (where k is a natural number) as the k-th main hierarchical image Pk, takes the image obtained by the reduction process on the k-th main hierarchical image Pk as the (k+1)-th preparation image Q(k+1), and executes the filtering process and the reduction process alternately until the n-th main hierarchical image Pn is obtained,
  • thereby creating a main image pyramid composed of n hierarchical images from the first main hierarchical image P1 to the n-th main hierarchical image Pn,
  • and also obtains the difference image Dk between the k-th main hierarchical image Pk and the k-th preparation image Qk and takes that difference image as the k-th sub-hierarchical image Dk, thereby creating a sub image pyramid composed of the sub-hierarchical images D1 to Dn.
  • the feature amount calculation unit calculates a feature amount for each hierarchical image constituting the main image pyramid and the sub image pyramid based on the pixel value of the pixel corresponding to the position of the evaluation point.
  • the image pyramid creation unit creates the image pyramid by executing filter processing by a convolution operation using a Gaussian filter or a Laplacian filter as an image processing filter.
  • An eleventh aspect of the present invention is the figure pattern shape estimation apparatus according to the first to tenth aspects described above,
  • the image pyramid creation unit creates a reduced image by executing, as the reduction process, an average pooling process that replaces m mutually adjacent pixels with a single pixel whose pixel value is the average of the pixel values of those m pixels.
  • a twelfth aspect of the present invention is the figure pattern shape estimation apparatus according to the first to tenth aspects described above,
  • the image pyramid creation unit creates a reduced image by executing, as the reduction process, a max pooling process that replaces m mutually adjacent pixels with a single pixel whose pixel value is the maximum of the pixel values of those m pixels.
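  • both pooling variants can be sketched in a few lines for the common m = 4 (2x2 blocks) case; the helper name and the reshape trick are implementation choices, not taken from the patent.

```python
import numpy as np

def pool2x2(img, mode="avg"):
    """Replace each 2x2 block of pixels with a single pixel holding either
    the block average (average pooling) or the block maximum (max pooling)."""
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)) if mode == "avg" else blocks.max(axis=(1, 3))

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
avg = pool2x2(x, "avg")  # single pixel: (1+2+3+4)/4 = 2.5
mx = pool2x2(x, "max")   # single pixel: 4.0
```

Average pooling preserves density-type pixel values (area or dose occupancy), while max pooling preserves the strongest local response; which suits a given original image type is a design choice.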
  • the original image creation unit performs the original image creation processing based on a plurality of different algorithms to create a plurality of original images,
  • the image pyramid creation unit performs the image pyramid creation processing based on the plurality of original images to create a plurality of image pyramids, and
  • the feature amount calculation unit calculates a feature amount for each hierarchical image constituting each of the plurality of image pyramids based on the pixel value of the pixel corresponding to the position of the evaluation point.
  • the image pyramid creation unit performs image pyramid creation processing based on a plurality of different algorithms for one original image, creates a plurality of image pyramids,
  • the feature amount calculation unit calculates a feature amount for each hierarchical image constituting each of the plurality of image pyramids based on the pixel value of the pixel corresponding to the position of the evaluation point.
  • when the feature quantity calculation unit calculates a feature quantity for a specific evaluation point on a specific hierarchical image, it extracts, from the pixels constituting the specific hierarchical image, a total of j target pixels in order of proximity to the specific evaluation point, and performs, on the pixel values of the extracted j target pixels, an operation that obtains a weighted average with weights according to the distance between the specific evaluation point and each target pixel.
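  • a minimal sketch of this j-nearest-pixel weighted average follows; the inverse-distance weighting is one plausible choice (the patent only says the weights depend on distance), and the function name is illustrative.

```python
import numpy as np

def weighted_feature(image, ex, ey, j=4):
    """Feature for evaluation point (ex, ey) in pixel coordinates: take the
    j pixels whose centres are nearest the point and average their values,
    weighting each by the inverse of (distance + epsilon)."""
    rows, cols = image.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # distance from the evaluation point to each pixel centre
    d = np.hypot(xs + 0.5 - ex, ys + 0.5 - ey).ravel()
    order = np.argsort(d)[:j]       # indices of the j nearest pixels
    w = 1.0 / (d[order] + 1e-9)     # closer pixels weigh more
    v = image.ravel()[order]
    return float(np.sum(w * v) / np.sum(w))

img = np.array([[0.0, 1.0],
                [0.0, 1.0]])
# the point sits equidistant from all four pixel centres, so f is their mean
f = weighted_feature(img, ex=1.0, ey=1.0, j=4)
```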
  • the estimation calculation unit includes a neural network in which the feature quantity input by the feature quantity input unit is used as an input layer and the process bias estimation value is used as an output layer.
  • the neural network included in the estimation calculation unit performs the process bias estimation processing using, as learning information, parameters obtained in a learning stage that uses dimension values obtained by measuring the actual dimensions of actual figure patterns formed on an actual substrate by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure.
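  • the shape of this estimation, a feature vector in the input layer and one process-bias scalar in the output layer, can be sketched as a plain forward pass. The layer sizes, ReLU activation, and the weights `W`, `B` below are illustrative only; in the patent's scheme they would come from the learning stage on measured test patterns.

```python
import numpy as np

def estimate_bias(features, weights, biases):
    """Forward pass of a small fully connected network: feature vector in,
    one scalar process-bias estimate out."""
    a = np.asarray(features, dtype=float)
    for w, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, w @ a + b)            # hidden layers, ReLU
    return (weights[-1] @ a + biases[-1]).item()  # linear output layer

# Tiny illustrative network: 3 features -> 2 hidden units -> 1 output.
W = [np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0]]),
     np.array([[0.5, 0.5]])]
B = [np.zeros(2), np.zeros(1)]
pb = estimate_bias([2.0, 1.0, 1.0], W, B)
```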
  • An eighteenth aspect of the present invention is the figure pattern shape estimation apparatus according to the sixteenth or seventeenth aspect described above,
  • the estimation calculation unit obtains an estimated value of the deviation amount of the evaluation point in the normal direction of the contour line as the estimated value of the process bias for the evaluation point located on the contour line of the predetermined figure.
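  • displacing an evaluation point along the contour normal can be sketched as follows. The counter-clockwise orientation convention and the function name are assumptions made for the example; the patent only states that the deviation is taken in the normal direction.

```python
import math

def displace_along_normal(p0, p1, ev, bias):
    """Shift evaluation point `ev`, lying on the contour edge p0 -> p1,
    by `bias` along the edge normal. With counter-clockwise polygon
    orientation, the normal (dy, -dx) points outward, so a positive
    bias grows the figure and a negative one shrinks it."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    nx, ny = dy / length, -dx / length  # unit normal to the edge
    return (ev[0] + bias * nx, ev[1] + bias * ny)

# Bottom edge of a CCW square, traversed left to right: normal points down.
moved = displace_along_normal((0.0, 0.0), (2.0, 0.0), ev=(1.0, 0.0), bias=0.5)
```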
  • a nineteenth aspect of the present invention is a figure pattern shape correction apparatus for correcting the shape of the original figure pattern using the figure pattern shape estimation apparatus according to the first to eighteenth aspects described above. In addition to the evaluation point setting unit, the feature amount extraction unit, and the bias estimation unit that constitute the figure pattern shape estimation apparatus, a pattern correction unit is further provided for correcting the original figure pattern based on the estimated value of the process bias output from the bias estimation unit. The corrected figure pattern obtained by the pattern correction unit is given to the figure pattern shape estimation apparatus as a new original figure pattern, so that the correction of the figure pattern is executed repeatedly.
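  • the repeated estimate-then-correct cycle can be sketched in one dimension. Here `estimate_bias` is a callback standing in for the bias estimation unit, and the single scalar `edge_pos` stands in for a whole pattern; both simplifications are for illustration only.

```python
def correct_pattern(edge_pos, estimate_bias, target, iterations=10):
    """Iterative correction: estimate the process bias for the current
    pattern, move the drawn edge so the printed edge lands on target,
    and repeat with the corrected pattern as the new original."""
    pos = edge_pos
    for _ in range(iterations):
        bias = estimate_bias(pos)     # predicted printed-vs-drawn shift
        pos -= (pos + bias) - target  # cancel the predicted error
    return pos

# Toy process: the printed edge lands 0.1 units outside the drawn edge,
# so the corrected drawn edge settles at 0.9 to print at the 1.0 target.
corrected = correct_pattern(1.0, lambda p: 0.1, target=1.0)
```

With a bias that itself depends on the pattern, the loop would run until the predicted printed shape stops changing, which is why the corrected pattern is fed back as a new original.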
  • a twentieth aspect of the present invention realizes the graphic pattern shape estimation apparatus according to the first to eighteenth aspects described above, or the graphic pattern shape correction apparatus according to the nineteenth aspect described above, by incorporating a program into a computer.
  • a graphic pattern shape estimation method for estimating a shape of an actual graphic pattern formed on an actual substrate by simulating a lithography process using the original graphic pattern.
  • An original graphic pattern input stage in which a computer inputs an original graphic pattern including contour information indicating the boundary between the inside and the outside of the graphic;
  • An evaluation point setting stage in which the computer sets an evaluation point at a predetermined position on the contour line;
  • a feature amount extraction stage in which the computer extracts a feature amount indicating features around the evaluation point for the original graphic pattern;
  • a process bias estimation stage in which a computer estimates a process bias indicating a deviation amount between the position of the evaluation point on the original graphic pattern and the position on the actual graphic pattern based on the feature amount;
  • An original image creating stage for creating an original image composed of a collection of pixels each having a predetermined pixel value based on the original graphic pattern;
  • An image pyramid creation stage in which the computer performs an image pyramid creation process including a reduction process for creating a reduced image by reducing the original image;
  • in the image pyramid creation process, an image pyramid is created in which the filtered image, or the difference image between the filtered image and the image before filtering, is used as a hierarchical image.
  • a shape estimation apparatus for a graphic pattern that estimates the shape of a real graphic pattern formed on a real substrate by simulating a lithography process using the original graphic pattern
  • An evaluation point setting unit for setting evaluation points on the original figure pattern
  • a feature quantity extraction unit that extracts a feature quantity indicating features around the evaluation point for the original graphic pattern
  • a bias estimation unit that estimates a process bias indicating the amount of deviation between the position of the evaluation point on the original graphic pattern and the position on the actual graphic pattern based on the feature amount
  • the evaluation point setting unit sets an evaluation point at a predetermined position on the contour line based on the original graphic pattern including information on the contour line indicating the boundary between the inside and the outside of the figure
  • the feature quantity extraction unit includes a rectangular aggregate replacement unit that replaces the graphics included in the original graphic pattern with an aggregate of rectangles;
  • and a calculation function providing unit that provides, for each evaluation point, a calculation function for calculating a feature amount based on the positional relationship with respect to the rectangles positioned around that evaluation point.
  • a twenty-fifth aspect of the present invention is the graphic pattern shape estimation apparatus according to the twenty-fourth aspect described above,
  • the calculation function providing unit provides a plurality of n calculation functions for obtaining a plurality of n types of feature amounts whose consideration ranges vary from a feature amount considering a narrow range near the evaluation point to a feature amount considering a wide range extending far from the evaluation point, and
  • the feature amount calculation unit calculates a plurality of n types of feature amounts for each evaluation point using the plurality of n types of calculation functions.
  • the calculation function providing unit provides a calculation function for calculating a feature amount based on a positional relationship with respect to four sides of a rectangle positioned around one evaluation point.
  • a twenty-seventh aspect of the present invention is the graphic pattern shape estimation apparatus according to the twenty-sixth aspect described above, wherein, in an XY two-dimensional orthogonal coordinate system in which the X-axis positive direction points to the right and the Y-axis positive direction points upward, the rectangular aggregate replacement unit performs replacement with an aggregate of rectangles each having upper and lower sides parallel to the X axis and left and right sides parallel to the Y axis, and
  • the calculation function providing unit provides a calculation function that calculates the feature amount on this XY two-dimensional orthogonal coordinate system based on a left side position deviation indicating the X-axis-direction distance from the evaluation point to the left side, a right side position deviation indicating the distance to the right side, an upper side position deviation indicating the Y-axis-direction distance from the evaluation point to the upper side, and a lower side position deviation indicating the distance to the lower side.
  • the calculation function providing unit defines, for one specific target rectangle, a horizontal function that is the sum of an X-axis monotonically increasing function, whose function value increases monotonically as the variable value increases and which takes the X coordinate value of the left side of the target rectangle as a variable, and an X-axis monotonically decreasing function, whose function value decreases monotonically as the variable value increases and which gives a function value of 0 when the X coordinate value of the right side of the target rectangle is given as a variable,
  • and a vertical function that is likewise the sum of a Y-axis monotonically increasing function taking the Y coordinate value of the lower side of the target rectangle as a variable and a Y-axis monotonically decreasing function that gives a function value of 0 when the Y coordinate value of the upper side of the target rectangle is given as a variable;
  • an amount indicating the positional relationship with respect to the target rectangle is calculated based on the product of the function value of the horizontal function, taking the X coordinate value of the target evaluation point as a variable, and the function value of the vertical function, taking the Y coordinate value of the target evaluation point as a variable, and a calculation function is provided in which the sum of such amounts over the rectangles located around the target evaluation point is used as the feature amount for the target evaluation point.
  • a twenty-ninth aspect of the present invention is the figure pattern shape estimation apparatus according to the twenty-eighth aspect described above,
  • the calculation function providing unit provides a plurality of n types of calculation functions using functions having different degrees of monotonic increase or monotonic decrease, as calculation functions for calculating a plurality of n types of feature amounts with different consideration ranges.
  • the calculation function providing unit prepares calculation functions including a monotonically increasing function or a monotonically decreasing function whose variable is the value obtained by dividing the left side position deviation, right side position deviation, upper side position deviation, or lower side position deviation by a spreading coefficient σ, and provides a plurality of n types of calculation functions by changing the spreading coefficient σ.
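  • one plausible realization of these rectangle-based features uses logistic steps: a factor that rises past a rectangle's left side and falls past its right side, a matching vertical factor, their product summed over rectangles, with the spreading coefficient `sigma` scaling the deviations to set the consideration range. The functional form and names are assumptions for illustration; the patent does not fix them.

```python
import math

def smooth_step(t):
    """Monotonically increasing function from 0 to 1 (logistic)."""
    return 1.0 / (1.0 + math.exp(-t))

def rect_feature(ev, rects, sigma):
    """Feature for evaluation point ev = (x, y): for each rectangle
    (xL, yB, xR, yT), a horizontal factor rising past the left side and
    falling past the right side, a vertical factor likewise for the
    bottom/top sides, summed as products over all rectangles. Varying
    `sigma` yields the n feature amounts with different ranges."""
    x, y = ev
    total = 0.0
    for xL, yB, xR, yT in rects:
        h = smooth_step((x - xL) / sigma) - smooth_step((x - xR) / sigma)
        v = smooth_step((y - yB) / sigma) - smooth_step((y - yT) / sigma)
        total += h * v
    return total

# Near 1 deep inside a large rectangle, near 0 far outside it.
inside = rect_feature((5.0, 5.0), [(0.0, 0.0, 10.0, 10.0)], sigma=0.5)
outside = rect_feature((50.0, 50.0), [(0.0, 0.0, 10.0, 10.0)], sigma=0.5)
```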
  • the rectangular aggregate replacement unit recognizes the internal and external areas of each figure, further recognizes the dose amount for each figure, and sets that dose amount for each rectangle corresponding to each figure, and
  • the calculation function providing unit provides a calculation function including a dose amount set for each rectangle as a variable.
  • the rectangular aggregate replacement unit recognizes the unit line segments that form the contour lines of each figure based on the original figure pattern, and sets a minute width for each unit line segment, so that the figures included in the original figure pattern are replaced with an aggregate of rectangles having a minute width.
  • when the feature quantity calculation unit calculates the feature quantity for an evaluation point, it defines a reference circle having a predetermined radius centered on that evaluation point, and performs the calculation considering only the positional relationships with the rectangles belonging to the predetermined neighborhood range defined by the reference circle.
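  • the reference-circle cutoff can be sketched as a simple pre-filter over the rectangle aggregate; using the distance to a rectangle's nearest point as the membership test is an implementation choice, and the helper name is illustrative.

```python
import math

def rects_in_range(ev, rects, radius):
    """Keep only rectangles whose nearest point lies inside a reference
    circle of the given radius centred on the evaluation point; distant
    rectangles are dropped from the feature computation."""
    x, y = ev
    kept = []
    for (xL, yB, xR, yT) in rects:
        # distance from the point to the closest point of the rectangle
        dx = max(xL - x, 0.0, x - xR)
        dy = max(yB - y, 0.0, y - yT)
        if math.hypot(dx, dy) <= radius:
            kept.append((xL, yB, xR, yT))
    return kept

near = rects_in_range((0.0, 0.0),
                      [(1.0, 1.0, 2.0, 2.0), (9.0, 9.0, 10.0, 10.0)],
                      radius=3.0)
```

Since distant rectangles contribute almost nothing to monotonically decaying features, this cutoff mainly saves computation rather than changing the result.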
  • the estimation calculation unit includes a neural network in which the feature quantity input by the feature quantity input unit is used as an input layer and the process bias estimation value is used as an output layer.
  • a thirty-sixth aspect of the present invention is the figure pattern shape estimation apparatus according to the thirty-fifth aspect described above,
  • the neural network included in the estimation calculation unit performs the process bias estimation processing using, as learning information, parameters obtained in a learning stage that uses dimension values obtained by measuring the actual dimensions of actual figure patterns formed on an actual substrate by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure.
  • a thirty-seventh aspect of the present invention is the figure pattern shape estimation apparatus according to the thirty-fifth or thirty-sixth aspect described above,
  • the estimation calculation unit obtains an estimated value of the deviation amount of the evaluation point in the normal direction of the contour line as the estimated value of the process bias for the evaluation point located on the contour line of the predetermined figure.
  • a thirty-eighth aspect of the present invention is a graphic pattern shape correction apparatus for correcting the shape of the original graphic pattern using the graphic pattern shape estimation apparatus according to the twenty-fourth to thirty-seventh aspects described above. In addition to the evaluation point setting unit, the feature amount extraction unit, and the bias estimation unit that constitute the figure pattern shape estimation apparatus, a pattern correction unit is further provided for correcting the original figure pattern based on the estimated value of the process bias output from the bias estimation unit. The corrected figure pattern obtained by the pattern correction unit is given to the figure pattern shape estimation apparatus as a new original figure pattern, so that the correction of the figure pattern is executed repeatedly.
  • a thirty-ninth aspect of the present invention realizes the graphic pattern shape estimation apparatus according to the twenty-fourth to thirty-seventh aspects described above, or the graphic pattern shape correction apparatus according to the thirty-eighth aspect described above, by incorporating a program into a computer.
  • a graphic pattern shape estimation method for estimating a shape of an actual graphic pattern formed on an actual substrate by simulating a lithography process using the original graphic pattern.
  • An original graphic pattern input stage in which a computer inputs an original graphic pattern including contour information indicating the boundary between the inside and the outside of the graphic;
  • An evaluation point setting stage in which the computer sets an evaluation point at a predetermined position on the contour line;
  • a feature amount extraction stage in which the computer extracts a feature amount indicating features around the evaluation point for the original graphic pattern;
  • a process bias estimation stage in which a computer estimates a process bias indicating a deviation amount between the position of the evaluation point on the original graphic pattern and the position on the actual graphic pattern based on the feature amount;
  • a rectangular assembly replacement stage for replacing a graphic included in the original graphic pattern with a rectangular assembly;
  • a feature amount calculation stage in which, for each evaluation point, the computer calculates a feature amount based on the positional relationship with respect to the rectangles positioned around that evaluation point.
  • an evaluation point is set on the original graphic pattern, and a process bias indicating a deviation amount with respect to the evaluation point is estimated.
  • a reduction process for reducing the original image corresponding to the original graphic pattern is performed, an image pyramid composed of a plurality of hierarchical images having different sizes is created, and features at the evaluation point position are extracted as feature values for each hierarchical image.
  • the graphic included in the original graphic pattern is replaced with a rectangular aggregate, and the feature amount is extracted based on the positional relationship with respect to the rectangle for each evaluation point. For this reason, it becomes possible to perform accurate simulation by extracting accurate feature values from the original graphic pattern, and it is possible to accurately estimate the shape of the actual graphic pattern formed on the actual substrate.
  • furthermore, according to the figure pattern shape estimation apparatus of the present invention, the original figure pattern can be corrected based on the estimation result, so that it is also possible to provide a figure pattern shape correction apparatus capable of accurately correcting the shape of the original figure pattern.
  • A block diagram showing the structure of the figure pattern shape correction apparatus 100 according to the basic embodiment of the present invention. A plan view showing an example of the difference in shape produced between an original figure pattern and an actual figure pattern. A plan view showing, for the example shown in FIG. 2, an example of the setting of evaluation points and the process bias arising at each evaluation point. A flowchart showing the design/manufacturing process of a product using the figure pattern shape correction apparatus 100 shown in FIG. 1. A plan view showing the concept of grasping …
  • A diagram showing the edge length density map M2 created based on the original figure pattern 10. A plan view showing an original figure pattern 10 containing dose amount information. A diagram showing the dose density map M3 created based on the original figure pattern 10 with dose amounts. A plan view showing the procedure for creating the k-th hierarchical image Pk by applying the filtering process using the Gaussian filter GF33 to the k-th preparation image Qk. A plan view showing the k-th hierarchical image Pk obtained by that filtering process. A plan view showing examples of the image processing filters used for the filtering process.
  • FIG. 7 is a plan view showing a procedure for creating an image pyramid PP composed of n hierarchical images P1 to Pn in the image pyramid creation unit 122 shown in FIG. 1;
•	FIG. 6 is a plan view showing a procedure for calculating feature values for an evaluation point E from each hierarchical image in the feature value calculation unit 123 shown in FIG. 1. A diagram showing the specific calculation method used in the feature value calculation procedure shown in FIG. 6.
•	FIG. 7 is a plan view showing a procedure for creating an image pyramid PD composed of n types of difference images D1 to Dn in the image pyramid creation unit 122 shown in FIG. 1. A block diagram showing an embodiment using a neural network as the estimation calculation unit 132 shown in FIG. 1.
  • FIG. 25 is a diagram showing a specific calculation process executed by the neural network shown in FIG. 24.
•	FIG. 26 is a diagram illustrating an arithmetic expression for obtaining each value of the first hidden layer in the diagram shown in FIG. 25. A diagram showing a specific example of the activation function f(·).
  • FIG. 26 is a diagram illustrating an arithmetic expression for obtaining each value of a second hidden layer to an Nth hidden layer in the diagram illustrated in FIG. 25.
•	FIG. 26 is a diagram illustrating an arithmetic expression for obtaining the value y of the output layer in the diagram shown in FIG. 25. A flowchart showing the procedure of the learning stage for obtaining the learning information L used by the neural network shown in FIG. 24.
  • FIG. 33 is a plan view illustrating an example of a process of replacing the original graphic pattern 10 with a rectangular aggregate 50 by the rectangular aggregate replacing unit 221 illustrated in FIG. 32.
  • FIG. 33 is a plan view illustrating another example of a process of replacing the original graphic pattern 10 with the rectangular aggregate 50 by the rectangular aggregate replacing unit 221 illustrated in FIG. 32.
•	FIG. 33 is a diagram illustrating an example of the feature amount calculation principle of the feature amount calculation unit 222 and the calculation function provided by the calculation function providing unit 223 illustrated in FIG. 32.
•	FIG. 37 is a diagram showing the positional relationship between the X-axis monotonically increasing function +erf[(X−Li)/σk] used in the calculation function shown in FIG. 36 and the rectangle Fi.
•	FIG. 37 is a diagram showing the positional relationship between the X-axis monotonically decreasing function −erf[(X−Ri)/σk] used in the calculation function shown in FIG. 36 and the rectangle Fi.
•	FIG. 37 is a diagram showing the positional relationship between the horizontal direction function fhi(σk) and the vertical direction function fvi(σk) used in the calculation function shown in FIG. 36 and the rectangle Fi.
•	FIG. 37 is a diagram showing the role of the expansion coefficient σk used in the calculation function shown in FIG. 36.
•	FIG. 37 is a diagram showing an example of the calculation function taking the dose amount into consideration.
•	A plan view showing a process of replacing a figure with a rectangular aggregate by the rectangular aggregate replacing unit 221.
•	FIG. 33 is a plan view showing a process of replacing a figure with a rectangular aggregate by setting a minute width for each unit line segment constituting the contour line of the figure, performed by the rectangular aggregate replacing unit 221 shown in FIG. 32. A diagram showing an example of the calculation function applied to that rectangular aggregate.
•	FIG. 33 is a plan view illustrating a method for improving the efficiency of feature amount calculation by the feature amount calculation unit 222 illustrated in FIG. 32. A diagram showing a first example (a line-and-space pattern) comparing the feature extraction processing time between the basic embodiment and the additional embodiment of the present invention.
•	As shown in FIG. 1, the figure pattern shape correction apparatus 100 includes an evaluation point setting unit 110, a feature amount extraction unit 120, a bias estimation unit 130, and a pattern correction unit 140.
•	In other words, the figure pattern shape estimation apparatus 100′ according to the present invention is configured by the three units of the evaluation point setting unit 110, the feature amount extraction unit 120, and the bias estimation unit 130, and the figure pattern shape correction apparatus 100 is configured by adding a pattern correction unit 140 as a fourth unit to the figure pattern shape estimation apparatus 100′.
•	The figure pattern shape estimation apparatus 100′ serves to estimate the shape of the actual figure pattern 20 formed on the actual substrate S by simulating the lithography process using the original figure pattern 10.
•	In FIG. 1, a dash-dot arrow extends from the original graphic pattern 10, and an actual substrate S bearing a real graphic pattern 20 is drawn at the tip of the arrow.
•	This dash-dot arrow indicates a physical lithography process.
•	That is, the original graphic pattern 10 shown in the figure is graphic pattern data created by design work using a computer, and the dash-dot arrow indicates that the actual substrate S is manufactured by carrying out a physical lithography process of exposure, development, and etching based on this data. On the actual substrate S, an actual graphic pattern 20 corresponding to the original graphic pattern 10 is formed. However, when such a lithography process is performed, a slight discrepancy arises between the original graphic pattern 10 and the actual graphic pattern 20. This is because, as described above, it is difficult to form an accurate figure on the actual substrate S under the various conditions of the exposure, development, and etching steps included in the lithography process.
  • FIG. 2 is a plan view showing a specific example in which a difference in shape occurs between the original graphic pattern 10 and the actual graphic pattern 20.
•	In manufacturing a semiconductor device or the like, it is actually necessary to form very fine and complicated figure patterns on the surface of an actual substrate S such as silicon. Here, for convenience of explanation, a case where a simple figure is given as the original figure pattern 10 will be described.
•	The illustrated original figure pattern 10 is a pattern composed of one rectangle; for example, it is original figure data indicating that a material layer corresponding to the hatched rectangular inner region is to be formed on the actual substrate S.
•	In the lithography process, a resist layer is formed on the upper surface of the material layer on the actual substrate S, and drawing on the resist layer is performed by exposure with light or an electron beam. If the resist layer is exposed in the inner region (hatched portion) of the original graphic pattern 10 shown in FIG. 2(a) and is then developed to remove the non-exposed portion, the exposed portion remains as a resist layer (hatched portion). If the material layer is then etched using the remaining resist layer as a mask, theoretically the inner region (hatched portion) of the original figure pattern 10 can be left in the material layer as well.
  • the actual graphic pattern 20 obtained on the actual substrate S does not exactly match the original graphic pattern 10. This is because the conditions of the steps of exposure, development, and etching included in the lithography process affect the shape of the actual figure pattern 20 finally obtained.
•	In practice, however, when the resist layer is drawn with light or an electron beam, the exposure area actually drawn on the resist layer becomes slightly wider than the original graphic pattern 10 because of the proximity effect (PE: Proximity Effect). As a result, the actual graphic pattern 20 obtained on the actual substrate S becomes an area wider than the original graphic pattern 10 (indicated by a broken line), as shown in FIG. 2(b).
•	The figure pattern shape estimation apparatus 100′ shown in FIG. 1 is an apparatus having a function of performing such estimation: without actually performing the lithography process (exposure, development, and etching) for creating the actual substrate S, it estimates by simulation the shape of the actual figure pattern 20 that would be formed on the actual substrate S.
  • an evaluation point E is set on the original figure pattern 10 by the evaluation point setting unit 110.
•	Specifically, the shape estimation apparatus 100′ is given, as the original figure pattern 10, graphic data including contour line information indicating the boundary between the inside and the outside of the figure, and the evaluation point setting unit 110 performs, based on this original graphic pattern 10, a process of setting evaluation points at predetermined positions on the contour line.
  • FIG. 3 is a plan view showing an example in which several evaluation points are set for the original graphic pattern 10 shown in FIG. 2 (a) and the process bias (dimensional error) occurring at each evaluation point.
  • FIG. 3A is a plan view showing an example in which evaluation points E11, E12, and E13 are set on the outline of the original graphic pattern 10 (rectangular figure) shown in FIG. 2A.
•	FIG. 3 shows a simple example in which three evaluation points E11, E12, and E13 are set for convenience of explanation, but in practice a larger number of evaluation points are set on each side of the rectangle. For example, if evaluation points are set continuously at a predetermined pitch along the contour line, a large number of evaluation points can be set automatically.
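The pitch-based placement of evaluation points described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function name `set_evaluation_points`, the rectangle coordinates, and the pitch value are hypothetical.

```python
# Hypothetical sketch: walk the closed contour of a polygon and place an
# evaluation point every `pitch` units of arc length, as described in the text.

def set_evaluation_points(vertices, pitch):
    """Return (x, y) points spaced `pitch` apart along the closed polygon."""
    points = []
    carry = 0.0  # arc length already consumed from the previous edge
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        edge_len = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        d = carry
        while d < edge_len:
            t = d / edge_len
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += pitch
        carry = d - edge_len
    return points

# A 100 x 60 rectangle sampled every 10 units: perimeter 320 / pitch 10 = 32 points.
rect = [(0, 0), (100, 0), (100, 60), (0, 60)]
eval_points = set_evaluation_points(rect, 10.0)
print(len(eval_points))  # → 32
```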
•	FIG. 3(b) is a plan view showing the contour line (solid line) of the real graphic pattern 20 shown in FIG. 2(b) compared with the contour line (broken line) of the original graphic pattern 10 shown in FIG. 2(a).
  • the contour line of the real graphic pattern 20 indicated by a solid line extends outward by a dimension y as compared to the contour line of the original graphic pattern 10 indicated by a broken line.
•	For example, the horizontal width a of the original graphic pattern 10 extends to the horizontal width b in the actual graphic pattern 20; similarly, the vertical width also expands slightly.
  • the evaluation points E21, E22, E23 on the actual graphic pattern 20 are evaluation points determined as points corresponding to the evaluation points E11, E12, E13 on the original graphic pattern 10.
  • the evaluation point E21 is defined as a point shifted from the evaluation point E11 by a predetermined dimension y11 toward the outer side in the normal direction of the contour line indicated by a broken line.
•	Similarly, the evaluation point E22 is defined as a point shifted from the evaluation point E12 by a predetermined dimension y12 toward the outside in the normal direction of the contour line indicated by a broken line, and the evaluation point E23 is defined as a point shifted from the evaluation point E13 by a predetermined dimension y13 toward the outside in the normal direction of the contour line indicated by a broken line.
•	In this application, the deviation amount y in the normal direction of the contour line arising at each evaluation point E is used as an index of this shape difference. This deviation amount y is referred to as the "process bias y" because it is a bias amount caused by the lithography process.
  • the process bias y is a value having a positive / negative sign, and in the embodiment described below, the direction in which the figure exposure part (drawing part) is fattened is defined as a positive value, and the direction in which the figure is thinned is defined as a negative value.
•	In the case of the figure patterns handled here, the inside of the figure surrounded by the contour line is the exposure part (drawing part), so a shift toward the outside of the contour line gives a positive value and a shift toward the inside of the contour line gives a negative value. In the example shown in FIG. 3(b), the process biases y11, y12, and y13 all take positive values.
  • the value of the process bias y of each evaluation point E differs for each evaluation point.
  • the values of the process biases y11, y12, and y13 are individual values. This is because the relative positions of the evaluation points E11, E12, and E13 with respect to the original graphic pattern 10 are different, so that the influence of the lithography process is also different, and the amount of deviation that occurs is also different. Therefore, in order to improve the estimation accuracy when the shape of the actual graphic pattern 20 is estimated by simulation based on the original graphic pattern 10, the influence of the lithography process is appropriately predicted for each evaluation point, and an appropriate process bias is determined. It is important to obtain y.
  • each evaluation point E is set at a predetermined position on the contour line based on the contour line information indicating the boundary between the inside and the outside of the figure included in the original figure pattern 10.
  • evaluation points can be set continuously at predetermined intervals along the contour line.
  • the feature amount extraction unit 120 extracts feature amounts indicating features around each evaluation point E for the original graphic pattern 10.
  • the feature amount x for one evaluation point E is a value indicating the features around the evaluation point E.
  • the feature quantity extraction unit 120 includes an original image creation unit 121, an image pyramid creation unit 122, and a feature quantity calculation unit 123, as shown in FIG.
•	The original image creation unit 121 creates an original image composed of an aggregate of pixels each having a predetermined pixel value, based on the given original graphic pattern 10. For example, when an original figure pattern 10 as shown in FIG. 2(a) is given, if a pixel value of 1 is assigned to the pixels inside the rectangle (the hatched pixels in the figure) and a pixel value of 0 to the pixels outside the rectangle, an original image consisting of a binary image is created.
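The binarization just described can be sketched in a few lines; the helper name `rasterize_rect` and the image dimensions are hypothetical, and a real implementation would rasterize arbitrary polygon contour data rather than a single axis-aligned rectangle.

```python
# Hypothetical sketch: pixels inside the rectangle receive pixel value 1,
# pixels outside receive pixel value 0, yielding a binary original image.

def rasterize_rect(width, height, rect):
    """rect = (x0, y0, x1, y1) in pixel coordinates; returns rows of 0/1."""
    x0, y0, x1, y1 = rect
    return [[1 if (x0 <= x < x1 and y0 <= y < y1) else 0
             for x in range(width)]
            for y in range(height)]

img = rasterize_rect(8, 6, (2, 1, 6, 5))
print(sum(map(sum, img)))  # → 16 interior pixels
```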
  • the image pyramid creation unit 122 performs an image pyramid creation process including a reduction process for creating a reduced image by reducing the original image, and creates an image pyramid including a plurality of n layer images.
•	Each of the n layer images constituting the layers of the image pyramid is an image obtained by performing predetermined image processing on the original image created by the original image creation unit 121, and the n layer images have different sizes.
•	Such a set of hierarchical images is called an "image pyramid" because, when the layered images are stacked in order from largest to smallest, the resulting hierarchical structure looks like a pyramid.
•	The feature amount calculation unit 123 calculates a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E for each of the n layer images constituting the image pyramid. Specifically, the feature amount x1 is calculated based on the pixel values of the pixels near the evaluation point E in the first hierarchical image, the feature amount x2 based on those in the second hierarchical image, and so on, up to the feature amount xn based on the pixel values of the pixels near the evaluation point E in the nth hierarchical image; in this way, n feature amounts x1 to xn are extracted for each evaluation point. For example, in the example shown in FIG. 3(a), n feature values x1(E11) to xn(E11) are extracted for the evaluation point E11, n feature values x1(E12) to xn(E12) for the evaluation point E12, and n feature values x1(E13) to xn(E13) for the evaluation point E13.
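One simple way to realize this per-layer extraction is sketched below: the feature for each layer is taken as the average pixel value in a 3×3 window around the evaluation point, with the point's coordinates rescaled to each layer's size. The window size and the averaging are illustrative assumptions, not the patent's prescribed calculation.

```python
# Hypothetical sketch: one feature value per hierarchical image, obtained from
# pixel values near the evaluation point E (here a 3x3 neighborhood average).

def feature_at(image, ex, ey):
    h, w = len(image), len(image[0])
    vals = [image[y][x]
            for y in range(max(0, ey - 1), min(h, ey + 2))
            for x in range(max(0, ex - 1), min(w, ex + 2))]
    return sum(vals) / len(vals)

def features_for_point(pyramid, ex, ey, base_size):
    feats = []
    for layer in pyramid:
        scale = len(layer[0]) / base_size  # layers shrink, so scale <= 1
        feats.append(feature_at(layer, int(ex * scale), int(ey * scale)))
    return feats  # the n feature amounts x1 ... xn for this evaluation point

layer1 = [[1.0] * 4 for _ in range(4)]   # toy first layer, all inside the figure
layer2 = [[0.0] * 2 for _ in range(2)]   # toy second layer, all outside
print(features_for_point([layer1, layer2], 2, 2, 4))  # → [1.0, 0.0]
```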
•	Based on the feature value x extracted by the feature value extraction unit 120, the bias estimation unit 130 performs a process of estimating the process bias y, which indicates the amount of deviation between the position of the evaluation point E on the original graphic pattern 10 and the corresponding position on the actual graphic pattern 20.
  • the bias estimation unit 130 includes a feature amount input unit 131 and an estimation calculation unit 132.
•	The feature amount input unit 131 is a component that inputs the feature amounts x1 to xn calculated for the evaluation point E by the feature amount calculation unit 123, and the estimation calculation unit 132, based on the learning information L obtained in a learning stage performed in advance, obtains an estimated value corresponding to the feature amounts x1 to xn and outputs it as the estimated value y of the process bias for the evaluation point E.
•	In short, the estimation calculation unit 132 outputs, for each evaluation point E located on the contour line of the graphic constituting the original graphic pattern 10, an estimated process bias y as a deviation amount in the normal direction of the contour line. In the example shown in FIG. 3, the process bias y11 for the evaluation point E11, the process bias y12 for the evaluation point E12, and the process bias y13 for the evaluation point E13 are output from the estimation calculation unit 132 as estimated values.
•	Once the estimated value y of the process bias for each evaluation point E is obtained, a new position for each evaluation point E (a position shifted by the process bias y in the normal direction of the contour line) can be determined; therefore, as shown in FIG. 3(b), the shape of the actual graphic pattern 20 can be estimated.
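The shift along the contour normal can be expressed directly; `shift_point` is a hypothetical helper, and the coordinates below assume an evaluation point on the right side of a rectangle, whose outward unit normal is (1, 0).

```python
# Hypothetical sketch: a contour point estimated on the actual pattern is the
# evaluation point shifted by the process bias y along the outward unit normal.

def shift_point(px, py, nx, ny, bias):
    return (px + bias * nx, py + bias * ny)

# Positive bias = fattening: the point moves outward by y = 2.5.
print(shift_point(100.0, 30.0, 1.0, 0.0, 2.5))  # → (102.5, 30.0)
```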
•	The figure pattern shape correction apparatus 100 is an apparatus that corrects the shape of the original figure pattern 10 using the figure pattern shape estimation apparatus 100′ described above; as shown in FIG. 1, it includes a pattern correction unit 140 in addition to the evaluation point setting unit 110, the feature amount extraction unit 120, and the bias estimation unit 130, which are the constituent elements of the figure pattern shape estimation apparatus 100′.
•	The pattern correction unit 140 is a component that corrects the original graphic pattern 10 based on the estimated value y of the process bias output from the bias estimation unit 130, and the corrected graphic pattern 15 obtained by this correction is the final output of the figure pattern shape correction apparatus 100.
•	In short, if the lithography process is performed using the original graphic pattern 10 as it is, an actual graphic pattern 20 wider than designed is formed on the actual substrate S; if the pattern is corrected in advance so as to cancel the process bias, a rectangle having the width a as originally designed can be obtained.
  • the evaluation point E11 is moved to the left (inside the rectangle) by the process bias y11, and the evaluation point E12 is moved to the left (inside the rectangle) by the process bias y12. Then, the correction may be performed by moving the evaluation point E13 upward (inside the rectangle) by the process bias y13.
•	In practice, correction is performed by moving all the evaluation points toward the inside of the rectangle by a dimension corresponding to the process bias; if a new contour line connecting the moved evaluation points is defined, a corrected graphic pattern 15 consisting of the figure defined by that contour line is obtained. Since such correction processing itself is a known technique, a detailed description is omitted here.
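The correction rule can be sketched as the inverse shift: each evaluation point is moved inward (against the outward normal) by its estimated bias. The helper name, the point coordinates, and the bias values are hypothetical.

```python
# Hypothetical sketch: pre-compensate the contour by moving every evaluation
# point inward by its estimated process bias y.

def correct_contour(points, normals, biases):
    return [(px - y * nx, py - y * ny)
            for (px, py), (nx, ny), y in zip(points, normals, biases)]

pts = [(100.0, 10.0), (100.0, 30.0), (50.0, 0.0)]  # illustrative E11, E12, E13
nrm = [(1.0, 0.0), (1.0, 0.0), (0.0, -1.0)]        # outward unit normals
bias = [2.0, 2.0, 1.5]                             # estimated y11, y12, y13
print(correct_contour(pts, nrm, bias))  # → [(98.0, 10.0), (98.0, 30.0), (50.0, 1.5)]
```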
•	However, even if the lithography process is executed using the corrected graphic pattern 15, the actual graphic pattern 25 obtained does not exactly match the original figure pattern 10 as initially designed (for example, the lateral width of the rectangle formed on the actual substrate S is not exactly a). This is because the graphic included in the original graphic pattern 10 and the graphic included in the corrected graphic pattern 15 differ in size and shape, so the influence of effects such as the proximity effect differs when the lithography process is executed.
•	Nevertheless, the actual figure pattern 25 obtained by executing the lithography process using the corrected figure pattern 15 is closer to the original figure pattern 10 than the pattern obtained by executing the lithography process using the original figure pattern 10 as it is; a more accurate figure pattern can therefore be obtained on the actual substrate S by using the corrected figure pattern 15 obtained by the pattern correction unit 140. In other words, if correction is performed by the pattern correction unit 140, it is certain that the error is reduced.
•	Therefore, the process of giving the corrected figure pattern 15 output from the pattern correction unit 140 back to the figure pattern shape correction apparatus 100 is repeated. That is, the corrected figure pattern 15 is given as a new original figure pattern to the figure pattern shape estimation apparatus 100′, and the processing described above is executed for this new original figure pattern (corrected figure pattern 15). Specifically, evaluation points E are set on the corrected graphic pattern 15 by the evaluation point setting unit 110, the feature amount for each evaluation point E is extracted by the feature amount extraction unit 120, and the estimated process bias value y for each evaluation point E is calculated by the bias estimation unit 130. Then, using the calculated process bias estimated value y, the correction process is performed again in the pattern correction unit 140.
  • the graphic pattern shape correction apparatus 100 shown in FIG. 1 has a function of repeatedly executing correction on a graphic pattern in this way.
•	That is, a first corrected graphic pattern 15 is obtained based on the original graphic pattern 10, a second corrected graphic pattern is obtained based on the first corrected graphic pattern 15, a third corrected graphic pattern is obtained based on the second corrected graphic pattern, and so on; each time this correction process is repeated, the shape error between the original graphic pattern and the graphic pattern obtained by simulation is reduced.
•	When the shape error between the original graphic pattern and the graphic pattern obtained by simulation converges within a predetermined tolerance, the correction is completed; if the actual lithography process is then executed using the last obtained figure pattern, a real graphic pattern close to the original graphic pattern 10 as initially designed can be formed on the real substrate S.
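The repeated correction amounts to a fixed-point iteration. The sketch below uses a toy one-dimensional model (the simulated bias is 10% of the current width) purely to show the loop structure; `estimate_biases` and `apply_correction` stand in for the bias estimation unit 130 and the pattern correction unit 140 and are hypothetical.

```python
# Hypothetical sketch of the correction loop: re-estimate, re-correct, and stop
# once the largest simulated bias falls within tolerance.

def iterate_correction(pattern, estimate_biases, apply_correction,
                       tol=0.5, max_rounds=10):
    for _ in range(max_rounds):
        biases = estimate_biases(pattern)       # simulated process bias y per point
        if max(abs(b) for b in biases) <= tol:  # "correction complete" check
            break
        pattern = apply_correction(pattern, biases)
    return pattern

# Toy model: the bias is 10% of the current width, and correction subtracts it.
widths = iterate_correction(
    [10.0],
    estimate_biases=lambda p: [0.1 * w for w in p],
    apply_correction=lambda p, b: [w - y for w, y in zip(p, b)],
)
print(round(widths[0], 3))  # → 4.783 (residual bias now within tolerance)
```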
•	Thus, by using the figure pattern shape correction apparatus 100 according to the present invention, the shape of the original figure pattern can be accurately corrected.
  • each of the evaluation point setting unit 110, the feature amount extraction unit 120, the bias estimation unit 130, and the pattern correction unit 140 shown in FIG. 1 is configured by incorporating a predetermined program into a computer. Therefore, the figure pattern shape estimation apparatus 100 ′ and the figure pattern shape correction apparatus 100 according to the present invention are actually realized by incorporating a dedicated program into a general-purpose computer.
  • FIG. 4 is a flowchart showing a product design / manufacturing process using the figure pattern shape correcting apparatus 100 shown in FIG.
  • a product design stage is performed.
  • This product design stage is a process of creating a graphic pattern for configuring a semiconductor device or the like, and the original graphic pattern 10 shown in FIG. 1 is created at this product design stage.
  • An apparatus for designing a product such as a semiconductor device and creating a graphic pattern is already a known apparatus, and therefore detailed description thereof is omitted here.
  • the evaluation point setting stage in the next step S2 is a stage executed in the evaluation point setting unit 110 shown in FIG.
•	The subsequent feature quantity extraction stage of step S3 is executed in the feature quantity extraction unit 120 shown in FIG. 1; as described above, n kinds of feature quantities x1 to xn are extracted for each evaluation point E.
•	The process bias estimation stage of step S4 is executed in the bias estimation unit 130 shown in FIG. 1; as described above, the estimated process bias value y for each evaluation point E is obtained using the n feature quantities x1 to xn (the detailed calculation procedure is described in §3).
•	The pattern shape correction stage of step S5 is executed in the pattern correction unit 140 shown in FIG. 1; as described above, a corrected graphic pattern 15 is obtained by correcting the original figure pattern 10 using the process bias estimated value y obtained for each evaluation point E. Since a single correction is generally not sufficient, the process of returning to step S2 is repeated until it is determined in step S6 that the correction is complete. That is, by treating the corrected graphic pattern 15 obtained in step S5 as a new original graphic pattern 10, the processes of steps S2 to S5 are executed repeatedly.
•	If it is determined in step S6 that the correction is complete as a result of such repeated processing, the process proceeds to step S7, and the lithography process is executed.
•	The determination of "correction complete" can be made, for example, by checking whether a specific condition is satisfied, such as "for a certain percentage of the evaluation points E, the error between the position on the original graphic pattern and the position on the graphic pattern obtained by simulation is equal to or less than a predetermined reference value".
•	In step S7, actual processes such as exposure, development, and etching are performed based on the finally obtained corrected graphic pattern, and the actual substrate S is manufactured. In short, steps S1 to S6 are processes executed on a computer, and step S7 is a process executed on the actual substrate S.
  • the most characteristic component of the present invention is a feature quantity extraction unit 120.
  • the present invention produces an effect of extracting an accurate feature amount from the original graphic pattern 10 and performing an accurate simulation to accurately estimate the shape of the actual graphic pattern 20 formed on the actual substrate S.
  • the component that plays the most important role in obtaining such an effect is the feature quantity extraction unit 120.
  • an important feature of the present invention is that feature amounts are extracted from the original graphic pattern 10 by a very unique method. Therefore, here, the basic concept of feature quantity extraction in the present invention will be described.
  • FIG. 5 is a plan view showing the concept of grasping the surrounding features of each evaluation point E11, E12, E13 defined on the outline of the graphic pattern 10 made of a rectangle.
•	FIG. 5(a) shows a state in which the features inside the reference circle C1 and inside the reference circle C2 are extracted for the evaluation point E11 set at the center of the right side of the rectangle.
  • the reference circles C1 and C2 are both circles centered on the evaluation point E11, but the reference circle C2 is larger than the reference circle C1.
•	Similarly, FIG. 5(b) shows a state in which the internal features of the two reference circles C1 and C2 are extracted for the evaluation point E12 set below the right side of the rectangle, and FIG. 5(c) shows a state in which the internal features of the two reference circles C1 and C2 are extracted for the evaluation point E13 set at the center of the lower side of the rectangle.
•	When the internal features of the reference circle C1 are compared for each evaluation point, for the evaluation points E11 and E12 the left half is inside the figure (hatched area) and the right half is outside the figure (blank area), so there is no difference in the internal features of the reference circle C1. For the evaluation point E13, the internal features of the reference circle C1 are such that the upper half is inside the figure (hatched area) and the lower half is outside the figure (blank area); apart from orientation, the occupation ratio of the hatched area does not differ from that inside the reference circle C1 of the evaluation points E11 and E12.
•	On the other hand, when the internal features of the reference circle C2 are compared for the respective evaluation points E11, E12, and E13, it can be seen that the distributions of the hatched regions differ, that is, the features differ.
•	In other words, the extracted features differ depending on whether features are extracted from a narrow neighborhood such as the reference circle C1 or from a slightly wider neighborhood such as the reference circle C2. Therefore, when the features of a neighboring region are quantitatively extracted as some feature amount x for a certain evaluation point E, it can be seen that the feature amount can be extracted in various ways by changing the range of the neighboring region stepwise.
  • the proximity effect in electron beam exposure includes various effects such as an effect caused by forward scattering having a narrow influence range and an effect caused by back scattering having a wide influence range.
•	Generally, forward scattering is explained as a phenomenon in which, when an electron beam is irradiated onto a molding layer such as a resist layer, the low-mass electrons spread while being scattered by molecules in the resist, and back scattering is described as a phenomenon in which electrons scattered and bounced back near the surface of the metal substrate or the like under the resist layer diffuse within the resist layer.
  • a process bias is also generated by the etching process, and the magnitude of the process bias varies depending on a loading phenomenon during etching. Similar to the proximity effect in the exposure process described above, this loading phenomenon is also caused by combining various components such as a component having a narrow influence range and a component having a wide influence range.
•	Consequently, the value of the process bias y for a certain evaluation point E is determined by the fusion of phenomena of various scales. Therefore, extracting a variety of feature values, from those relating to a narrow range to those relating to a wide range, is important for accurately simulating the various phenomena in each process that affect the process bias and have different influence ranges. For this reason, in the present invention, for one evaluation point E, feature quantities are extracted for various surrounding regions, from near to far. As described above, in order to extract a plurality of feature amounts for one evaluation point E, the present invention adopts a method of creating an "image pyramid" composed of a plurality of hierarchical images of different sizes. This image pyramid contains information in which various phenomena with different influence ranges are multiplexed.
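As a concrete illustration of short-range and long-range components coexisting, the double-Gaussian point spread function commonly used to model electron-beam proximity effects combines a narrow forward-scattering term and a wide back-scattering term. This is a standard textbook model, not the patent's formula; the widths and the ratio eta below are hypothetical example values.

```python
# Illustrative double-Gaussian proximity-effect model: a narrow forward-
# scattering Gaussian (sigma_f) plus a wide back-scattering Gaussian (sigma_b).
import math

def psf(r, sigma_f=0.03, sigma_b=10.0, eta=0.7):
    fwd = math.exp(-r**2 / sigma_f**2) / (math.pi * sigma_f**2)
    back = math.exp(-r**2 / sigma_b**2) / (math.pi * sigma_b**2)
    return (fwd + eta * back) / (1 + eta)

# Near r = 0 the narrow forward term dominates; a few units away only the wide
# back-scattering term contributes, and far away even that decays.
print(psf(0.0) > psf(1.0) > psf(20.0))  # → True
```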
  • FIG. 6 is a diagram showing an outline of processing executed in the feature amount extraction unit 120 and the bias estimation unit 130 shown in FIG.
  • An original image Q1 shown in the upper part of the figure is an image created by the original image creating unit 121 shown in FIG. 1 and is an image corresponding to the given original figure pattern 10.
•	Here, the original graphic pattern 10 is data created by a semiconductor device design apparatus or the like and indicates a graphic as shown in FIG. 2(a); it is usually given as vector data indicating the contour line of the graphic (data indicating the coordinate value of each vertex and the connection relationship between vertices).
•	Therefore, the original image creation unit 121 executes a process of creating an original image Q1 composed of an aggregate of pixels each having a predetermined pixel value, based on the data of the given original graphic pattern 10. For example, if pixels having a pixel value of 1 are arranged inside the graphic constituting the original graphic pattern 10 and pixels having a pixel value of 0 outside it, the original image Q1 composed of an aggregate of a large number of pixels U can be created.
  • An original image Q1 shown in FIG. 6 is an image composed of such an assembly of pixels U, and has a rectangular figure included in the original figure pattern 10 as image information, as indicated by a broken line.
  • the evaluation point E is set on the contour line of the figure by the evaluation point setting unit 110. In FIG. 6, only one evaluation point E is drawn for convenience, but actually, a large number of evaluation points are set along the contour line of the figure.
  • the image pyramid creation unit 122 shown in FIG. 1 creates an image pyramid PP based on the original image Q1.
  • the image pyramid PP is composed of a plurality of hierarchical images having different sizes.
• The figure shows an image pyramid PP composed of a plurality of n (n ≥ 2) hierarchical images P1 to Pn.
  • the specific procedure for creating the hierarchical images P1 to Pn from the original image Q1 will be described in Section 2.
  • the hierarchical images P1 to Pn are created by a reduction process for reducing the number of pixels.
• The hierarchical image P1 is an image of the same size as the original image Q1 (an image with the same number of vertical and horizontal pixels), whereas the hierarchical image P2 is an image reduced to a smaller size.
  • the hierarchical image P3 is an image having a smaller size obtained by further reducing the hierarchical image P2.
  • hierarchical images P1 to Pn whose image size is gradually reduced are created based on the original image Q1.
• A plurality of hierarchical images of different sizes, stacked one above the other, take the form of a pyramid as shown in the figure; therefore, in the present application, the aggregate of the plurality of hierarchical images P1 to Pn is referred to as an image pyramid PP. Since each of the hierarchical images P1 to Pn is created based on the original image Q1, each carries the information of the original graphic pattern 10, and the position of the evaluation point E can be defined on each of them. In the figure, a rectangular figure is drawn on each of the hierarchical images P1 to Pn, and the evaluation point E is placed on its contour line.
  • FIG. 6 shows a state in which feature amounts x1 to xn of the evaluation points E are extracted from n layer images P1 to Pn constituting the image pyramid PP, respectively.
• The illustrated feature amounts x1 to xn are all values indicating features around the same evaluation point E, but the feature amount x1 is a value calculated based on the pixel values of the pixels near the evaluation point E on the first hierarchical image P1.
• Similarly, the feature amount x2 is a value calculated based on the pixel values of the pixels near the evaluation point E on the second hierarchical image P2, and the feature amount x3 is a value calculated based on the pixel values of the pixels near the evaluation point E on the third hierarchical image P3.
  • a specific calculation procedure for each of the feature amounts x1 to xn will be described in Section 2.
  • FIG. 6 shows only one evaluation point E for convenience, but in actuality, n feature values x1 to xn are calculated for each of a large number of evaluation points.
• Each of the feature amounts x1 to xn is a predetermined scalar value, but n feature amounts x1 to xn are obtained for each evaluation point E. Therefore, if the n feature quantities x1 to xn are regarded as an n-dimensional vector, the feature quantity extraction unit 120 performs a process of extracting a feature quantity composed of an n-dimensional vector for each evaluation point E.
  • the feature amount (n-dimensional vector) for each evaluation point extracted in this way is input by the feature amount input unit 131 in the bias estimation unit 130 and given to the estimation calculation unit 132.
• The estimation calculation unit 132 is configured by a neural network and, based on the learning information L obtained in advance in the learning stage, performs an operation that calculates an estimated value y (a scalar value) of the process bias for the evaluation point E from the feature amounts x1 to xn given as an n-dimensional vector. The specific calculation procedure will be described in §3.
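The estimation step above can be sketched as a forward pass through a small fully connected network. This is a minimal illustration only: the actual network structure and learned weights come from the learning stage described in §3, and every dimension, weight, and name below is an arbitrary assumption for demonstration, not the patent's specification.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def estimate_bias(x, W1, b1, W2, b2):
    """Forward pass of a small fully connected network:
    n-dimensional feature vector x -> scalar process-bias estimate y."""
    h = relu(W1 @ x + b1)          # hidden layer
    return float((W2 @ h + b2)[0])  # scalar output

# Illustrative dimensions: n = 4 features, 8 hidden units, random weights
# standing in for the learning information L.
rng = np.random.default_rng(0)
n, hidden = 4, 8
W1 = rng.normal(size=(hidden, n)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(1, hidden)) * 0.1
b2 = np.zeros(1)

x = np.array([0.375, 0.5, 0.25, 0.125])  # features x1..xn for one evaluation point
y = estimate_bias(x, W1, b1, W2, b2)     # estimated process bias (scalar)
```

In the learning stage, the weights would be fitted so that y matches process biases measured on actual substrates; here they are random placeholders.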
• In this way, an accurate feature amount can be extracted for any original figure pattern 10 even without constructing a physical or experimental simulation model of the actual lithography process.
• Moreover, since there is no need to construct a physical or experimental simulation model in the first place, the various set values of material properties and process conditions need not be considered in the learning stage of the neural network described later in §3.2.
• The present invention is not limited to this use, and can be used in the manufacturing processes of various products that include lithography processes, such as NIL (Nano Imprint Lithography) and EUV (Extreme UltraViolet Lithography).
  • the original graphic pattern may be corrected so that the actual graphic pattern on the master template produced from the original graphic pattern through exposure lithography matches the original graphic pattern.
• The present invention can be applied to all product fields that include lithography processes, such as MEMS (Micro Electro Mechanical Systems), LSPM (Large-size Photomask), lead frames, metal masks, metal mesh sensors, and color filters.
• The feature amount extraction unit 120 includes an original image creation unit 121, an image pyramid creation unit 122, and a feature amount calculation unit 123, and has the function of executing the feature amount extraction process of step S3 in the flowchart of FIG.
• This feature amount extraction process is actually executed by the procedure shown in the flowchart of FIG. 7: steps S31 and S32 are executed by the original image creation unit 121, steps S33 to S36 by the image pyramid creation unit 122, and step S37 by the feature amount calculation unit 123. Hereinafter, the procedure executed by each unit will be described in detail with specific examples.
• The original image creation unit 121 has the function of creating an original image composed of a collection of pixels, each having a predetermined pixel value, based on the given original graphic pattern 10, and executes steps S31 and S32 in the flowchart of FIG. 7. First, in step S31, the original graphic pattern 10 is input, and in the subsequent step S32, the original image creation process is performed.
• The original image creation process of step S32 creates data (raster data) of the original image Q1, composed of a collection of pixels, based on the original graphic pattern 10 given as such vector data.
• Specifically, the original image creation unit 121 defines a mesh composed of a two-dimensional array of pixels U, superimposes the original graphic pattern 10 on this mesh, and determines the pixel value of each pixel U based on the relationship between the position of that pixel and the positions of the contour lines of the figures F1 to F5 constituting the original graphic pattern 10.
  • FIG. 9 is a plan view showing a state where the original image creation unit 121 has performed processing for superimposing the original figure pattern 10 on a mesh composed of a two-dimensional array of pixels U.
• In the illustrated example, a mesh is defined in which pixels U, each of dimension u both vertically and horizontally, are arranged two-dimensionally at a pitch u.
• The pixel dimension u is set to an appropriate value capable of representing the shapes of the figures F1 to F5 with sufficient resolution; the smaller u is set, the higher the resolution of the shape representation, but the heavier the subsequent processing load.
  • pixel values are defined for the individual pixels U based on the relationship with the positions of the contour lines of the graphics F1 to F5. There are several methods for defining pixel values.
  • the most basic definition method is to recognize the internal area and external area of each graphic F1 to F5 based on the original graphic pattern 10, and use the occupancy of the internal area in each pixel U as the pixel value of the pixel.
• In FIG. 9, the hatched areas are the internal areas of the figures F1 to F5, and the white areas are the external areas. Therefore, when this method is adopted, the pixel value of each pixel is defined as the occupancy ratio (0 to 1) of the internal area (hatched area) within that pixel, in the superimposed state shown in FIG. 9.
  • An image in which pixel values are defined by such a method is generally called an “area density map”.
  • FIG. 10 is a diagram showing an area density map M1 created based on the “original graphic pattern 10 + two-dimensional array of pixels” shown in FIG.
  • each cell is each pixel defined in FIG. 9, and the numbers in the cell are pixel values defined for each pixel.
  • a blank cell is a pixel having a pixel value of 0 (illustration of the pixel value of 0 is omitted).
• In this area density map M1, for example, a pixel with a pixel value of 1.0 is one whose occupancy ratio by the hatched area of FIG. 9 is 100%, and a pixel with a pixel value of 0.5 is one whose occupancy ratio is 50%.
• The area density map M1 is basically a binary image in which the inside of a figure is represented by a pixel value of 1 and the outside by a pixel value of 0; however, pixels through which a contour line of a figure passes are given a value indicating the occupancy ratio, so the image as a whole is a monochrome gradation image.
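As a rough sketch of how such an area density map could be computed, the following assumes the figures are axis-aligned rectangles (the general polygon case would need polygon clipping); the function name and grid parameters are illustrative, not from the specification.

```python
import numpy as np

def area_density_map(rects, height, width, u=1.0):
    """Area density map: each pixel value is the fraction (0..1) of the
    pixel covered by figure interiors. Figures are given here as
    axis-aligned rectangles (x0, y0, x1, y1) for simplicity."""
    m = np.zeros((height, width))
    for (x0, y0, x1, y1) in rects:
        for row in range(height):
            for col in range(width):
                # pixel extent in pattern coordinates
                px0, py0 = col * u, row * u
                px1, py1 = px0 + u, py0 + u
                # overlap area of pixel and rectangle
                ox = max(0.0, min(x1, px1) - max(x0, px0))
                oy = max(0.0, min(y1, py1) - max(y0, py0))
                m[row, col] += (ox * oy) / (u * u)
    return m

# A 2 x 0.5 rectangle half-covering the two first-row pixels of a 2x2 grid:
m = area_density_map([(0.0, 0.0, 2.0, 0.5)], height=2, width=2)
# first-row pixels are half covered (0.5), second-row pixels uncovered (0.0)
```

Interior pixels receive 1.0 and exterior pixels 0.0, matching the binary-plus-boundary character described above.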
• Alternatively, a method can be taken in which the contour lines of the figures F1 to F5 are recognized based on the original graphic pattern 10, and the length of the contour line present within each pixel U is used as the pixel value of that pixel.
• In this case, the pixel value of each pixel is defined as the total length of the contour lines present within the pixel, in the superimposed state shown in FIG. 9.
  • An image in which pixel values are defined by such a method is generally called an “edge length density map”.
• As the unit of contour-line length, for example, the pixel dimension u may be taken as 1.
• FIG. 11 is a diagram showing the edge length density map M2 created based on the “original graphic pattern 10 + two-dimensional array of pixels” shown in FIG. 9. Again, each cell is a pixel defined in FIG. 9, and the number in each cell is the pixel value, defined as the total length (with the pixel dimension u taken as 1) of the contour lines present in that pixel.
  • a blank cell is a pixel having a pixel value of 0 (illustration of the pixel value of 0 is omitted).
• For example, a pixel with a pixel value of 1.0 is a pixel in which a contour line of length u is present in FIG. 9.
• Although the edge length density map M2 is basically a monochrome gradation image showing the density distribution of the contour lines, and thus differs considerably from the area density map M1 described above, it is a very useful image for extracting feature amounts for evaluation points E defined on contour lines, as in the present invention.
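The edge length density map can be approximated by walking along each contour segment in small steps and accumulating the step lengths into the pixels the samples fall in. The sketch below uses that sampling approximation (an exact implementation would clip each segment against the pixel grid); all names are illustrative.

```python
import numpy as np

def edge_length_density_map(contour, height, width, u=1.0, step=0.001):
    """Approximate edge length density map: walk along each closed-contour
    segment in small steps and accumulate the step length (in units of the
    pixel dimension u) into the pixel containing each sample point."""
    m = np.zeros((height, width))
    # pair each vertex with the next one, wrapping around to close the contour
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        seg_len = np.hypot(x1 - x0, y1 - y0)
        n_steps = max(1, int(seg_len / step))
        for i in range(n_steps):
            t = (i + 0.5) / n_steps          # midpoint sampling along segment
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            col = min(int(x / u), width - 1)
            row = min(int(y / u), height - 1)
            m[row, col] += (seg_len / n_steps) / u
    return m

# Unit-square contour on a 2x2 pixel grid; the accumulated total equals
# the perimeter (4 in units of u, up to sampling rounding).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
m = edge_length_density_map(square, height=2, width=2)
```

Pixels far from any contour stay at 0, giving the contour-density distribution described above.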
  • FIG. 12 is a plan view showing the original graphic pattern 10 including information on such a dose amount.
• The original figure pattern 10 with dose amounts shown in FIG. 12 includes the contour information of the figures F1 to F5, as does the original figure pattern 10 shown in FIG. 8, and additionally includes, for each of the figures F1 to F5, information defining a dose amount.
• In the illustrated example, a dose amount of 100% is defined for the figures F1 to F3, 50% for the figure F4, and 10% for the figure F5.
  • These dose amounts indicate the intensity of light or electron beam to be irradiated (including the case where the total energy amount is controlled by the number of exposures) in the exposure process of the lithography process.
• That is, when the internal areas of the figures F1 to F3 are exposed, the light or electron beam is irradiated at 100% intensity, whereas when the internal area of the figure F4 is exposed it is irradiated at 50% intensity, and when the internal area of the figure F5 is exposed it is irradiated at 10% intensity.
• By defining dose amounts in this way, the dimensions of the actual figure pattern 20 formed on the actual substrate S can be further finely adjusted.
• In this case, the pixel value of each pixel, in the superimposed state shown in FIG. 9, is defined as the sum, over the figures, of the product of the occupancy ratio (0 to 1) of each figure's internal area (hatched area) within the pixel and the dose amount of that figure.
  • An image in which pixel values are defined by such a method is generally called a “dose density map”. This dose density map also becomes a monochrome gradation image as a whole.
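The occupancy-times-dose definition above can be sketched as a small extension of the area density computation, again assuming axis-aligned rectangular figures for simplicity; the function name and tuple layout are illustrative.

```python
import numpy as np

def dose_density_map(figures, height, width, u=1.0):
    """Dose density map: each pixel value is the sum over figures of
    (occupancy of the figure's interior in the pixel) x (figure's dose).
    Figures are given as axis-aligned rectangles (x0, y0, x1, y1, dose)."""
    m = np.zeros((height, width))
    for (x0, y0, x1, y1, dose) in figures:
        for row in range(height):
            for col in range(width):
                px0, py0 = col * u, row * u
                # overlap of pixel and rectangle
                ox = max(0.0, min(x1, px0 + u) - max(x0, px0))
                oy = max(0.0, min(y1, py0 + u) - max(y0, py0))
                m[row, col] += dose * (ox * oy) / (u * u)
    return m

# One fully covered pixel at 100% dose and one at 50% dose:
m = dose_density_map([(0, 0, 1, 1, 1.0), (1, 0, 2, 1, 0.5)], height=1, width=2)
# -> pixel values 1.0 and 0.5
```

With all doses set to 1.0 this reduces to the area density map, which matches the relationship between the maps M1 and M3 described in the text.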
  • FIG. 13 is a diagram showing a dose density map M3 created based on the original figure pattern 10 with a dose amount shown in FIG.
  • each cell is each pixel defined in FIG. 9, and the numbers in the cell are pixel values defined for each pixel.
  • a blank cell is a pixel having a pixel value of 0 (illustration of the pixel value of 0 is omitted).
• The pixel values of the pixels where the figure F4 (to which a dose amount of 50% is given) and the figure F5 (to which a dose amount of 10% is given) are located are reduced by an amount corresponding to those dose amounts.
  • FIG. 7 shows an example in which the area density map M1 is the original image Q1.
• Of course, it is also possible to create the original image Q1 by methods other than the three described above.
  • the first preparation image Q1 is an original image that is first created based on the original graphic pattern 10, and is a reference image that is first used in an image pyramid creation process described later.
• The image pyramid creation unit 122 has the function of performing, based on the original image Q1 created in step S32 of FIG. 7 (for example, the area density map M1 shown in FIG. 10), a reduction process that reduces the number of pixels, thereby performing an image pyramid creation process that creates an image pyramid composed of a plurality of hierarchical images of different sizes. In the embodiment described here, this image pyramid creation process is executed by the procedure shown in steps S33 to S36 of the flowchart of FIG. 7.
• Here, the procedure of the image pyramid creation process will be described taking as a specific example the case where the original graphic pattern 10 shown in FIG. 8 is input in step S31 and the area density map M1 shown in FIG. 10 is created as the original image Q1 in step S32.
• First, in step S33, a filter process that creates the kth hierarchical image Pk by applying an image processing filter to the kth preparation image Qk is executed.
  • this filter processing is executed, for example, as a convolution operation using a Gaussian filter as an image processing filter.
  • FIG. 14 is a plan view showing a procedure for creating the k-th hierarchical image Pk by performing the filtering process using the Gaussian filter GF33 on the preparation image Qk.
• The kth preparation image Qk shown in FIG. 14 is actually substantially the same as the area density map M1 shown in FIG. 10.
• The area density map M1 of FIG. 10 is shown as an 8 × 8 pixel array for convenience, with pixel values of 0 omitted, whereas the preparation image Qk of FIG. 14 is shown as a 10 × 10 pixel array with pixel values of 0 included; in FIG. 14, pixels with a pixel value of 0 have simply been arranged around the 8 × 8 area density map M1 of FIG. 10 for the convenience of executing the filter process.
  • the parameter k has an initial value of 1, and the preparation image Qk shown in FIG. 14 is the first preparation image Q1.
  • the first preparation image Q1 is nothing but the original image created by the original image creation process in step S32.
• In the example of FIG. 14, a convolution operation using the Gaussian filter GF33 is executed.
• As illustrated, the Gaussian filter GF33 has a 3 × 3 pixel array; it is superimposed at each predetermined position of the kth preparation image Qk and a product-sum operation is performed, whereby the kth hierarchical image Pk (filtered image) is obtained.
• FIG. 15 is a plan view showing the kth hierarchical image Pk obtained by the filter process shown in FIG. 14.
• This kth hierarchical image Pk has a 10 × 10 pixel array, like the kth preparation image Qk, and the pixel value of each pixel is a value obtained by a product-sum operation using the Gaussian filter GF33.
• For example, the target pixel in FIG. 15 is given a pixel value of 0.375.
• This pixel value is obtained by superimposing the illustrated Gaussian filter GF33 on the 3 × 3 pixel array surrounded by the thick frame in FIG. 14 (the 9 pixels centered on the pixel in the fourth row, third column), taking the product of each filter coefficient and the pixel value superimposed at the same position, and summing the nine products.
• Specifically, the pixel value 0.375 of the target pixel is obtained as the product-sum value “(1/16 × 0) + (2/16 × 0.25) + (1/16 × 0.5) + (2/16 × 0) + (4/16 × 0.5) + (2/16 × 1.0) + (1/16 × 0) + (2/16 × 0.25) + (1/16 × 0.5)”.
  • Such filter processing by product-sum operation is generally known as convolution operation processing for an image, and thus detailed description thereof is omitted here.
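The product-sum operation above can be reproduced with a small zero-padded convolution routine. The worked 3 × 3 neighborhood from the text gives back the value 0.375; since the Gaussian kernel is symmetric, convolution and correlation coincide, so no kernel flip is needed.

```python
import numpy as np

# 3x3 Gaussian filter GF33 (coefficients sum to 1)
GF33 = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]]) / 16.0

def convolve_same(image, kernel):
    """'Same-size' convolution with zero padding, as used to create the
    kth hierarchical image Pk from the kth preparation image Qk."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            # product-sum of the kernel with the 3x3 neighborhood
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

# The 3x3 neighborhood from the text's worked example:
neighborhood = np.array([[0.0, 0.25, 0.5],
                         [0.0, 0.5,  1.0],
                         [0.0, 0.25, 0.5]])
center_value = float(np.sum(neighborhood * GF33))  # -> 0.375
```

Applying `convolve_same` to a full preparation image yields the corresponding hierarchical image in one call.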
• In the example described above, a convolution operation is performed using the Gaussian filter GF33 having a 3 × 3 pixel array shown in FIG. 16(a) as the image processing filter, but a convolution operation using a Laplacian filter LF33 having a 3 × 3 pixel array as shown in FIG. 16(b) may be performed instead.
  • filter processing using a Gaussian filter gives an effect of blurring the contour of an image
  • filter processing using a Laplacian filter gives an effect of enhancing the contour of an image.
• In either case, a kth hierarchical image Pk having characteristics slightly different from the kth preparation image Qk is obtained, which is effective for creating an image pyramid composed of a plurality of hierarchical images each having different characteristics.
• Of course, the image processing filter used for the filter process of step S33 is not limited to the Gaussian filter GF33 shown in FIG. 16(a) or the Laplacian filter LF33 shown in FIG. 16(b); various other image processing filters can be used.
• The size of the image processing filter is also not limited to a 3 × 3 pixel array; an image processing filter of arbitrary size can be used. For example, a Gaussian filter GF55 having a 5 × 5 pixel array as shown in FIG. 17(a), or a Laplacian filter LF55 having a 5 × 5 pixel array as shown in FIG. 17(b), may be used.
• Next, in step S34, it is determined whether or not the parameter k has reached a predetermined set value n. If k < n, the reduction process of step S35 is executed.
  • This reduction processing is processing for creating an image having a smaller number of pixels than the target image based on the predetermined target image.
• Specifically, the (k + 1)th preparation image Q(k + 1) is created by executing the reduction process on the kth hierarchical image Pk created by the filter process of step S33. The preparation image Q(k + 1) is therefore an image of smaller size than the hierarchical image Pk (an image with fewer vertical and horizontal pixels in its pixel array).
• FIG. 18 is a plan view showing the procedure for creating the (k + 1)th preparation image Q(k + 1) as a reduced image by performing an average pooling process on the kth hierarchical image Pk.
• Specifically, by performing the average pooling process on the hierarchical image Pk having the 4 × 4 pixel array shown in FIG. 18(a), a preparation image Q(k + 1) having the 2 × 2 pixel array shown in FIG. 18(b) is created as the reduced image.
• The average pooling process shown in FIG. 18 converts (reduces) each group of four pixels in a 2 × 2 arrangement into a single pixel whose pixel value is the average of the original four pixel values. For example, the four pixels in the 2 × 2 arrangement at the upper left of the hierarchical image Pk in FIG. 18(a) (the pixels within the thick frame) are converted into the single pixel indicated by the thick frame in the preparation image Q(k + 1) in FIG. 18(b).
• The pixel value 0.5 of this thick-framed pixel after conversion (reduction) is the average of the original four pixel values.
• FIG. 19 is a plan view showing the procedure for creating the (k + 1)th preparation image Q(k + 1) as a reduced image by performing a max pooling process (reduction process) on the kth hierarchical image Pk. Specifically, by applying the max pooling process to the hierarchical image Pk having the 4 × 4 pixel array shown in FIG. 19(a), a preparation image Q(k + 1) having the 2 × 2 pixel array shown in FIG. 19(b) is created as the reduced image.
• Like the average pooling process of FIG. 18, the max pooling process shown in FIG. 19 converts (reduces) each group of four pixels in a 2 × 2 arrangement into a single pixel. For example, the four pixels in the 2 × 2 arrangement at the upper left of the hierarchical image Pk in FIG. 19(a) (the pixels within the thick frame) are converted into the single pixel indicated by the thick frame in the preparation image Q(k + 1) in FIG. 19(b).
• The pixel value 1.0 of this thick-framed pixel after conversion (reduction) is the maximum of the original four pixel values.
• Each of the pooling processes shown in FIGS. 18 and 19 is a reduction process that converts four pixels in a 2 × 2 arrangement into a single pixel, but of course a reduction process that converts nine pixels in a 3 × 3 arrangement into a single pixel, or six pixels in a 3 × 2 arrangement into a single pixel, is also possible.
• In general, as the reduction process of step S35, the image pyramid creation unit 122 can create a reduced image by executing an average pooling process that replaces m mutually adjacent pixels with a single pixel whose value is the average of their pixel values, or a max pooling process that replaces m mutually adjacent pixels with a single pixel whose value is the maximum of their pixel values. Of course, other reduction processes can also be used; any appropriate reduction process may be executed in step S35.
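Both pooling reductions can be sketched compactly with array reshaping; the 2 × 2 block size and the sample pixel values below are illustrative.

```python
import numpy as np

def pool2x2(image, mode="average"):
    """2x2 pooling reduction: each 2x2 block of pixels becomes a single
    pixel whose value is the average (average pooling) or the maximum
    (max pooling) of the four original pixel values.
    Image height and width are assumed to be even."""
    h, w = image.shape
    # reshape so axes 1 and 3 index the rows/columns inside each 2x2 block
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    if mode == "average":
        return blocks.mean(axis=(1, 3))
    elif mode == "max":
        return blocks.max(axis=(1, 3))
    raise ValueError(mode)

# A 4x4 hierarchical image Pk reduced to a 2x2 preparation image Q(k+1):
Pk = np.array([[0.0, 1.0, 0.5, 0.5],
               [1.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0, 1.0],
               [0.0, 0.0, 1.0, 1.0]])
Q_avg = pool2x2(Pk, "average")  # upper-left block -> 0.5 (average of 0,1,1,0)
Q_max = pool2x2(Pk, "max")      # upper-left block -> 1.0 (maximum of 0,1,1,0)
```

Other block sizes (3 × 3, 3 × 2, etc.) follow the same reshape pattern with different factors.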
• When the reduction process of step S35 is completed, the parameter k is incremented by 1 in step S36, and the filter process of step S33 is executed again.
• That is, in step S33, the filter process is performed on the second preparation image Q2, creating the second hierarchical image P2.
• As n, an appropriate value may be set in advance as the number of layers of the image pyramid (that is, the total number of hierarchical images constituting the image pyramid).
• The larger the value of n, the larger the number n of feature quantities extracted for each evaluation point E, which enables a more accurate simulation but increases the computational burden.
• Moreover, each time the reduction process of step S35 is repeated, the image becomes smaller, so if n is set too large, the reduction process of step S35 eventually cannot be performed. In practice, therefore, n should be set appropriately in consideration of the size of the original image Q1 and the computational burden.
  • FIG. 20 is a plan view showing a procedure (steps S33 to S36 in FIG. 7) for creating an image pyramid PP composed of n hierarchical images P1 to Pn in the image pyramid creation unit 122.
• First, the first preparation image Q1 is the original image created by the original image creation unit 121 (in this example, the area density map M1 shown in FIG. 10, in which a pixel value is defined for each pixel).
• In step S33, the filter process is performed on this first preparation image Q1: a first hierarchical image P1 as shown at the upper right of FIG. 20 is created by a convolution operation using the Gaussian filter GF33 having a 3 × 3 pixel array. The size of the first hierarchical image P1 is the same as that of the first preparation image Q1.
  • step S35 a reduction process (for example, average pooling process) is performed on the first hierarchical image P1, and a second preparation image Q2 shown in the middle left of FIG. 20 is created.
  • the size of the second preparation image Q2 is smaller than the size of the first hierarchical image P1.
• Next, in step S36, the value of the parameter k is updated to 2, and the filter process of step S33 is executed again. That is, the second hierarchical image P2 shown at the middle right of FIG. 20 is created by a convolution operation using the Gaussian filter GF33 having a 3 × 3 pixel array.
  • the size of the second hierarchical image P2 is the same as the size of the second preparation image Q2.
• Subsequently, the reduction process of step S35 is executed again: a reduction process (for example, average pooling) is performed on the second hierarchical image P2, creating the third preparation image Q3 shown at the lower left of FIG. 20. The size of the third preparation image Q3 is smaller than that of the second hierarchical image P2.
• Then, in step S36, the value of the parameter k is updated to 3, and the filter process of step S33 is executed again: a third hierarchical image P3 as shown at the lower right of FIG. 20 is created by a convolution operation using the Gaussian filter GF33 having a 3 × 3 pixel array. The size of the third hierarchical image P3 is the same as that of the third preparation image Q3.
• By repeating this procedure, an image pyramid PP is finally constituted by n hierarchical images of different sizes, from the first hierarchical image P1 to the nth hierarchical image Pn.
• In short, the image pyramid creation unit 122 has the function of performing a filter process, using a predetermined image processing filter, on the original image Q1 or on a reduced image Q(k + 1), and a reduction process that reduces the number of pixels, thereby creating an image pyramid PP composed of a plurality of hierarchical images P1 to Pn.
• That is, the image pyramid creation unit 122 uses the original image created by the original image creation unit 121 as the first preparation image Q1, takes the image obtained by the filter process on the kth preparation image Qk (k being a natural number) as the kth hierarchical image Pk, and takes the image obtained by the reduction process on the kth hierarchical image Pk as the (k + 1)th preparation image Q(k + 1). By repeating the filter and reduction processes until the nth hierarchical image Pn is obtained, it creates an image pyramid PP composed of the n hierarchical images from the first hierarchical image P1 to the nth hierarchical image Pn.
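The alternating filter/reduction loop of steps S33 to S36 can be sketched as follows, assuming a 3 × 3 Gaussian filter and 2 × 2 average pooling (image sides divisible by the pooling factor at every level); the function names are illustrative.

```python
import numpy as np

GF33 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def gaussian_filter(image):
    """Same-size convolution with the 3x3 Gaussian filter, zero padding."""
    padded = np.pad(image, 1)
    h, w = image.shape
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * GF33)
    return out

def avg_pool2x2(image):
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_image_pyramid(original, n):
    """Steps S33-S36: alternate filter and reduction processes.
    Qk -> (filter, step S33) -> Pk -> (reduction, step S35) -> Q(k+1)."""
    layers = []
    Qk = original  # first preparation image Q1 = original image
    for k in range(n):
        Pk = gaussian_filter(Qk)   # kth hierarchical image
        layers.append(Pk)
        if k < n - 1:
            Qk = avg_pool2x2(Pk)   # (k+1)th preparation image
    return layers

original = np.zeros((16, 16))
original[4:12, 4:12] = 1.0          # a simple rectangular figure
pyramid = build_image_pyramid(original, n=3)
# layer sizes: 16x16, 8x8, 4x4
```

The halving at each level also shows why n is bounded by the original image size, as noted above.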
• The reduction process of step S35 is an essential process in the procedure shown in the flowchart of FIG. 7, but the filter process of step S33 is not necessarily required.
• However, by performing the filter process, the pixel value of each pixel can be made to reflect the influence of the pixel values of the surrounding pixels.
• That is, by adding the filter process, a plurality of hierarchical images rich in variation can be created, feature quantities containing more diverse information can be extracted, and a more accurate simulation consequently becomes possible. In practice, therefore, it is preferable to perform the reduction process and the filter process alternately, as in the procedure shown in the flowchart of FIG. 7.
• Finally, the feature amount calculation unit 123 performs the process of calculating the feature amounts x1 to xn for each evaluation point E based on the hierarchical images P1 to Pn constituting the image pyramid PP.
• Hereinafter, the procedure for calculating the feature amounts x1 to xn will be described specifically.
• FIG. 21 is a plan view showing the procedure by which the feature amount calculation unit 123 calculates the feature amounts x1 to xn for a specific evaluation point E from the hierarchical images P1 to Pn.
• Specifically, the left side of FIGS. 21(a) to (c) shows the rectangle of the original graphic pattern 10 (indicated by a thick frame) superimposed on the pixel arrays of the first hierarchical image P1, the second hierarchical image P2, and the third hierarchical image P3, respectively, and the right side shows the principle of calculating the feature amounts x1, x2, and x3 for the specific evaluation point E based on these hierarchical images.
• Each hierarchical image shown in FIG. 21 is part of an image constituting one layer of the image pyramid PP. In practice, n hierarchical images P1 to Pn are prepared and n feature amounts x1 to xn are extracted, but FIG. 21 shows, for convenience of explanation, the state in which three feature amounts x1, x2, and x3 are extracted from the three hierarchical images P1, P2, and P3.
• The first hierarchical image P1 is an image obtained by performing the filter process on the original image Q1 (the first preparation image), and in the illustrated example has a 16 × 16 pixel array.
• The second hierarchical image P2 is an image obtained by performing the reduction and filter processes on the first hierarchical image P1, and has an 8 × 8 pixel array in the illustrated example.
• The third hierarchical image P3 is an image obtained by performing the reduction and filter processes on the second hierarchical image P2, and has a 4 × 4 pixel array in the illustrated example.
• Although the hierarchical images P1, P2, and P3 are each drawn so that their outlines are squares of the same size, their pixel arrays are 16 × 16, 8 × 8, and 4 × 4, respectively, so the image size is gradually reduced.
• Since the outer frame of each hierarchical image P1, P2, P3 is drawn as a square of the same size, the pixels appear progressively larger. In other words, the resolution of the image decreases in the order P1, P2, P3, and the image gradually becomes coarser.
• In each hierarchical image of FIG. 21, the rectangle constituting the original figure pattern 10 is drawn with a thick frame. Since each of the hierarchical images P1, P2, and P3 is a raster image made up of a collection of pixels, the rectangular contour drawn with the thick frame is not actually contained as contour-line information itself, but as the pixel values of individual pixels. In FIG. 21, however, the position of the rectangle on each hierarchical image is indicated by a thick line for convenience of explanation.
Next, the process of extracting the feature amounts x1 to xn for a specific evaluation point E defined on the rectangular outline will be described. In each hierarchical image, the rectangle indicated by the thick frame is arranged at the same relative position, and the specific evaluation point E is likewise placed at the same relative position. The feature amount for one evaluation point E is calculated based on the pixel values of its neighboring pixels. First, a feature amount x1 for the evaluation point E is extracted based on the first hierarchical image P1: as shown on the right side of FIG. 21(a), the feature amount calculation unit 123 extracts, from the pixels constituting the first hierarchical image P1, the four pixels located in the vicinity of the evaluation point E (the hatched pixels in the figure) as target pixels, and calculates the feature amount x1 by a calculation using the pixel values of these four target pixels. Similarly, as shown on the right side of FIG. 21(b), attention is paid to the four pixels located in the vicinity of the evaluation point E among the pixels constituting the second hierarchical image P2, and the feature amount x2 is calculated using the pixel values of these four target pixels. Further, as shown on the right side of FIG. 21(c), the four pixels located in the vicinity of the evaluation point E among the pixels constituting the third hierarchical image P3 are taken as target pixels, and the feature amount x3 is calculated using their pixel values.
In this way, n sets of feature amounts x1 to xn can be extracted for a specific evaluation point E. These n sets of feature amounts are all parameters indicating the surrounding features of the same evaluation point E on the original figure pattern 10, but the ranges of the original figure pattern 10 that affect them differ from one another. The feature amount x1 extracted from the first hierarchical image P1 indicates the feature in the narrow hatched area in the diagram on the right side of FIG. 21(a); the feature amount x2 extracted from the second hierarchical image P2 indicates the feature in the wider hatched area of FIG. 21(b); and the feature amount x3 extracted from the third hierarchical image P3 indicates the feature in the still wider hatched area on the right side of FIG. 21(c). The value of the process bias y at a given evaluation point E is determined by the merging of phenomena with various characteristic scales, such as forward scattering and back scattering. Therefore, if feature amounts ranging from x1, relating to a very narrow range, to xn, relating to a much wider range, are extracted for the same evaluation point E, an accurate simulation can be performed that takes into account phenomena whose ranges of influence differ from one another. FIG. 21 shows the process of extracting n sets of feature amounts x1 to xn for a single evaluation point E; in practice, n sets of feature amounts are extracted by the same procedure for each of the large number of evaluation points defined on the original figure pattern 10.
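The multi-layer extraction described above can be sketched in code. This is a minimal illustration, not the embodiment's actual processing: the names `build_pyramid` and `features_at` are invented for this sketch, a 2 × 2 box average stands in for the filter-and-reduce processing, and each feature is taken as the single pixel value nearest the evaluation point rather than a weighted average of neighboring pixels.

```python
import numpy as np

def build_pyramid(image, n_levels):
    """Build a minimal image pyramid: each level is a 2x-reduced copy of
    the previous one (a 2x2 box average stands in for the embodiment's
    filter-and-reduce processing)."""
    levels = [np.asarray(image, float)]
    for _ in range(n_levels - 1):
        h, w = levels[-1].shape
        levels.append(levels[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

def features_at(levels, ex, ey):
    """One feature per level for an evaluation point E given in normalized
    coordinates (0 <= ex, ey < 1); here simply the nearest pixel value."""
    return [lvl[int(ey * lvl.shape[0]), int(ex * lvl.shape[1])]
            for lvl in levels]

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0            # a rectangle standing in for pattern 10
pyr = build_pyramid(img, 3)      # 16x16, 8x8, 4x4 hierarchical images
x = features_at(pyr, 0.25, 0.5)  # evaluation point on the left edge
print(len(x))  # 3 -- one feature per hierarchical image
```

Each entry of `x` summarizes a progressively wider neighborhood of the same point, mirroring how x1 relates to a narrow range and xn to a wide one.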
As the calculation for obtaining a feature amount, a simple method can be adopted in which the simple average of the pixel values of the target pixels is taken as the feature amount x. FIG. 22 shows a more specific calculation method used in the feature amount calculation procedure of FIG. 21, namely a method that uses a weighted average value as the feature amount.
The target pixels A, B, C, and D can be determined by selecting, on the hierarchical image P to be processed, a total of four pixels in order of proximity to the evaluation point E. A weighted average of the pixel values of the four target pixels A, B, C, and D, with weights according to the distance between the evaluation point E and each pixel, may then be used as the feature amount x. In FIG. 22, a mark × is displayed at the center point of each target pixel A, B, C, D, and broken lines connecting these marks are drawn. The pixel size of each target pixel is u both vertically and horizontally, and each broken line is a dividing line that divides a pixel of size u in half. As the distance between the evaluation point E and each target pixel, the horizontal distance and the vertical distance between the evaluation point E and the center point of each target pixel are adopted. Specifically, in the example shown in FIG. 22, the target pixel A has a horizontal distance a and a vertical distance c, the target pixel B has a horizontal distance b and a vertical distance c, the target pixel C has a horizontal distance a and a vertical distance d, and the target pixel D has a horizontal distance b and a vertical distance d.
Of course, the method of calculating the feature amount x from the pixel values of the four target pixels A, B, C, and D is not limited to the method illustrated in FIG. 22; various other calculation methods can be employed as long as a feature amount x reflecting the pixel values can be calculated. The number of target pixels used for calculating the feature amount x is also not limited to four. For example, the nine pixels constituting a 3 × 3 pixel array located in the vicinity of the evaluation point E may be selected as target pixels, and a weighted average of the pixel values of these nine target pixels, weighted according to the distance from the evaluation point E, may be used as the feature amount x for the evaluation point E.
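As a concrete illustration of the weighted-average calculation of FIG. 22, the following sketch uses a bilinear weighting in which each target pixel's weight is the product of the opposite horizontal and vertical distances, normalized by u²; the patent text does not spell out the exact weight formula, so this particular scheme is an assumption, and the function name `weighted_feature` is invented for the sketch. The evaluation point is assumed to lie in the interior of the image so that all four target pixels exist.

```python
import numpy as np

def weighted_feature(image, ex, ey, u=1.0):
    """Feature x for evaluation point E = (ex, ey): a distance-weighted
    average of the four target pixels A, B, C, D nearest the point.
    ASSUMED bilinear weights: a pixel whose centre is closer to E in
    both directions contributes more."""
    # centre of pixel (row i, col j) is at ((j + 0.5) * u, (i + 0.5) * u)
    j0 = int(ex / u - 0.5)     # column of the left pair (A above C)
    i0 = int(ey / u - 0.5)     # row of the upper pair (A beside B)
    a = ex - (j0 + 0.5) * u    # horizontal distance to the left centres
    b = u - a                  # horizontal distance to the right centres
    c = ey - (i0 + 0.5) * u    # vertical distance to the upper centres
    d = u - c                  # vertical distance to the lower centres
    A, B = image[i0, j0], image[i0, j0 + 1]
    C, D = image[i0 + 1, j0], image[i0 + 1, j0 + 1]
    # nearer pixels get larger weights; weights sum to u*u
    return (A * b * d + B * a * d + C * b * c + D * a * c) / (u * u)

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
print(weighted_feature(img, 1.0, 1.0))  # 2.5 (E equidistant from all four centres)
```

When E coincides with the centre of one target pixel, the weights of the other three vanish and the feature reduces to that pixel's value, which is the behavior one would expect from the distance-based weighting of FIG. 22.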
In general, the feature amount calculation unit 123 extracts, from the pixels constituting a specific hierarchical image P, a total of j pixels as target pixels in order of proximity to the specific evaluation point E, performs an arithmetic operation for obtaining a weighted average of the pixel values of the extracted j target pixels with weights according to the distance between the specific evaluation point E and each target pixel, and can use the obtained weighted average value as the feature amount x.
The modification described here is based on the processing of steps S33 to S36 shown in the flowchart of FIG. 7, to which the following processing is added: when the filtering process of step S33 is completed, a difference calculation "Pk − Qk" is performed, subtracting the k-th preparation image Qk from the k-th filtered image Pk obtained by the filtering process (the image called the k-th hierarchical image Pk in §2.2) to obtain the k-th difference image Dk. In other words, the processing of steps S33 to S36 of FIG. 7 is executed as it is, and in addition the difference calculation "Pk − Qk", which subtracts the k-th preparation image Qk from the k-th filtered image Pk, is performed. Here, the difference calculation "Pk − Qk" defines the pixels arranged at the same position on the pixel arrays of the k-th filtered image Pk and the k-th preparation image Qk as corresponding pixels, subtracts the pixel value of the corresponding pixel on the image Qk from the pixel value of each pixel on the image Pk to obtain a difference, and forms a difference image Dk composed of a new pixel aggregate whose pixel values are the obtained differences.
FIG. 23 is a plan view showing the procedure for creating an image pyramid PD composed of n kinds of difference images D1 to Dn by this difference calculation "Pk − Qk". The first hierarchical image D1 shown on the upper right is the difference image obtained by the calculation "P1 − Q1"; specifically, the first preparation image Q1 shown on the upper left is subtracted from the first filtered image P1 (called the first hierarchical image P1 in FIG. 20), that is, the pixel values of the pixels at corresponding positions are subtracted. Similarly, the second hierarchical image D2 shown on the right of the middle row of FIG. 23 is the difference image obtained by the calculation "P2 − Q2": the second preparation image Q2 shown on the left of the middle row is subtracted from the second filtered image P2 (called the second hierarchical image P2 in FIG. 20). Further, the third hierarchical image D3 shown on the lower right of FIG. 23 is the difference image obtained by the calculation "P3 − Q3": the third preparation image Q3 shown on the lower left is subtracted from the third filtered image P3 (called the third hierarchical image P3 in FIG. 20). Thereafter, the same difference calculation is repeated, and finally the difference image obtained by the calculation "Pn − Qn" becomes the n-th hierarchical image Dn. Thus, whereas in §2.2 the image pyramid PP is formed by the first hierarchical image (first filtered image) P1 to the n-th hierarchical image (n-th filtered image) Pn, in this modification the image pyramid PD is formed by the first hierarchical image (first difference image) D1 to the n-th hierarchical image (n-th difference image) Dn.
In short, a difference calculation procedure may be added to the procedure of the example described in §2.2. That is, the image pyramid creation unit 122 uses the original image created by the original image creation unit 121 as the first preparation image Q1, obtains the difference image Dk between the k-th filtered image Pk obtained by filtering the k-th preparation image Qk (where k is a natural number) and the k-th preparation image Qk, sets this difference image Dk as the k-th hierarchical image Dk, and sets the image obtained by the reduction process on the k-th filtered image Pk as the (k+1)-th preparation image Q(k+1). By alternately executing the filtering process and the reduction process until the n-th hierarchical image Dn is obtained, an image pyramid PD composed of a plurality of n hierarchical images, from the first hierarchical image D1 to the n-th hierarchical image Dn, may be created.
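The filter–difference–reduce loop just described can be sketched as follows; `box_filter` is a stand-in for the Gaussian filter of the embodiment, and the function names are invented for this illustration.

```python
import numpy as np

def box_filter(img):
    """Stand-in smoothing filter (the embodiment uses a Gaussian filter):
    3x3 box average with edge replication at the borders."""
    padded = np.pad(img, 1, mode='edge')
    return sum(padded[di:di + img.shape[0], dj:dj + img.shape[1]]
               for di in range(3) for dj in range(3)) / 9.0

def reduce2(img):
    """2x reduction by averaging non-overlapping 2x2 blocks."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def difference_pyramid(original, n):
    """Create the pyramid PD of difference images D1..Dn:
    Q1 is the original image; Pk is the filtered Qk; Dk = Pk - Qk;
    Q(k+1) is the reduction of Pk."""
    Q = np.asarray(original, float)  # first preparation image Q1
    D = []
    for _ in range(n):
        P = box_filter(Q)            # k-th filtered image Pk
        D.append(P - Q)              # k-th hierarchical image Dk = Pk - Qk
        Q = reduce2(P)               # (k+1)-th preparation image
    return D

img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
D = difference_pyramid(img, 3)
print([d.shape for d in D])  # [(16, 16), (8, 8), (4, 4)]
```

Note that Dk is near zero in flat regions (filtering changes nothing there) and nonzero along the pattern contour, which is exactly why the difference images capture edge-related features of the evaluation point.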
In this modification, the k-th hierarchical image Dk constituting the k-th layer of the image pyramid PD is the difference image between the image after the filtering process (filtered image Pk) and the image before the filtering process (preparation image Qk), and the pixel value of each pixel corresponds to the difference between the pixel values before and after the filtering process. That is, whereas the k-th hierarchical image Pk in the embodiment described in §2.2 shows the image after the filtering process, the k-th hierarchical image Dk in the modification described here shows the change caused by the filtering process. The image pyramid PP created in the embodiment of §2.2 and the image pyramid PD created in this modification thus differ significantly in the meaning of their constituent hierarchical images, but they have in common that each hierarchical image shows some characteristic concerning the evaluation point E. It is therefore possible to extract feature amounts from each of the hierarchical images D1 to Dn created in this modification, and the feature amount calculation unit 123 in this modification performs the process of extracting the feature amounts x1 to xn from the hierarchical images D1 to Dn.
The number of image pyramids to be used is not necessarily one: a plurality of image pyramids can be created by a plurality of algorithms, and feature amounts can be extracted from each of them. In other words, the image pyramid creation unit 122 may be given the function of performing image pyramid creation processing on one original image (the first preparation image Q1) based on a plurality of different algorithms, thereby creating a plurality of image pyramids. In this case, the feature amount calculation unit 123 may perform, for each hierarchical image constituting each of the plurality of image pyramids, the process of calculating a feature amount based on the pixel values of the pixels corresponding to the evaluation point position (the pixels located around the evaluation point).
For example, when the image pyramid creation unit 122 performs the image pyramid creation process, the algorithm of the embodiment described in §2.2 can be adopted as the main algorithm to create, as shown in FIG. 20, a main image pyramid PP composed of the n main hierarchical images P1 to Pn (filtered images), and the modified algorithm described in §2.4(1), which uses the difference images Dk as hierarchical images, can be adopted as a sub-algorithm to create a sub image pyramid PD composed of the n sub hierarchical images D1 to Dn (difference images). If the image pyramid creation unit 122 performs image pyramid creation processing using these two algorithms, two types of image pyramids, the main image pyramid PP and the sub image pyramid PD, can be created. In general, the main image pyramid PP created by the filtering process using the Gaussian filter illustrated in FIG. 14 can be called a Gaussian pyramid, and the sub image pyramid PD configured using the difference images can be called a Laplacian pyramid. Since the Gaussian pyramid and the Laplacian pyramid differ greatly from each other, adopting them as the main image pyramid PP and the sub image pyramid PD and extracting feature amounts from both makes it possible to extract feature amounts with greater diversity. If the feature amount calculation unit 123 performs, for each of the main hierarchical images P1 to Pn constituting the main image pyramid PP and the sub hierarchical images D1 to Dn constituting the sub image pyramid PD, the process of calculating a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E, the feature amounts xp1 to xpn calculated from the main hierarchical images and the feature amounts xd1 to xdn calculated from the sub hierarchical images are obtained. That is, a total of 2n feature amounts are extracted for one evaluation point E. In this case, the feature amounts for one evaluation point E are given to the estimation calculation unit 132 as a 2n-dimensional vector, so a more accurate estimation calculation can be performed.
To summarize, the image pyramid creation unit 122 sets the original image created by the original image creation unit 121 as the first preparation image Q1, sets the image obtained by the filtering process on the k-th preparation image Qk (where k is a natural number) as the k-th main hierarchical image Pk, and sets the image obtained by the reduction process on the k-th main hierarchical image Pk as the (k+1)-th preparation image Q(k+1), thereby creating a main image pyramid composed of a plurality of n hierarchical images, from the first main hierarchical image P1 to the n-th main hierarchical image Pn. In addition, the difference image Dk between the k-th main hierarchical image Pk and the k-th preparation image Qk is obtained and set as the k-th sub hierarchical image Dk, thereby creating a sub image pyramid composed of a plurality of n hierarchical images, from the first sub hierarchical image D1 to the n-th sub hierarchical image Dn. The feature amount calculation unit 123 may then calculate, for each hierarchical image constituting the main image pyramid PP and the sub image pyramid PD, a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E. In this way, a 2n-dimensional vector can be extracted as the feature amount for one evaluation point E, and a more accurate estimation calculation becomes possible.
As described above, the original image creation unit 121 performs the process of creating an original image based on the given original figure pattern 10. As described in §2.1, various forms of images, such as the area density map M1 (FIG. 10), the edge length density map M2 (FIG. 11), and the dose density map M3 (FIG. 13), can be employed as the original image created here. In other words, the original image creation unit 121 can employ various creation algorithms when creating an original image based on the original figure pattern 10, and original images with different contents are created depending on which algorithm is employed.
For example, the area density map M1 shown in FIG. 10, the edge length density map M2 shown in FIG. 11, and the dose density map M3 shown in FIG. 13 are all images created based on the same original figure pattern 10, but the pixel values of their individual pixels differ from one another, making them different images. It is also possible to prepare a plurality of density maps having different resolutions (that is, different pixel sizes and different map sizes, the map size being the number of pixels arranged vertically and horizontally) and to create a plurality of image pyramids using each of these density maps as an original image. Ideally, a density map with a small pixel size and a large map size (a high-resolution density map) would be used as the original image, but since the memory of a computer is limited, in practice a plurality of density maps having different pixel sizes and map sizes are used as original images.
That is, the original image creation unit 121 may be given the function of performing original image creation processing based on a plurality of different algorithms to create a plurality of types of original images, and the image pyramid creation unit 122 may be given the function of creating separate image pyramids based on these plural types of original images, so that a plurality of image pyramids can be created. If the feature amount calculation unit 123 is further given the function of calculating, for each hierarchical image constituting each of the plurality of image pyramids, a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E, a feature amount composed of a higher-dimensional vector can be extracted, and a more accurate estimation calculation can be performed.
For example, if the original image creation unit 121 creates a first original image composed of the area density map M1 shown in FIG. 10, a second original image composed of the edge length density map M2 shown in FIG. 11, and a third original image composed of the dose density map M3 shown in FIG. 13, the image pyramid creation unit 122 can create three sets of independent image pyramids based on these three original images, all of them image pyramids created based on the same original figure pattern 10. The feature amount calculation unit 123 can then calculate, for each hierarchical image constituting each of the three image pyramids, a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E. If n feature amounts x1 to xn are extracted from one image pyramid, a total of 3n feature amounts can be extracted for the same evaluation point E. That is, a 3n-dimensional vector can be given as the feature amount for one evaluation point E, so a more accurate estimation calculation can be performed.
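A sketch of how feature amounts from several pyramids are concatenated into one vector. The three "original images" below are crude stand-ins for the area density, edge length density, and dose density maps (the real maps are computed from the pattern geometry as described in §2.1), so only the concatenation mechanics should be taken literally; all names are invented for the illustration.

```python
import numpy as np

def pyramid(img, n):
    """Minimal pyramid: repeated 2x2 block averaging, standing in for the
    filter-and-reduce processing of the embodiment."""
    levels = [np.asarray(img, float)]
    for _ in range(n - 1):
        h, w = levels[-1].shape
        levels.append(levels[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

def feature_vector(pyramids, ex, ey):
    """Concatenate one feature per level per pyramid into a single vector
    for an evaluation point (ex, ey) in normalized coordinates."""
    feats = []
    for levels in pyramids:
        for lvl in levels:
            feats.append(lvl[int(ey * lvl.shape[0]), int(ex * lvl.shape[1])])
    return np.array(feats)

rect = np.zeros((16, 16)); rect[4:12, 4:12] = 1.0
# three hypothetical original images derived from the same pattern
originals = [rect, np.gradient(rect)[0] ** 2, 0.5 * rect]
pyrs = [pyramid(o, 4) for o in originals]   # n = 4 levels per pyramid
v = feature_vector(pyrs, 0.25, 0.5)
print(v.shape)  # (12,) -- a 3n-dimensional feature vector
```

With V pyramids of n levels each, the same code yields the V × n-dimensional input vector that is handed to the estimation calculation unit.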
Of course, it is also possible to combine the modification of §2.4(2) described above with the modification described here. In that case, the image pyramid creation unit 122 employs two different types of algorithms when creating the image pyramids, so that both a main image pyramid PP and a sub image pyramid PD can be created. If the two image pyramids PP and PD are created based on each of the above three original images, a total of six image pyramids based on the same original figure pattern 10 are created, and a 6n-dimensional vector can therefore be given as the feature amount for one evaluation point E.
The bias estimation unit 130 includes a feature amount input unit 131 and an estimation calculation unit 132, and has the function of executing the process bias estimation process of step S4 in the flowchart of FIG. 4. The feature amounts x1 to xn (n-dimensional vectors) are input to the feature amount input unit 131, an estimation calculation is executed by the estimation calculation unit 132, and an estimated value y of the process bias for each evaluation point E is determined. In this embodiment, a neural network is used as the estimation calculation unit 132; the detailed configuration and operation of this neural network are described below.
Neural networks have attracted attention as a technology forming the basis of artificial intelligence, and are used in various fields including image processing. A neural network is a computational structure that mimics the structure of a biological brain, composed of neurons and the edges that connect them.
FIG. 24 is a block diagram showing an embodiment using a neural network as the estimation calculation unit 132. As illustrated, an input layer, intermediate layers (hidden layers), and an output layer are defined in the neural network; predetermined information processing is performed in the intermediate layers on the information given to the input layer, and the result is output to the output layer. Here, the feature amounts x1 to xn for one evaluation point E are given to the input layer as an n-dimensional vector, and the estimated value y of the process bias for that evaluation point E is output from the output layer. As described in §1, the estimated value y of the process bias indicates the amount of deviation, in the normal direction of the contour line, of the evaluation point E located on the contour line of a predetermined figure. Thus, the estimation calculation unit 132 has a neural network whose input layer receives the feature amounts x1 to xn input by the feature amount input unit 131 and whose output layer produces the process bias estimate y. The intermediate layers of this neural network consist of N hidden layers: a first hidden layer, a second hidden layer, ..., an N-th hidden layer. These hidden layers contain a large number of neurons (nodes), and edges connecting these neurons are defined. The feature amounts x1 to xn given to the input layer are transmitted as signals to each neuron via the edges, and finally a signal corresponding to the estimated value y of the process bias is output from the output layer. A signal in the neural network is transmitted from the neurons of one hidden layer to the neurons of the next hidden layer through computations performed via the edges, and these computations use the learning information L (specifically, the parameters W and b described later) obtained in the learning stage.
FIG. 25 is a diagram showing the specific calculation process executed by the neural network of FIG. 24. The portions drawn with bold lines in the figure show the first hidden layer, the second hidden layer, ..., the N-th hidden layer; each circle in a hidden layer is a neuron (node), and the lines connecting the circles are edges. As described above, the feature amounts x1 to xn for one evaluation point E are given to the input layer as an n-dimensional vector, and the estimated value y of the process bias for that evaluation point E is output from the output layer as a scalar value (a dimensional value indicating the amount of deviation in the normal direction of the contour line).
In the illustrated example, the first hidden layer is an M(1)-dimensional layer composed of a total of M(1) neurons h(1,1) to h(1,M(1)); the second hidden layer is an M(2)-dimensional layer composed of a total of M(2) neurons h(2,1) to h(2,M(2)); and the N-th hidden layer is an M(N)-dimensional layer composed of a total of M(N) neurons h(N,1) to h(N,M(N)). If the calculated values of the signals transmitted to the neurons h(1,1) to h(1,M(1)) of the first hidden layer are denoted, using the same symbols, as the calculated values h(1,1) to h(1,M(1)), then these values are given by the matrix equation shown in the upper part of FIG. 26.
As the function f(·) on the right side of this equation, an activation function such as the sigmoid function shown in FIG. 27(a), the rectified linear unit (ReLU) shown in FIG. 27(b), or the Leaky ReLU shown in FIG. 27(c) can be used.
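The three activation functions of FIG. 27 have simple closed forms; the slope parameter for the negative side of Leaky ReLU (here 0.01) is a conventional choice, not a value specified in the text.

```python
import numpy as np

def sigmoid(z):
    """FIG. 27(a): smooth squashing of any input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """FIG. 27(b): passes positive inputs, clips negatives to zero."""
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    """FIG. 27(c): like ReLU but with a small slope alpha for z < 0,
    so negative inputs still carry a gradient."""
    return np.where(z > 0, z, alpha * z)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))         # [0. 0. 3.]
print(leaky_relu(z))   # small negative slope preserved at z = -2
```

Any of these can serve as the f(·) applied element-wise in the hidden-layer equations; ReLU-family functions are the common default in current practice.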
The argument of the function f(·) is, as shown in the middle part of FIG. 26, the value obtained by adding the matrix [b] to the product of the matrix [W] and the matrix [x1 to xn] (the feature amounts given to the input layer as an n-dimensional vector). The contents of the matrix [W] and the matrix [b] are as shown in the lower part of FIG. 26, and each component of these matrices (the weight parameters W(u,v) and the bias parameters b(u,v)) is the learning information L obtained in the learning stage described later. Given the values of the individual components constituting the matrices [W] and [b] as learning information L, the calculated values h(1,1) to h(1,M(1)) of the first hidden layer can be computed, based on the arithmetic expression shown in FIG. 26, from the feature amounts x1 to xn given to the input layer.
FIG. 28 is a diagram showing the arithmetic expressions for obtaining each value of the second to N-th hidden layers in the diagram of FIG. 25. Specifically, if the calculated values of the signals transmitted to the neurons h(i+1,1) to h(i+1,M(i+1)) of the (i+1)-th hidden layer (1 ≤ i < N) are denoted, using the same symbols, as the calculated values h(i+1,1) to h(i+1,M(i+1)), then these values are given by the matrix equation shown in the upper part of FIG. 28. As the function f(·) on the right side of this equation, each function shown in FIG. 27 can be used, as described above. The argument of the function f(·) is, as shown in the middle part of FIG. 28, the value obtained by adding the matrix [b] to the product of the matrix [W] and the matrix [h(i,1) to h(i,M(i))] (the calculated values of the neurons of the preceding i-th hidden layer). The contents of the matrices [W] and [b] are as shown in the lower part of FIG. 28, and their individual components (the parameters W(u,v) and b(u,v)) are the learning information L obtained in the learning stage described later. Given these components as learning information L, the calculated values h(i+1,1) to h(i+1,M(i+1)) of the (i+1)-th hidden layer can be computed, based on the arithmetic expression shown in FIG. 28, from the calculated values h(i,1) to h(i,M(i)) obtained in the i-th hidden layer. Therefore, each value of the second to N-th hidden layers in the diagram of FIG. 25 can be obtained sequentially based on the arithmetic expression shown in FIG. 28.
FIG. 29 is a diagram showing the arithmetic expression for obtaining the output layer value y in the diagram of FIG. 25. The output value y (the estimated value of the process bias for the evaluation point E: a scalar value) is given by the matrix equation shown in the upper part of FIG. 29. That is, the output value y is obtained by adding the scalar value b(N+1) to the product of the matrix [W] and the matrix [h(N,1) to h(N,M(N))] (the calculated values of the N-th hidden layer neurons h(N,1) to h(N,M(N))). The contents of the matrix [W] are as shown in the lower part of FIG. 29, and its individual components (the parameters W(u,v)) and the scalar value b(N+1) are likewise the learning information L obtained in the learning stage described later.
In summary, each value of the first hidden layer in the diagram of FIG. 25 can be obtained by applying the parameters W(1,v) and b(1,v), prepared in advance as learning information L, to the feature amounts x1 to xn given to the input layer; each value of the second hidden layer can be obtained by applying the parameters W(2,v) and b(2,v), prepared in advance as learning information L, to the values of the first hidden layer; and so on, each value of the N-th hidden layer being obtained by applying the parameters prepared in advance as learning information L to the values of the (N−1)-th hidden layer. Finally, the value y of the output layer can be obtained by applying the parameters W(N+1,v) and b(N+1), prepared in advance as learning information L, to the values of the N-th hidden layer. The specific arithmetic expressions are as shown in FIGS. 26, 28, and 29.
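Putting FIGS. 26, 28, and 29 together, the whole estimation reduces to a chain of affine maps and activations. The sketch below uses ReLU and random stand-in parameters in place of the learned information L; the layer sizes and all names are invented for the illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, layers, w_out, b_out):
    """Forward pass of the estimation network: each hidden layer computes
    h = f([W] @ h_prev + [b]) (FIGS. 26 and 28); the output layer adds the
    scalar b(N+1) to the product of its weight row and the last hidden
    layer (FIG. 29), giving the scalar process-bias estimate y."""
    h = np.asarray(x, float)
    for W, b in layers:               # (W, b) pairs play the role of L
        h = relu(W @ h + b)
    return float(w_out @ h) + b_out   # output layer: affine, no activation

rng = np.random.default_rng(0)
n, M1, M2 = 6, 8, 8                   # input dimension, two hidden layers
layers = [(0.1 * rng.standard_normal((M1, n)), np.zeros(M1)),
          (0.1 * rng.standard_normal((M2, M1)), np.zeros(M2))]
w_out, b_out = 0.1 * rng.standard_normal(M2), 0.0
x = rng.standard_normal(n)            # feature amounts x1..xn for one point E
y = forward(x, layers, w_out, b_out)
print(isinstance(y, float))  # True -- a single scalar bias estimate
```

In the actual device the parameters are of course not random: they are the learning information L determined in the learning stage described below.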
When feature amounts are extracted using a plurality of image pyramids, the feature amount given to the input layer is not an n-dimensional vector (x1 to xn) but a V × n-dimensional vector (where V is the total number of image pyramids); although the number of numerical values in the input layer of the diagram of FIG. 25 increases, there is no change in the basic configuration and operation.
The calculation for obtaining the estimated value y of the process bias for a single evaluation point E has been described above using the neural network of FIG. 25. In practice, the same calculation is performed for each of the large number of evaluation points defined on the contour lines of the figures included in the original figure pattern 10, and an estimated value y of the process bias is obtained for each evaluation point. The pattern correction unit 140 then corrects the pattern shape based on the estimated value y of the process bias obtained for each evaluation point (step S5 in FIG. 4).
As described above, the estimation calculation unit 132 is configured by a neural network and uses the learning information L, set in advance, to calculate the value of the signal transmitted to each neuron. The substance of the learning information L is the parameters W(u,v), b(u,v), and so on, described as the components of the matrices [W] and [b] in the lower parts of FIG. 26, FIG. 28, and FIG. 29. To construct such a neural network, the learning information L must therefore be obtained through a learning stage executed in advance.
That is, the neural network included in the estimation calculation unit 132 performs the process bias estimation process using, as the learning information L, the parameters W(u,v), b(u,v), and so on obtained in a learning stage that uses the dimension values obtained by actual dimension measurement of real figure patterns 20 formed on a real substrate S by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure. Such neural network learning is a known technique, but an outline of a process suitable for the learning stage of the neural network used in the present invention is briefly described here.
FIG. 30 is a flowchart showing the learning stage procedure for obtaining the learning information L used by the neural network described above. First, in step S81, a test pattern figure creation process is executed. A test pattern figure corresponds to, for example, the original figure pattern 10 shown in FIG. 2(a); simple figures such as rectangles and L-shaped figures are usually used, and in practice thousands of test pattern figures with different sizes and shapes are created. In step S82, evaluation points E are set on each test pattern figure; specifically, a large number of evaluation points E are defined at predetermined intervals on the contour lines of the individual test pattern figures. In step S83, feature amounts are extracted for each evaluation point E. The feature amount extraction process of step S83 is the same as the process described in §2 and is executed by a unit having a function equivalent to that of the feature amount extraction unit 120; the feature amounts x1 to xn are extracted for each evaluation point E according to the procedure described in §2.
• The estimation calculation unit learning process in step S84 determines the learning information L (that is, the parameters W(u, v), b(u, v), and the like) using the feature amounts extracted in step S83.
• In step S85, a lithography process is actually executed based on the test pattern figures created in step S81 to produce a real substrate S. In step S86, the actual dimensions of each figure in the real figure pattern formed on the real substrate S are measured. These measurement results are used in the learning process of step S84.
• The learning stage shown in FIG. 30 thus consists of processing executed on a computer (steps S81 to S84, that is, processing executed by a computer program) and processing executed on a real substrate (steps S85 and S86). The estimation calculation unit learning process in step S84 determines the learning information L used by the neural network based on the feature amounts x1 to xn obtained by the processing on the computer and the actual dimensions measured on the real substrate.
• FIG. 31 is a flowchart showing the detailed procedure of the estimation calculation unit learning process of step S84 in the flowchart shown in FIG. 30.
• In step S841, the design position and feature amounts of each evaluation point are input. The design position of an evaluation point is its position on the test pattern figure as set in step S82, and its feature amounts are the feature amounts x1 to xn extracted in step S83.
• In step S842, the actual position of each evaluation point is input. The actual position of an evaluation point is determined based on the actual dimensions of each figure on the real substrate S measured in step S86.
• In step S843, the actual bias of each evaluation point is calculated. This actual bias corresponds to the amount of deviation between the design position of the evaluation point input in step S841 and the actual position of the evaluation point input in step S842.
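The deviation amount of step S843 can be sketched as a projection of the design-to-actual displacement onto the contour normal. This is a hypothetical helper: the patent only states that the bias is the deviation between the two positions (and, below, that it is taken along the normal direction of the contour line).

```python
# Hypothetical sketch: signed actual bias of one evaluation point, taken as the
# displacement from the design position to the measured position projected onto
# the (unit) outward contour normal.
def actual_bias(design_pt, actual_pt, normal):
    dx = actual_pt[0] - design_pt[0]
    dy = actual_pt[1] - design_pt[1]
    return dx * normal[0] + dy * normal[1]
```

A point on a right-hand side with outward normal (1, 0) that moved 0.5 units outward thus gets bias +0.5; a shrunken pattern gives negative values.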
• For example, in the case of a rectangular test pattern figure, evaluation points E11, E12, E13, and so on are set on the outline of the rectangle in step S82, and the feature amounts x1 to xn are extracted for each of the evaluation points E11, E12, and E13 in step S83. The position and feature amounts of each evaluation point E11, E12, E13 are then input in step S841.
• Meanwhile, a real figure pattern 20 such as that shown in FIG. 3B is formed on the real substrate S by the lithography process in step S85, and its actual dimensions are measured in step S86. For example, the actual dimensions are measured for each side of the rectangle, and the actual positions E21, E22, and E23 of the evaluation points are determined as the points obtained by moving the evaluation points E11, E12, and E13 in the normal direction of the contour line. The actual positions E21, E22, E23 of each evaluation point are input in step S842.
• In step S843, the actual biases y11, y12, y13 are calculated as the differences between the design positions of the evaluation points E11, E12, E13 and the actual positions E21, E22, E23. Alternatively, when the measured dimension of a side differs from the design dimension by a, the actual bias may be determined as the value y obtained by dividing a by 2.
• In practice, test pattern figures on the order of several thousand are created, and a large number of evaluation points are set on the contour lines of each figure, so the processing of steps S841 to S843 is executed for each of a huge number of evaluation points. In this way, a combination of the feature amounts x1 to xn and the actual bias y is prepared as learning material for each evaluation point.
• In step S844, the parameters W and b are set to initial values. Here, the parameters W and b are the parameters W(u, v), b(u, v), and the like described as the components of the matrices [W] and [b] in the lower part of FIG. 26, the lower part of FIG. 28, and the lower part of the subsequent figure, and they constitute the learning information L. As the initial values, random values may be given using random numbers. At this point, therefore, the parameters W and b constituting the learning information L are merely random values.
• In step S845, an operation for estimating the process bias y from the feature amounts x1 to xn is executed. For this purpose, a neural network such as that shown in FIG. 24 is prepared. At this stage, random numbers have been given as initial values to the parameters W and b constituting the learning information L, so this neural network cannot yet perform its normal function as the estimation calculation unit 132. The feature amounts x1 to xn input in step S841 are given to the input layer of this incomplete neural network, a calculation using the learning information L consisting of incomplete values is performed, and an estimated value y of the process bias is calculated at the output layer. Naturally, the estimated value y obtained initially is far from the observed actual bias.
• In step S846, the residual with respect to the actual bias is calculated. That is, the difference between the estimated value y of the process bias obtained in step S845 and the actual bias calculated in step S843 is obtained, and this difference is taken as the residual for the evaluation point E. If this residual is less than or equal to a predetermined tolerance, the learning information L at that time (that is, the parameters W(u, v), b(u, v), and the like) is sufficiently practical, and the learning stage can be completed.
• Step S847 determines whether the learning stage can be completed. In practice, since a residual is obtained for each of a large number of evaluation points, a determination method may be adopted in which, for example, learning is judged complete when the improvement in the residual sum of squares falls below a specified value. If learning is not judged complete, the parameters W(u, v), b(u, v), and the like are updated in step S848. Specifically, their values are increased or decreased by predetermined amounts in the direction that reduces the residual.
• The processing of steps S845 to S848 is then repeated until a positive determination is made in step S847. During this repetition, the values of the parameters W(u, v), b(u, v), and the like constituting the learning information L are gradually corrected in the direction that reduces the residual, until finally a positive determination is made in step S847 and the learning stage ends.
• The learning information L (parameters W(u, v), b(u, v), and the like) obtained at the end of learning is information suited to obtaining, at the output layer, an estimated value y of the process bias close to the actual bias. Therefore, a neural network incorporating this learning information L functions as the estimation calculation unit 132 of the present invention.
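The loop of steps S844 to S848 can be sketched as follows. For illustration only, a trivial "network" y = w*x + b with a single feature and a plain gradient update is assumed; the actual network has hidden layers and many parameters W(u, v), b(u, v), but the control flow (random initialization, estimation, a completion check on the improvement of the residual sum of squares, parameter update) is the same.

```python
import random

# Simplified sketch of the learning loop S844-S848 with a one-feature linear
# model y = w*x + b standing in for the neural network.
def fit(samples, lr=0.01, tol=1e-9, max_iter=10000):
    w, b = random.random(), random.random()   # S844: random initial parameters
    prev_rss = float("inf")
    for _ in range(max_iter):
        rss = 0.0
        gw = gb = 0.0
        for x, y_actual in samples:           # S845: estimate bias from features
            y_est = w * x + b
            r = y_est - y_actual              # S846: residual vs. actual bias
            rss += r * r
            gw += 2 * r * x
            gb += 2 * r
        if prev_rss - rss < tol:              # S847: improvement in residual sum
            break                             #        of squares below tolerance
        prev_rss = rss
        w -= lr * gw / len(samples)           # S848: update parameters in the
        b -= lr * gb / len(samples)           #        direction reducing residual
    return w, b
```

Fed learning material generated by y = 2x + 1, the loop recovers w close to 2 and b close to 1 before the improvement criterion triggers.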
• << Figure pattern shape estimation method >> The present invention has so far been described as the figure pattern shape estimation apparatus 100 ′ or the figure pattern shape correction apparatus 100 having the structure shown in FIG. 1. Here, the present invention will be briefly described as a method invention, namely a figure pattern shape estimation method.
• This method is a figure pattern shape estimation method for estimating, by simulating a lithography process using an original figure pattern, the shape of the real figure pattern formed on a real substrate. The method comprises: an original figure pattern input stage in which a computer inputs the original figure pattern 10 including contour line information indicating the boundary between the inside and the outside of a figure (the pattern created in step S1 in FIG. 4 is input); an evaluation point setting stage (step S2 in FIG. 4) in which the computer sets evaluation points E at predetermined positions on the contour lines of the input figure; a feature amount extraction stage (step S3 in FIG. 4) in which the computer extracts feature amounts around each evaluation point of the input original figure pattern 10; and a process bias estimation stage (step S4 in FIG. 4) for estimating a process bias y indicating the amount of deviation between each evaluation point's position on the original figure pattern 10 and its position on the real figure pattern 20.
• Here, the feature amount extraction stage includes: an original image creation stage (step S32 in FIG. 7) that creates an original image Q1 composed of a collection of pixels U each having a predetermined pixel value; an image pyramid creation stage (steps S33 to S35 in FIG. 7) that creates an image pyramid PP composed of hierarchical images P1 to Pn by image pyramid creation processing including reduction processing (step S35 in FIG. 7) for creating reduced images Qk (preparation images) based on the original image Q1; and a feature amount calculation stage (step S37 in FIG. 7) that calculates the feature amounts x1 to xn based on the pixel values of the pixels in the vicinity of each evaluation point E in each of the hierarchical images P1 to Pn constituting the created image pyramid PP. The process bias estimation stage includes an estimation calculation stage that, based on learning information L obtained in a learning stage performed in advance, calculates an estimated value corresponding to the feature amounts x1 to xn for an evaluation point E and outputs the calculated value as the estimated value y of the process bias for that evaluation point E. In the image pyramid creation stage, a filter processing stage that applies a predetermined image processing filter to the original image Q1 or a reduced image Qk, and a reduction processing stage (step S35 in FIG. 7) that performs reduction processing on the filtered image Pk, may be provided, whereby an image pyramid PP consisting of the hierarchical images can be created.
  • FIG. 32 is a block diagram showing a configuration of a figure pattern shape correcting apparatus 200 according to an additional embodiment of the present invention.
• As shown, the figure pattern shape correction apparatus 200 includes an evaluation point setting unit 110, a feature amount extraction unit 220, a bias estimation unit 130, and a pattern correction unit 140. Of these, the three units consisting of the evaluation point setting unit 110, the feature amount extraction unit 220, and the bias estimation unit 130 constitute a figure pattern shape estimation apparatus 200 ′ according to an additional embodiment of the present invention, and the shape correction apparatus 200 is configured by adding the pattern correction unit 140 as a fourth unit to this figure pattern shape estimation apparatus 200 ′.
• That is, the figure pattern shape estimation apparatus 200 ′ shown in FIG. 32, like the figure pattern shape estimation apparatus 100 ′ shown in FIG. 1, has the function of estimating, by simulating the lithography process using the original figure pattern 10, the shape of the real figure pattern 20 formed on the real substrate S. For this purpose, the figure pattern shape estimation apparatus 200 ′ includes: the evaluation point setting unit 110, which sets evaluation points E on the original figure pattern 10; the feature amount extraction unit 220, which extracts feature amounts x indicating features around the individual evaluation points E of the original figure pattern 10; and the bias estimation unit 130, which, based on the feature amounts x, estimates the process bias y indicating the amount of deviation between each evaluation point E's position on the original figure pattern 10 and its position on the real figure pattern 20.
• The evaluation point setting unit 110 sets a plurality of evaluation points E at predetermined positions on the contour lines based on the original figure pattern 10, which includes contour line information indicating the boundary between the inside and the outside of each figure. The specific evaluation point setting process is as already described in §1.
• The feature amount extraction unit 220 extracts, for each evaluation point E set in this way, feature amounts x indicating features around that evaluation point on the original figure pattern 10. In practice, a plurality of n feature amounts x1 to xn are extracted for each evaluation point E, and these feature amounts x1 to xn are calculated by operations based on predetermined calculation functions defined in advance.
• The bias estimation unit 130 includes a feature amount input unit 131 that inputs the feature amounts x1 to xn calculated for each evaluation point E, and an estimation calculation unit 132 that, based on learning information L obtained in a learning stage performed in advance, calculates an estimated value corresponding to the feature amounts x1 to xn and outputs it as the estimated value y of the process bias for that evaluation point E. The specific estimation calculation method is as already described in §3.
• The figure pattern shape correction apparatus 200 has the function of correcting the shape of the original figure pattern 10 by using the figure pattern shape estimation apparatus 200 ′ described above, and is configured by adding the pattern correction unit 140 to the evaluation point setting unit 110, the feature amount extraction unit 220, and the bias estimation unit 130 that constitute the estimation apparatus 200 ′. The pattern correction unit 140 corrects the original figure pattern 10 based on the estimated value y of the process bias output from the bias estimation unit 130. The corrected figure pattern 15 obtained by this correction is then given to the figure pattern shape estimation apparatus 200 ′ as a new original figure pattern, whereby correction of the figure pattern is performed repeatedly. The specific method of this correction processing is as already described in §1.
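The repeated correction described above can be sketched in one dimension as follows. This is a deliberately simplified illustration: each "point" is a scalar position and the bias estimator is passed in as a function, whereas the real apparatus operates on contour points and re-extracts features on each round.

```python
# Hypothetical 1-D sketch of the feedback loop: estimate the process bias at
# each point, shift the pattern against it, and feed the corrected pattern back
# in as the new original pattern.
def correct_pattern(pattern, estimate_bias, n_rounds=3):
    for _ in range(n_rounds):
        biases = [estimate_bias(pt) for pt in pattern]
        pattern = [pt - y for pt, y in zip(pattern, biases)]
    return pattern
```

With a constant estimated bias of +0.25 per round, three rounds shift every point by 0.75 in the opposite direction.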
• As described above, the shape correction apparatus 100 (or shape estimation apparatus 100 ′) shown in FIG. 1 and the figure pattern shape correction apparatus 200 (or shape estimation apparatus 200 ′) shown in FIG. 32 perform essentially the same processing, but the feature extraction mechanism differs between the two. To explain this difference, the basic configuration and basic operation of the feature amount extraction unit 220 shown in FIG. 32 are described below.
• The feature amount extraction unit 120 shown in FIG. 1 performs image pyramid creation processing, including reduction processing, on the original image Q1 to create an image pyramid PP composed of a plurality of n hierarchical images P1 to Pn of different sizes, and calculates the feature amounts for an evaluation point E based on the pixel values of the pixels corresponding to the position of that evaluation point in each of the hierarchical images P1 to Pn.
• In contrast, the feature amount extraction unit 220 shown in FIG. 32 uses n calculation functions Z1(X, Y) to Zn(X, Y) instead of the n hierarchical images P1 to Pn, and calculates n feature amounts x1 to xn for the evaluation point E(X, Y) located at the coordinates (X, Y). The n calculation functions Z1(X, Y) to Zn(X, Y) are suited to calculating a plurality of n types of feature amounts x1 to xn with different consideration ranges, from the feature amount x1, which considers a narrow range in the vicinity of the evaluation point E(X, Y), to the feature amount xn, which considers a wide range around it. It is therefore possible to obtain feature amounts that capture various features from the vicinity of each evaluation point out to a distance, and to perform an accurate simulation that takes into account the effects of phenomena of different scales, such as proximity effects and etching loading phenomena.
• Thus, the basic embodiment described in §1 to §4 and the additional embodiment described in §5 are based on a common technical idea, namely performing an accurate simulation that considers the effects of phenomena of different scales, but the specific methods for extracting the feature amounts x1 to xn differ slightly. In the former, the feature amounts x1 to xn are extracted using the image pyramid PP composed of a plurality of n kinds of hierarchical images P1 to Pn, whereas in the additional embodiment described here, the feature amounts x1 to xn are extracted using a plurality of n calculation functions Z1(X, Y) to Zn(X, Y).
• Hereinafter, the feature amount extraction method in this additional embodiment will be described in detail with reference to a specific example.
• As shown in FIG. 32, the feature amount extraction unit 220 includes a rectangular aggregate replacement unit 221, a feature amount calculation unit 222, and a calculation function providing unit 223. The rectangular aggregate replacement unit 221 replaces the figures included in the original figure pattern 10 with rectangular aggregates. The calculation function providing unit 223 provides calculation functions for calculating the feature amounts for an evaluation point based on its positional relationship with the rectangles located around it, and the feature amount calculation unit 222 calculates the feature amounts for each evaluation point set by the evaluation point setting unit 110 using the calculation functions provided by the calculation function providing unit 223.
  • FIG. 33 is a plan view showing an example of processing for replacing the original graphic pattern 10 with the rectangular aggregate 50 by the rectangular aggregate replacing unit 221.
• In this example, the original figure pattern 10, which includes a figure of arbitrary shape as shown in FIG. 33(a), is replaced by a rectangular aggregate 50 composed of four rectangles (each shown hatched) as shown in FIG. 33(b). As illustrated, the outline of the entire rectangular aggregate 50 obtained by this replacement matches the outline of the figure included in the original figure pattern 10. Such replacement can be performed by dividing the figure included in the original figure pattern 10 into a plurality of rectangles, which is possible because the figure is a polygon having sides parallel to the X axis and sides parallel to the Y axis. In practice, patterns used in semiconductor integrated circuits are often composed of such regular polygons. Even when a figure included in the original figure pattern 10 is not a regular figure consisting of sides parallel to the X axis or the Y axis, it can be divided into a plurality of regular rectangles by approximating its outline.
• FIG. 34 is a plan view showing an example of the process by which the rectangular aggregate replacement unit 221 replaces arbitrarily shaped figures included in the original figure pattern 10 with a rectangular aggregate 50 made of regular rectangles. FIG. 34(a) shows an example of an original figure pattern 10 including two figures: a pentagon of arbitrary shape drawn in the upper part of the figure and a circle drawn in the lower part. Each side of the pentagon faces an arbitrary direction and is not necessarily parallel to the X axis or the Y axis, and the boundary of the circle is not a side but a circumference. Even so, by approximating their outlines with regular figure outlines (indicated by broken lines in the figure), each can be divided into a plurality of regular rectangles (indicated by solid lines in the figure).
• Thus, the outline of the rectangular aggregate 50 obtained by the replacement processing of the rectangular aggregate replacement unit 221 need not exactly match the outline of the figure included in the original figure pattern 10; it is sufficient if the two outlines approximately match.
• The rectangles constituting the rectangular aggregate 50 obtained by the replacement processing of the rectangular aggregate replacement unit 221 need not necessarily be regular rectangles (rectangles having two sides parallel to the X axis and two sides parallel to the Y axis when an XY two-dimensional orthogonal coordinate system is defined), but using an aggregate of regular rectangles is preferable because it reduces the computational burden of the feature amount calculation unit 222. Accordingly, an embodiment in which the rectangular aggregate replacement unit 221 produces a rectangular aggregate 50 composed of regular rectangles is described below.
• In general, each rectangle constituting the rectangular aggregate 50 can be expressed by vector data indicating its four sides. In particular, since the rectangular aggregate 50 here is a set of regular rectangles arranged in an XY two-dimensional orthogonal coordinate system, each rectangle can be defined by the XY coordinate values of two diagonal points (for example, an upper-left corner point and a lower-right corner point).
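A minimal sketch of this two-diagonal-point representation, assuming a lower-left / upper-right corner convention (the class and field names are illustrative, not from the source):

```python
from dataclasses import dataclass

# Hypothetical sketch: a regular (axis-parallel) rectangle stored via two
# diagonal corner points, from which the four side coordinates are recovered.
@dataclass(frozen=True)
class Rect:
    left: float    # X coordinate of the left side
    bottom: float  # Y coordinate of the bottom side
    right: float   # X coordinate of the right side
    top: float     # Y coordinate of the top side

    @classmethod
    def from_corners(cls, lower_left, upper_right):
        (li, bi), (ri, ti) = lower_left, upper_right
        return cls(li, bi, ri, ti)
```

Storing only two corner points per rectangle keeps the aggregate compact while still giving direct access to each of the four sides.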
• The feature amount calculation unit 222 calculates the feature amounts for a given evaluation point based on its positional relationship with the rectangles located around it. FIG. 35(a) is a diagram illustrating the principle of this feature amount calculation. Here, assuming that the rectangular aggregate replacement unit 221 has defined a rectangular aggregate 50 composed of five regular rectangles F1 to F5 on the XY two-dimensional orthogonal coordinate system as shown in the figure, the principle of calculating the feature amount x for one evaluation point is explained; more specifically, consider the case where the feature amount x is calculated for the evaluation point E(X, Y) set on the right side of the rectangle F3. Here, uppercase X and Y are used as the coordinate values of the XY two-dimensional orthogonal coordinate system. The feature amount x for the evaluation point E(X, Y) set at the coordinates (X, Y) is calculated based on the positional relationships between the evaluation point E(X, Y) and the individual rectangles F1 to F5.
• FIG. 35(b) shows an example of the n feature amounts x1 to xn and the calculation functions used to calculate them. Specifically, the first calculation function Z1(X, Y) is used to calculate the first feature amount x1, the second calculation function Z2(X, Y) is used to calculate the second feature amount x2, and so on, up to the nth calculation function Zn(X, Y), which is used to calculate the nth feature amount xn. Each calculation function gives a predetermined function value using the coordinate values X and Y of the evaluation point E(X, Y) as variables.
• In these functions, fhi(σ1) on the right side is a horizontal direction function for the i-th rectangle Fi, serving to indicate, as a numerical value, the horizontal positional relationship (the X-axis direction in the example of FIG. 35(a)) between the evaluation point E(X, Y) and the rectangle Fi. For example, the function value of the horizontal function fh1(σ1) is determined according to the deviations between the X coordinate values of the left and right sides of the rectangle F1 and the X coordinate value of the evaluation point E(X, Y). Here, σ1 is a spread coefficient, a parameter that determines the degree of spread of the function in the X-axis direction. Similarly, fvi(σ1) on the right side is a vertical direction function for the i-th rectangle Fi, indicating as a numerical value the vertical positional relationship (the Y-axis direction in the example of FIG. 35(a)) between the evaluation point E(X, Y) and the rectangle Fi. For example, the function value of the vertical function fv1(σ1) is determined according to the deviations between the Y coordinate values of the upper and lower sides of the rectangle F1 and the Y coordinate value of the evaluation point E(X, Y). Again, σ1 is a spread coefficient, here a parameter that determines the degree of spread of the function in the Y-axis direction.
• The coefficient K on the right side is a predetermined constant for adjusting the scaling of the finally obtained feature amount x, and is referred to here as the feature amount calculation coefficient. In the example shown, the feature amount calculation coefficient K is set to 1/4, but K may be set to any constant.
• Thus, for the i-th rectangle Fi, the first calculation function Z1(X, Y) takes the product of the horizontal direction function fhi(σ1), which numerically indicates the horizontal positional relationship, the vertical direction function fvi(σ1), which numerically indicates the vertical positional relationship, and the feature amount calculation coefficient K; this product is computed for each of the five rectangles F1 to F5 and the results are summed. In short, the first calculation function Z1(X, Y) is a function that calculates, for a given evaluation point E(X, Y), the sum of numerical values indicating its horizontal and vertical positional relationships with the individual rectangles F1 to F5 constituting the rectangular aggregate 50. The horizontal direction function fh1(σ1) and the vertical direction function fv1(σ1) include the coordinate values X and Y of the evaluation point E(X, Y) as variables, and their function values are obtained by giving these coordinate values as variables. The function value calculated using the first calculation function Z1(X, Y) in this way is used as the first feature amount x1.
• The only difference between the first calculation function Z1(X, Y) and the second calculation function Z2(X, Y) is that the former uses the functions fhi(σ1) and fvi(σ1) whereas the latter uses fhi(σ2) and fvi(σ2): fhi(σ1) and fvi(σ1) are functions using the spread coefficient σ1, while fhi(σ2) and fvi(σ2) are functions using the spread coefficient σ2. The third and subsequent calculation functions are defined in the same way, so that in total the n calculation functions Z1(X, Y) to Zn(X, Y) can be defined by varying the spread coefficient σ.
• In short, these calculation functions can be regarded as functions for calculating a feature amount x based on the positional relationships between one evaluation point E and the four sides of each rectangle located around it.
• Thus, the calculation function providing unit 223 provides the n calculation functions Z1(X, Y) to Zn(X, Y) using the n spread coefficients σ1 to σn. The feature amount calculation unit 222 then calculates the n feature amounts x1 to xn for one evaluation point E by arithmetic processing using these n calculation functions; that is, the function values obtained by giving the coordinate values X and Y of the evaluation point E(X, Y) as variables to the respective calculation functions Z1(X, Y) to Zn(X, Y) are calculated as the feature amounts x1 to xn.
• As described above, the spread coefficient σ is a parameter that determines the degree of spread of the function in the X-axis or Y-axis direction. Accordingly, if the n calculation functions Z1(X, Y) to Zn(X, Y) are provided using n spread coefficients σ1 to σn, ranging from a spread coefficient σ1 giving a narrow spread to a spread coefficient σn giving a wide spread, a plurality of n feature amounts x1 to xn with different consideration ranges can be calculated, from the feature amount x1, which considers a narrow range near the evaluation point E(X, Y), to the feature amount xn, which considers a wide range extending far from the evaluation point E(X, Y). As a result, the n calculation functions Z1(X, Y) to Zn(X, Y) provide the same operational effect as the image pyramid PP composed of the n hierarchical images P1 to Pn described earlier.
• In other words, the calculation function providing unit 223 provides a plurality of n types of calculation functions with different consideration ranges, from the feature amount x1, which considers a narrow range in the vicinity of the evaluation point E(X, Y), to the feature amount xn, which considers a wide range extending far from it, and the feature amount calculation unit 222 uses these n types of calculation functions to calculate a plurality of n types of feature amounts x1 to xn for each evaluation point.
• As in the basic embodiment, the bias estimation unit 130 includes a feature amount input unit 131 that inputs the feature amounts x1 to xn for a specific evaluation point E extracted by the feature amount extraction unit 220, and an estimation calculation unit 132 that, based on predetermined learning information L, outputs an estimated value corresponding to the feature amounts x1 to xn as the estimated value y of the process bias for that evaluation point E. The estimation calculation unit 132 has a neural network whose input layer receives the feature amounts x1 to xn input by the feature amount input unit 131 and whose output layer gives the process bias estimate y. This neural network performs the process bias estimation using, as learning information L, parameters obtained in a learning stage that uses the dimension values obtained by actual dimension measurement of real figure patterns 20 formed on a real substrate S by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure. The estimation calculation unit 132 obtains, as the estimated value y of the process bias for an evaluation point E located on the contour line of a figure, an estimated value of the amount of deviation of that evaluation point in the normal direction of the contour line.
• The additional embodiment can also be realized as a method invention. In that case, the method is a figure pattern shape estimation method for estimating, by simulating a lithography process using the original figure pattern 10, the shape of the real figure pattern 20 formed on the real substrate S. The method is realized by causing a computer to execute: an original figure pattern input stage for inputting the original figure pattern 10 including contour line information indicating the boundary between the inside and the outside of a figure; an evaluation point setting stage for setting evaluation points E at predetermined positions on the contour lines; a feature amount extraction stage for extracting feature amounts around each evaluation point; and a process bias estimation stage for estimating the process bias y indicating the amount of deviation between each evaluation point's position on the original figure pattern 10 and its position on the real figure pattern 20. Here, the feature amount extraction stage includes a rectangular aggregate replacement stage that replaces the figures included in the original figure pattern 10 with rectangular aggregates, and a feature amount calculation stage that calculates the feature amounts x based on the positional relationship of each evaluation point E with the surrounding rectangles; the process bias estimation stage includes an estimation calculation stage that obtains an estimated value corresponding to the feature amounts x based on the learning information L obtained in the learning stage performed in advance and outputs the estimated value as the estimated value y of the process bias for the evaluation point E.
• FIG. 35(b) shows an example of the basic form of the n calculation functions Z1(X, Y) to Zn(X, Y) provided by the calculation function providing unit 223. The function value of the k-th function Zk(X, Y) is output from the feature amount calculation unit 222 as the value of the k-th feature amount xk.
• q is the total number of surrounding rectangles for which the positional relationship with the evaluation point is to be obtained,
• i is a parameter indicating a rectangle number (1 ≤ i ≤ q),
• σk is the kth spread coefficient,
• and K is a feature quantity calculation coefficient.
• In the illustrated example, K is set to 1/4; the feature quantity calculation coefficient K is a coefficient that determines the scaling of the feature quantity x, and may be set to an arbitrary value.
• fhi(σk) is a horizontal direction function for the i-th rectangle Fi, and is a factor indicating the positional relationship in the horizontal direction (X-axis direction) between the evaluation point E(X, Y) and the rectangle Fi.
  • fvi ( ⁇ k) is a vertical function for the i-th rectangle Fi, and is a factor indicating the positional relationship in the vertical direction (Y-axis direction) between the evaluation point E (X, Y) and the rectangle Fi.
• The horizontal direction function fhi(σk) and the vertical direction function fvi(σk) will now be described in detail.
• fhi(σk) = erf[(X − Li)/σk] − erf[(X − Ri)/σk]
• fvi(σk) = erf[(Y − Bi)/σk] − erf[(Y − Ti)/σk]
• i is a parameter indicating a rectangle number (1 ≤ i ≤ q),
• q is the total number of rectangles,
• k is a parameter indicating a calculation function number (1 ≤ k ≤ n),
• n is the total number of feature quantities (the total number of calculation functions), and σk is the kth spread coefficient.
• Li, Ri, Ti, and Bi are the coordinate values of the four sides of the i-th regular rectangle Fi, as shown in the lower part of FIG. Specifically, Li is the X coordinate value of the left side L of the rectangle Fi, Ri is the X coordinate value of the right side R, Ti is the Y coordinate value of the upper side T, and Bi is the Y coordinate value of the lower side B.
  • the regular rectangle Fi arranged on the XY two-dimensional orthogonal coordinate system can be represented by, for example, the coordinate value (Li, Bi) of the lower left corner point LB and the coordinate value (Ri, Ti) of the upper right corner point RT.
• X and Y, which are assigned as variables to the above functions, are the X coordinate value and the Y coordinate value of the evaluation point E(X, Y) that is the target of the feature quantity calculation, as shown in the lower part of FIG.
• As described above, the rectangular aggregate replacement unit 221 replaces the figures included in the original figure pattern 10 with a plurality of regular rectangles, and the data representing each regular rectangle, expressed by the coordinate values (Li, Bi) and (Ri, Ti),
• can be given to the feature amount calculation unit 222. The feature amount calculation unit 222 therefore enters the coordinate values Li, Bi, Ri, Ti of the rectangle Fi given from the rectangular aggregate replacement unit 221 into the horizontal direction function fhi(σk) and the vertical direction function fvi(σk),
• and can calculate each function value by further entering, as variables, the coordinate values X and Y of the evaluation point E(X, Y) given from the evaluation point setting unit 110.
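To make the roles of these two factor functions concrete, they can be sketched in Python with the standard `math.erf`; the rectangle coordinates used below (Li = 0, Ri = 4, Bi = 0, Ti = 2) are hypothetical values chosen purely for illustration:

```python
import math

def fhi(X, Li, Ri, sigma_k):
    """Horizontal direction factor: fhi(sigma_k) = erf[(X-Li)/sigma_k] - erf[(X-Ri)/sigma_k]."""
    return math.erf((X - Li) / sigma_k) - math.erf((X - Ri) / sigma_k)

def fvi(Y, Bi, Ti, sigma_k):
    """Vertical direction factor: fvi(sigma_k) = erf[(Y-Bi)/sigma_k] - erf[(Y-Ti)/sigma_k]."""
    return math.erf((Y - Bi) / sigma_k) - math.erf((Y - Ti) / sigma_k)

# Hypothetical rectangle Fi with Li=0, Ri=4 (X direction) and Bi=0, Ti=2 (Y direction)
print(fhi(2.0, 0.0, 4.0, 1.0))   # evaluation point at the centre of gravity: large value
print(fhi(10.0, 0.0, 4.0, 1.0))  # evaluation point far to the right: value near 0
```

The value is largest when the evaluation point lies between the two sides of the rectangle and decays toward zero as the point moves away, which is exactly the mountain-shaped behaviour discussed in this section.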
  • FIG. 37 is a diagram for explaining the error function erf ( ⁇ ).
• As is well known, the error function erf(ξ) is a function defined by the mathematical formula shown in FIG. 37(a), and takes a function value in the range −1 < erf(ξ) < +1 for an arbitrary variable ξ. As shown in FIG. 37(b), +erf(ξ) is a function whose value monotonically increases from −1 to +1 as the variable ξ increases, while −erf(ξ), as shown in FIG. 37(c), is a function whose value monotonically decreases from +1 to −1 as the variable ξ increases.
• FIG. 38(a) is a diagram showing the mutual positional relationship between the i-th rectangle Fi arranged on the XY two-dimensional orthogonal coordinate system and an evaluation point E arranged at an arbitrary position. Since the rectangle Fi is a regular rectangle, its upper and lower sides are parallel to the X axis and its left and right sides are parallel to the Y axis. Since the horizontal direction function fhi(σk) is a factor indicating the positional relationship in the horizontal direction, eleven evaluation points E1 to E11 arranged at equal intervals on the horizontal line indicated by the broken line are illustrated in the figure. Points Li and Ri on the coordinate axis X indicate the X coordinate values of the left and right sides of the rectangle Fi.
• Here, the positional relationship in the horizontal direction between a specific evaluation point E and the rectangle Fi is quantified using the left side position deviation, indicating the distance between the left side L of the rectangle Fi and the evaluation point E, and the right side position deviation, indicating the distance between the right side R of the rectangle Fi and the evaluation point E.
• When focusing on the left side L of the rectangle Fi (in the figure, the left side L is indicated by a thick line), the left side position deviation of each of the evaluation points E1 to E11 corresponds to the distance between the X coordinate value Li of the left side L and the X coordinate value of each of the evaluation points E1 to E11,
• and the horizontal direction function fhi(σk) takes a function value corresponding to this distance.
  • FIG. 38 (b) shows a state in which the graph of the error function + erf (X ⁇ Li) is arranged with reference to the position of the left side L of the rectangle Fi shown in FIG. 38 (a).
• As described above, the error function +erf(ξ) is a function whose value monotonically increases from −1 to +1 as ξ increases. When the graph of +erf(X − Li) is arranged so that its function value becomes 0 at the position of the left side L of the rectangle Fi, a graph as shown in FIG. 38(b) is obtained.
• For an evaluation point located exactly on the left side L (in the illustrated example, evaluation point E3), the value of +erf(X − Li) is zero.
• For evaluation points located to the left of the left side L, such as evaluation points E2 and E1, the value of +erf(X − Li) is negative,
• and negative function values e2 and e1 are obtained by substituting their X coordinate values into the error function +erf(X − Li).
• Conversely, for evaluation points located to the right of the left side L, such as evaluation points E4, E5, ..., the value of +erf(X − Li) is positive,
• and positive function values e4, e5, ... are obtained by substituting their X coordinate values into the error function +erf(X − Li) (values such as e1 are the ordinate values of the corresponding points such as e1 on the graph).
• As the variable value "X − Li" increases, the function value +erf(X − Li) eventually reaches the upper limit value +1 and saturates. In the illustrated example, the function values e8, e9, e10, e11 are therefore all at the upper limit value +1, and the function value is also +1 for evaluation points (not shown) arranged further to the right. Conversely, as the variable value "X − Li" decreases, the function value +erf(X − Li) eventually reaches the lower limit value −1 and saturates.
• FIG. 39(a) is, like FIG. 38(a), a diagram showing the positional relationship in the horizontal direction between the i-th rectangle Fi arranged on the XY two-dimensional orthogonal coordinate system and the eleven evaluation points E1 to E11.
• As described above, the positional relationship in the horizontal direction is quantified by the left side position deviation, indicating the distance from the left side L, and the right side position deviation, indicating the distance from the right side R;
• here, attention is paid to the right side position deviation.
• When focusing on the right side R of the rectangle Fi (in the figure, the right side R is indicated by a thick line), the right side position deviation of each of the evaluation points E1 to E11 corresponds to the distance between the X coordinate value Ri of the right side R and the X coordinate value of each of the evaluation points E1 to E11,
• and the horizontal direction function fhi(σ1) takes a function value corresponding to this distance.
  • FIG. 39 (b) shows a state in which the graph of the error function ⁇ erf (X ⁇ Ri) is arranged with reference to the position of the right side R of the rectangle Fi shown in FIG. 39 (a).
• For an evaluation point located exactly on the right side R (in the illustrated example, evaluation point E8), the value of −erf(X − Ri) is zero.
• For evaluation points located to the right of the right side R, such as evaluation points E9 and E10, the value of −erf(X − Ri) is negative,
• and negative function values e9 and e10 are obtained by substituting their X coordinate values into the error function −erf(X − Ri).
• Conversely, for evaluation points located to the left of the right side R, such as evaluation points E7, E6, E5, ...,
• the value of −erf(X − Ri) is positive,
• and positive function values e7, e6, ... are obtained by substituting their X coordinate values into the error function −erf(X − Ri).
• As the variable value "X − Ri" increases, the function value −erf(X − Ri) eventually reaches the lower limit value −1 and saturates. Conversely, as the variable value "X − Ri" decreases, the function value −erf(X − Ri) eventually reaches the upper limit value +1 and saturates. In the illustrated example, the function values e1, e2, e3 are therefore all at the upper limit value +1.
• fhi(σ1) = erf(X − Li) − erf(X − Ri)
• The horizontal direction function fhi(σ1) defined by the above formula is the sum of the function value +erf(X − Li), corresponding to the left side position deviation indicating the distance from the left side L, and the function value −erf(X − Ri), corresponding to the right side position deviation indicating the distance from the right side R.
• Here, since σ1 = 1, the horizontal direction function fhi(σ1) is simply written as fhi.
• FIG. 40(a) is, like FIGS. 38(a) and 39(a), a diagram showing the positional relationship in the horizontal direction between the i-th rectangle Fi arranged on the XY two-dimensional orthogonal coordinate system and the eleven evaluation points E1 to E11.
• Here, attention is paid to both the left side L, which is the reference for the left side position deviation,
• and the right side R, which is the reference for the right side position deviation.
• FIG. 40(b) is a graph showing the sum of the function +erf(X − Li) shown in FIG. 38(b) and the function −erf(X − Ri) shown in FIG. 39(b).
• In other words, FIG. 40(b) shows the graph of the horizontal direction function fhi itself (the horizontal axis is the coordinate value X, which is the variable of the horizontal direction function fhi).
• As shown in the figure, the graph of the horizontal direction function fhi is bilaterally symmetric about the position of the center of gravity G of the rectangle Fi.
• The left and right ends of the function +erf(X − Li) take the saturation values −1 and +1, respectively,
• and the left and right ends of the function −erf(X − Ri) take the saturation values +1 and −1, respectively. Therefore, the graph of the horizontal direction function fhi shown in FIG. 40(b) draws a mountain-like curve that has a peak in the vicinity of the center (the center position may be slightly depressed) and gradually decreases toward the left and right, with both ends approaching zero.
  • the width of this mountain-shaped curve changes according to the lateral width dX (X-axis direction width) of the rectangle Fi.
  • the horizontal direction function fhi shown by the graph having such a curve is a function that gives a larger value to the evaluation point E closer to the peak position of the graph with respect to the horizontal positional relationship.
• That is, evaluation points whose X coordinate value is close to that of the center of gravity G, such as the evaluation points E5 and E6, are given larger function values such as e5 and e6 (the upper limit value is +2),
• while evaluation points whose X coordinate value is far from that of the center of gravity G, such as the evaluation points E1 and E11, are given smaller function values such as e1 and e11
• (the lower limit value is −2).
  • the meaning of the horizontal function fhi has been described above, the meaning of the vertical function fvi is the same. That is, the horizontal function fhi is a factor indicating the positional relationship in the horizontal direction (X-axis direction) for the specific evaluation point E of interest and the i-th rectangle Fi, whereas the vertical function fvi is This is a factor indicating the positional relationship in the vertical direction (Y-axis direction) between the specific evaluation point E of interest and the i-th rectangle Fi.
• That is, just as the horizontal direction function is given by the formula fhi(σk) = erf[(X − Li)/σk] − erf[(X − Ri)/σk] described in the second row of FIG. 35(b),
• the formula fvi(σk) = erf[(Y − Bi)/σk] − erf[(Y − Ti)/σk] described in the third row of FIG. 35(b) gives
• the vertical direction function fvi(σk).
  • the graph of the vertical function fvi also has a mountain-like curve that has a peak in the vicinity of the center and gradually decreases toward the left and right.
  • the horizontal axis of the graph is the Y axis, and the width of this mountain-shaped curve changes according to the vertical width dY (Y-axis direction width) of the rectangle Fi.
  • FIG. 41 is a diagram showing a positional relationship between the rectangle Fi, the graph of the horizontal function fhi, and the graph of the vertical function fvi.
  • the graph of the horizontal function fhi is a graph of a function that gives the function value fhi using the X coordinate value as a variable, and is a symmetric graph centered on the X coordinate of the gravity center G of the rectangle Fi ( The center of the graph is indicated by a one-dot chain line), and a mountain-like curve showing a peak in the vicinity of the center is drawn.
• On the other hand, the graph of the vertical direction function fvi is drawn with the direction of its axis reversed so as to match the coordinate system in which the rectangle Fi is arranged, as shown on the left side of the figure. It is the graph of a function that gives the function value fvi with the Y coordinate value as a variable, and is a symmetric graph centered on the Y coordinate of the center of gravity G of the rectangle Fi (the center of the graph is indicated by a one-dot chain line); it likewise draws a mountain-shaped curve with a peak in the vicinity of the center.
• In the illustrated example, the width of the graph of the horizontal direction function fhi is larger than the width of the graph of the vertical direction function fvi because the horizontal width of the rectangle Fi is wider than its vertical width.
• As described above, the horizontal direction function fhi is a factor indicating the positional relationship in the horizontal direction between the specific evaluation point E of interest and the i-th rectangle Fi,
• and the vertical direction function fvi is a factor indicating the positional relationship in the vertical direction between them. Accordingly, the product fhi × fvi (or K × fhi × fvi, further multiplied by the feature quantity calculation coefficient K) is a factor indicating the positional relationship between the specific evaluation point E of interest and the i-th rectangle Fi in both the horizontal direction and the vertical direction.
• Then, the value of the calculation function Z1(X, Y), obtained as the sum over all the q rectangles F1 to Fq positioned around the specific evaluation point E of interest, is an amount indicating the positional relationship between the evaluation point and those rectangles in both the horizontal direction and the vertical direction. In the additional embodiment described here, such an amount is adopted as the feature quantity x for the specific evaluation point E.
• For example, the function value fhi, a factor indicating the positional relationship in the horizontal direction between the evaluation point E1 and the rectangle Fi, is given as the value obtained by entering the X coordinate value of the evaluation point E1 into the horizontal direction function fhi, that is, the value e11.
• Likewise, the function value fvi, a factor indicating the positional relationship in the vertical direction, is given as the value obtained by entering the Y coordinate value of the evaluation point E1 into the vertical direction function fvi, that is, the value e12.
• In the illustrated example, e11 > e12; this is because the evaluation point E1 is close to the rectangle Fi in the horizontal direction but slightly apart in the vertical direction.
• In any case, the two-dimensional positional relationship between the evaluation point E1 and the rectangle Fi can be quantified by the product fhi × fvi (in the above example, the product e11 × e12), multiplied in practice by the feature quantity calculation coefficient K for scaling.
• The first calculation function Z1(X, Y) obtains such a two-dimensional positional relationship for all the q rectangles F1 to Fq and takes their sum; its function value thus quantifies the total positional relationship between the evaluation point E1 and the q surrounding rectangles. This function value is output as the first feature quantity x1 for the evaluation point E1.
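The full calculation function, which sums the product fhi × fvi over all q surrounding rectangles and scales by K, can be sketched as follows; the two rectangles and the coefficient value K = 1/4 are illustrative assumptions:

```python
import math

def feature(X, Y, rects, sigma_k, K=0.25):
    """Feature quantity xk = K * sum over surrounding rectangles of fhi(sigma_k) * fvi(sigma_k).
    rects is a list of regular rectangles given as (Li, Bi, Ri, Ti) tuples."""
    total = 0.0
    for Li, Bi, Ri, Ti in rects:
        fh = math.erf((X - Li) / sigma_k) - math.erf((X - Ri) / sigma_k)
        fv = math.erf((Y - Bi) / sigma_k) - math.erf((Y - Ti) / sigma_k)
        total += fh * fv
    return K * total

# Two hypothetical rectangles; one evaluation point inside the first, one in the gap between them
rects = [(0.0, 0.0, 4.0, 2.0), (10.0, 0.0, 14.0, 2.0)]
inside = feature(2.0, 1.0, rects, sigma_k=1.0)
outside = feature(7.0, 1.0, rects, sigma_k=1.0)
```

An evaluation point inside a rectangle receives a large feature value, while a point in the empty gap between rectangles receives a value near zero.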
• Similarly, the function value fhi, a factor indicating the positional relationship in the horizontal direction between the evaluation point E2 and the rectangle Fi, is given as the value obtained by entering the X coordinate value of the evaluation point E2 into the horizontal direction function fhi, that is, the value e21,
• and the function value fvi, a factor indicating the positional relationship in the vertical direction, is given as the value obtained by entering the Y coordinate value of the evaluation point E2 into the vertical direction function fvi, that is, the value e22.
• In the upper part of the figure, the i-th rectangle Fi having a width dX is shown; the middle part shows a graph of the horizontal direction function fhi(σ1) including the spread coefficient σ1, and the lower part shows
• a graph of the horizontal direction function fhi(σ2) including the spread coefficient σ2.
  • the horizontal width of the fhi ( ⁇ 2) graph shown in the lower part is wider than that of the fhi ( ⁇ 1) shown in the middle part.
  • the horizontal width of the mountain-shaped graph basically corresponds to the horizontal width dX of the rectangle Fi.
  • the horizontal width can be increased or decreased by the expansion coefficient ⁇ . That is, if the spread coefficient ⁇ is doubled, the horizontal width of the mountain-shaped graph is doubled.
  • the feature quantity x1 calculated using the first calculation function Z1 (X, Y) having a small value of the spread coefficient ⁇ is a value considering only the positional relationship with the rectangle located in the vicinity of the evaluation point E.
  • the feature amount xn calculated using the nth calculation function Zn (X, Y) having a large value of the spread coefficient ⁇ is a value that takes into account the positional relationship with a rectangle positioned far from the evaluation point E. .
• As described above, the n calculation functions Z1(X, Y) to Zn(X, Y) provided by the calculation function providing unit 223 are calculation functions suitable for calculating n feature quantities x1 to xn with different consideration ranges, from the feature quantity x1, which considers only a narrow range in the vicinity of the evaluation point E(X, Y), to the feature quantity xn, which considers a wide range extending far from the evaluation point E(X, Y).
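The widening effect of the spread coefficient can be confirmed numerically: for a rectangle with Li = 0 and Ri = 4 (hypothetical values), an evaluation point at X = 8 receives almost no contribution when σ = 1 but a clearly non-zero one when σ = 4:

```python
import math

def fhi(X, Li, Ri, sigma):
    # Horizontal direction factor with spread coefficient sigma
    return math.erf((X - Li) / sigma) - math.erf((X - Ri) / sigma)

# Rectangle with Li=0, Ri=4; evaluation point at X=8, well outside the rectangle
near_sigma = fhi(8.0, 0.0, 4.0, 1.0)   # small sigma: almost no contribution
wide_sigma = fhi(8.0, 0.0, 4.0, 4.0)   # large sigma: the distant rectangle is still "visible"
```

With the small spread coefficient the error functions have already saturated at the evaluation point, so the distant rectangle is effectively invisible; the large spread coefficient widens the mountain-shaped graph so that the same rectangle still contributes.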
• In the embodiments described so far, the technique of extracting the feature quantities x1 to xn using the image pyramid PP composed of n kinds of hierarchical images P1 to Pn is adopted, whereas in the additional embodiment described here,
• a method of extracting the feature quantities x1 to xn using n types of calculation functions Z1(X, Y) to Zn(X, Y) with different values of the spread coefficient σ is employed.
• In either case, individual feature quantities indicating various features from the vicinity of each evaluation point to far away can be obtained, and it becomes possible to perform an accurate simulation that takes into account the influence of phenomena with different scales, such as the proximity effect and the etching loading phenomenon.
• To summarize, in an XY two-dimensional orthogonal coordinate system in which the X-axis positive direction is the right direction and the Y-axis positive direction is the upward direction, the rectangular aggregate replacement unit 221 shown in FIG. 32 replaces the figures included in the original figure pattern 10
• with a rectangular aggregate 50 of regular rectangles Fi each having an upper side T and a lower side B parallel to the X axis and a left side L and a right side R parallel to the Y axis.
• The feature amount calculation unit 222 performs an operation of calculating the feature quantity x based on the positional relationship with respect to the four sides of each rectangle Fi positioned around one evaluation point E; since each rectangle Fi is a regular rectangle, the computational burden of this operation can be kept small.
• The calculation function providing unit 223 may provide, for each regular rectangle Fi defined on the XY two-dimensional orthogonal coordinate system, a calculation function that calculates the feature quantity x based on the left side position deviation "X − Li" indicating the distance from the left side L and the right side position deviation "X − Ri" indicating the distance from the right side R in the X-axis direction of the evaluation point E,
• and on the upper side position deviation "Y − Ti" indicating the distance from the upper side T and the lower side position deviation "Y − Bi" indicating the distance from the lower side B in the Y-axis direction of the evaluation point E.
• Specifically, an X-axis monotonically increasing function whose function value monotonically increases as the variable value increases and becomes 0 when the X coordinate value Li of the left side L of the target rectangle Fi is given as a variable
• (for example, +erf[(X − Li)/σk]),
• and an X-axis monotonically decreasing function whose function value monotonically decreases as the variable value increases
• and becomes 0 when the X coordinate value Ri of the right side R of the target rectangle Fi is given as a variable
• (for example, −erf[(X − Ri)/σk]) are defined, and the horizontal direction function fhi(σk) is defined as their sum.
• Similarly, a Y-axis monotonically increasing function whose function value monotonically increases as the variable value increases and becomes 0 when the Y coordinate value Bi of the lower side B of the target rectangle Fi is given as a variable (for example, +erf[(Y − Bi)/σk]), and a Y-axis monotonically decreasing function whose function value monotonically decreases as the variable value increases and becomes 0 when the Y coordinate value Ti of the upper side T of the target rectangle Fi is given as a variable
• (for example, −erf[(Y − Ti)/σk]) are defined, and the vertical direction function fvi(σk) is defined as their sum.
• Then, the amount indicating the positional relationship of one target evaluation point E with respect to the target rectangle Fi is calculated based on the product of the function value of the horizontal direction function fhi(σk) with the X coordinate value X of the target evaluation point E as a variable and the function value of the vertical direction function fvi(σk) with the Y coordinate value Y of the target evaluation point E as a variable (if necessary, this product may be multiplied by a feature quantity calculation coefficient K for scaling),
• and the sum of such amounts over all the rectangles located around the target evaluation point E may be calculated as the feature quantity x for the target evaluation point E.
• In order to calculate n types of feature quantities x1 to xn indicating various features from the vicinity of each evaluation point E to far away,
• it is sufficient for the calculation function providing unit 223 to provide n types of calculation functions Z1(X, Y) to Zn(X, Y) that use monotonically increasing or monotonically decreasing functions of different gradients, so that the n feature quantities x1 to xn have different consideration ranges.
• Specifically, the calculation function providing unit 223 prepares calculation functions including a monotonically increasing function or a monotonically decreasing function whose variable is the left side position deviation "X − Li", the right side position deviation "X − Ri", the upper side position deviation "Y − Ti", or the lower side position deviation "Y − Bi"
• divided by a spread coefficient σ, and by changing the value of the spread coefficient σ into n values (σ1 to σn), the n calculation functions
• Z1(X, Y) to Zn(X, Y) may be provided.
• In practice, when the kth spread coefficient is expressed as σk using a parameter k in the range 1 ≤ k ≤ n, it is preferable to set σk = 2^(k−1).
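Under this preferred setting the spread coefficients form a doubling sequence σ1 = 1, σ2 = 2, σ3 = 4, ...; a minimal sketch:

```python
def spread_coefficients(n):
    """Spread coefficients sigma_k = 2**(k-1) for k = 1..n."""
    return [2 ** (k - 1) for k in range(1, n + 1)]

print(spread_coefficients(5))  # [1, 2, 4, 8, 16]
```

Each successive calculation function thus considers a range roughly twice as wide as the previous one.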
  • FIG. 43 (a) is a plan view of the rectangular aggregate 60 created based on the graphic included in the original graphic pattern 10 with the dose amount.
  • five regular rectangles F1d to F5d are defined on the XY two-dimensional orthogonal coordinate system.
  • the shapes of the regular rectangles F1d to F5d constituting the rectangular aggregate 60 are exactly the same as the shapes of the regular rectangles F1 to F5 constituting the rectangular aggregate 50 shown in FIG. 35 (a).
• However, a predetermined dose amount is defined for each of the regular rectangles F1d to F5d constituting the rectangular aggregate 60; these dose amounts are information added to the figures originally included in the original figure pattern 10.
  • the original figure pattern 10 with a dose amount includes information on a dose amount for each figure in the lithography process in addition to information on a contour line indicating the boundary between the inside and the outside of the figure. Therefore, the rectangular aggregate replacement unit 221 recognizes the internal area and the external area of each graphic based on the original graphic pattern 10, and further recognizes the dose amount for each graphic, and corresponds to each graphic. A process for setting a dose amount may be performed for each of the rectangles F1d to F5d.
  • the rectangular aggregate 60 shown in FIG. 43 (a) is created by such processing, and the information on the dose amount of the original figure is added as it is to the individual rectangles F1d to F5d.
• In this case, the calculation function providing unit 223 may provide a calculation function that includes, as a variable, the dose amount set for each of the rectangles F1d to F5d, and the feature amount calculation unit 222 may calculate the feature quantity x of the desired evaluation point E(X, Y) based on this calculation function including the dose amount as a variable.
  • the calculation function Zk (X, Y) shown here is the kth (1 ⁇ k ⁇ n) calculation function, and the function value calculated by this calculation function is output as the kth feature amount xk.
• In this calculation function, in addition to the feature quantity calculation coefficient K, each term is further multiplied by the dose amount Di for the i-th rectangle.
  • the feature amount calculation coefficient K is a common constant for scaling, whereas the dose amount Di is an individual value set for each rectangle.
  • a specific arithmetic expression of the calculation function Zk (X, Y) is as shown in FIG. 43 (c).
  • the expression for the i-th rectangle includes an operation of multiplying the dose amount Di, and a feature amount xk in consideration of the dose amount of each rectangle is obtained.
• In the embodiments described so far, the process of extracting the feature quantity xk in consideration of the dose amount has been performed by creating the image pyramid PP using the dose density map M3 shown in FIG. ;
• in the additional embodiment described here, the feature quantity xk taking the dose amount into consideration is instead calculated by an operation based on a calculation function including the dose amount as a variable.
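A dose-weighted variant of the calculation function, in which each rectangle's term K·fhi·fvi is additionally multiplied by its dose amount Di, can be sketched as follows (the rectangles and dose values are hypothetical):

```python
import math

def feature_with_dose(X, Y, rects, sigma_k, K=0.25):
    """xk = K * sum_i Di * fhi(sigma_k) * fvi(sigma_k), where each rectangle
    carries its own dose amount: rects holds (Li, Bi, Ri, Ti, Di) tuples."""
    total = 0.0
    for Li, Bi, Ri, Ti, Di in rects:
        fh = math.erf((X - Li) / sigma_k) - math.erf((X - Ri) / sigma_k)
        fv = math.erf((Y - Bi) / sigma_k) - math.erf((Y - Ti) / sigma_k)
        total += Di * fh * fv
    return K * total

# Two hypothetical rectangles with dose amounts 1.0 and 1.5
rects = [(0.0, 0.0, 4.0, 2.0, 1.0), (10.0, 0.0, 14.0, 2.0, 1.5)]
x_k = feature_with_dose(2.0, 1.0, rects, sigma_k=1.0)
```

As a sanity check, doubling every dose amount exactly doubles the resulting feature quantity, since Di enters each term linearly.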
• In the embodiments described so far, a process of extracting the feature quantity x by creating an image pyramid PP using an edge length density map M2 as shown in FIG.
• has also been described.
  • the edge length density map M2 is information that focuses on the arrangement of the outlines (edges) of the graphic included in the original graphic pattern 10, not the information on the area inside / outside the graphic.
• In the example described above, the rectangular aggregate replacement unit 221 performs a process of dividing the area of each figure included in the original figure pattern 10 and replacing it with the rectangular aggregate 50.
• For example, when the original figure pattern 10 includes a figure as shown in FIG. 44(a),
• the figure is divided into two parts and replaced with a rectangular aggregate consisting of the two rectangles F1 and F2 shown by hatching in FIG. 44(b).
  • Such a replacement method is based on the idea of handling graphics as area information.
  • the rectangular aggregate replacement unit 221 recognizes the unit line segments that form the outline of each graphic based on the original graphic pattern 10, and sets a minute width for each unit line segment. Then, a process of replacing the graphic included in the original graphic pattern 10 with a rectangular aggregate having the minute width is performed.
  • Such a replacement method is a method based on the idea of handling a graphic as contour line information.
  • FIG. 45 is a plan view showing a process of replacing the rectangular aggregate by setting a minute width to the unit line segment constituting the outline of the figure by the rectangular aggregate replacing unit 221.
• In FIGS. 45(a) and (b), the original figure shown in FIG. 44(a) is indicated by broken lines.
• This figure is composed of six sides. In the modification described here, these six sides are each replaced with a rectangle having a minute width; as a result, the figure is replaced with an aggregate of six elongated rectangles.
• Specifically, the rectangular aggregate replacement unit 221 replaces each horizontal unit line segment with a horizontal rectangle by giving it a minute width in the vertical direction, and replaces each vertical unit line segment with a vertical rectangle by giving it a minute width in the horizontal direction.
• FIG. 45(a) is a plan view showing a state in which three horizontal unit line segments are replaced with horizontal rectangles Fh1, Fh2, and Fh3 (the rectangles forming the hatched areas), respectively,
• and FIG. 45(b) is a plan view showing a state in which three vertical unit line segments are replaced with vertical rectangles Fv1, Fv2, and Fv3 (the rectangles forming the hatched areas), respectively.
• Data indicating the horizontal rectangles Fh1, Fh2, Fh3 and the vertical rectangles Fv1, Fv2, Fv3 can be created based on the data constituting the original figure pattern 10.
• For example, for the horizontal rectangle Fh1, the X coordinate value of its left side can be defined as the X coordinate value of the left end of the horizontal unit line segment constituting the upper side of the original figure indicated by the broken line,
• and the X coordinate value of its right side can be defined as the X coordinate value of the right end of this horizontal unit line segment.
• The Y coordinate value of the upper side can be defined as T1 + w, where T1 is the Y coordinate value of this horizontal unit line segment, and the Y coordinate value of the lower side,
• with the same Y coordinate value denoted B1, can be defined as B1 − w.
• Here, w is a value corresponding to half of the minute width, and may be set to an arbitrary numerical value.
• Similarly, for the vertical rectangle Fv1, the Y coordinate value of its upper side can be defined as the Y coordinate value of the upper end of the vertical unit line segment constituting the left side of the original figure indicated by the broken line,
• and the Y coordinate value of its lower side can be defined as the Y coordinate value of the lower end of this vertical unit line segment.
• The X coordinate value of the right side can be defined as R1 + w, where R1 is the X coordinate value of this vertical unit line segment, and the X coordinate value of the left side,
• with the same X coordinate value denoted L1, can be defined as L1 − w.
• Again, w is a value corresponding to half of the minute width, and may be set to an arbitrary numerical value.
  • the figure having the six sides shown in FIG. 44 (a) ⁇ has six regular rectangles Fh1, Fh2, Fh3 shown in FIGS. 45 (a) and 45 (b) with hatching. It is replaced with Fv1, Fv2, and Fv3.
  • These regular rectangles are elongate rectangles having a minute width 2w, and are arranged along the unit line segments constituting the contour line of the original figure.
  • the feature amount calculation unit 222 may calculate the feature amount x for a certain evaluation point E based on the positional relationship with these six rectangles.
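As a concrete illustration of the data created here, the following sketch (illustrative only; the helper names and the (L, R, B, T) tuple layout are assumptions, not the patent's) derives rectangle coordinates from a unit line segment and the half-width w:

```python
# Illustrative sketch: building the rectangle aggregate of FIG. 45
# from unit line segments. Names and tuple layout are hypothetical.

def horizontal_rect(x_left, x_right, y_seg, w):
    """Replace a horizontal unit line segment at Y = y_seg by a rectangle
    of minute width 2w: the left/right sides follow the segment ends, the
    upper side is y_seg + w and the lower side is y_seg - w."""
    return (x_left, x_right, y_seg - w, y_seg + w)  # (L, R, B, T)

def vertical_rect(y_bottom, y_top, x_seg, w):
    """Replace a vertical unit line segment at X = x_seg by a rectangle
    of minute width 2w."""
    return (x_seg - w, x_seg + w, y_bottom, y_top)  # (L, R, B, T)

# Example: a horizontal segment from (0, 10) to (5, 10) with w = 0.5
print(horizontal_rect(0.0, 5.0, 10.0, 0.5))  # (0.0, 5.0, 9.5, 10.5)
```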
FIG. 46 is a diagram illustrating an example of a calculation function Zk(X, Y) applied to the rectangle aggregate illustrated in FIG. 45 (the aggregate comprising the six hatched rectangles). This calculation function Zk(X, Y) is a function for extracting the k-th (1 ≤ k ≤ n) feature quantity xk. As shown in the upper part of FIG. 46, it is built from the following terms:

  fhi(σk) = erf[(X - Li)/σk] - erf[(X - Ri)/σk]
  fvi(σk) = erf[(Y - Bi)/σk] - erf[(Y - Ti)/σk]
  fhi′(σk) = erf[(X - (Li - w))/σk] - erf[(X - (Ri + w))/σk]
  fvi′(σk) = erf[(Y - (Bi - w))/σk] - erf[(Y - (Ti + w))/σk]

Here, Li, Ri, Bi, and Ti are coordinate values indicating the end point positions or the line segment position of the i-th horizontal unit line segment or the i-th vertical unit line segment, as shown in FIG. 45. For the horizontal unit line segment indicated by the broken line, the X coordinate value of its left end is L1, the X coordinate value of its right end is R1, and the Y coordinate value of the line segment itself gives T1 and B1. For the vertical unit line segment indicated by the broken line, the Y coordinate value of its upper end is T1, the Y coordinate value of its lower end is B1, and the X coordinate value of the line segment itself gives L1 and R1.

The parameter k indicates the number of the calculation function, and the k-th spread coefficient σk is used for the k-th calculation function Zk(X, Y). The coefficient K is a feature amount calculation coefficient for scaling, as described above. In view of the contents of §5.2, it can be easily understood that the feature quantity xk for a specific evaluation point E(X, Y) can be calculated by such a calculation function Zk(X, Y).
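The way these terms are combined into Zk(X, Y) is shown only in FIG. 46, so the following is a hedged sketch under the assumption that each horizontal rectangle contributes fhi·fvi′, each vertical rectangle contributes fvi·fhi′, and the sum is scaled by the coefficient K (all function and parameter names here are illustrative):

```python
from math import erf

def f(a, b, c, sigma):
    # erf[(c - a)/sigma] - erf[(c - b)/sigma]: the "peak-shaped" factor
    # that is near its maximum while c lies between a and b.
    return erf((c - a) / sigma) - erf((c - b) / sigma)

def Zk(X, Y, h_segs, v_segs, sigma_k, K=1.0, w=0.5):
    """Hypothetical assembly of the k-th calculation function.
    h_segs: list of (Li, Ri, Ti) for horizontal unit line segments (Ti = Bi).
    v_segs: list of (Bi, Ti, Li) for vertical unit line segments (Li = Ri)."""
    total = 0.0
    for Li, Ri, Ti in h_segs:
        fhi = f(Li, Ri, X, sigma_k)            # along the segment
        fvi_p = f(Ti - w, Ti + w, Y, sigma_k)  # across the minute width 2w
        total += fhi * fvi_p
    for Bi, Ti, Li in v_segs:
        fvi = f(Bi, Ti, Y, sigma_k)
        fhi_p = f(Li - w, Li + w, X, sigma_k)
        total += fvi * fhi_p
    return K * total
```

An evaluation point lying on a rectangle yields a large value, while a point far from all rectangles yields a value near zero, which is the qualitative behavior the text relies on.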
When the feature quantity calculation unit 222 calculates the feature quantity x for an evaluation point E, it suffices to define a reference circle C having a predetermined radius r around the evaluation point E and to perform the calculation considering only the rectangles belonging to a predetermined neighborhood range corresponding to this reference circle C. FIG. 47 is a plan view showing an example in which such a reference circle C is defined on the rectangle aggregate in order to make the calculation operation efficient. In the illustrated example, the rectangle aggregate includes a total of 12 rectangles F1 to F12. Suppose that the k-th feature amount xk for the evaluation point E set on the right side of the rectangle F6 is calculated using the k-th calculation function Zk(X, Y). In principle, all 12 rectangles F1 to F12 are subject to calculation. In practice, however, a reference circle C having the predetermined radius r is defined around the evaluation point E as shown in the figure, and the calculation considers only the rectangles belonging to the neighborhood range corresponding to this reference circle C.

In the illustrated example, the rectangles to be calculated are selected based on the criterion that "a rectangle at least a part of which is included in the reference circle C" is a calculation target. The seven rectangles F2 and F5 to F10 indicated by hatching in the figure are then selected, and the calculation function Zk(X, Y) is evaluated only for these seven rectangles. Of course, other selection criteria may be used. For example, with the criterion "a rectangle that is entirely contained in the reference circle C", only the four rectangles F5, F6, F9, and F10 are selected as calculation targets; with the criterion "a rectangle whose center of gravity G is included in or on the circumference of the reference circle C", the six rectangles F5 to F10 are selected. A rectangle that contributes to the value of the calculation function Zk(X, Y) may thus be excluded from the selection, but since the contribution of such an excluded rectangle is not large, no significant problem arises even if it is left out of the calculation. By adopting such a method, the feature amount calculation by the feature amount calculation unit 222 can be made efficient, and the calculation burden can be reduced.

The radius r of the reference circle C used to determine whether or not each rectangle is a calculation target is preferably determined according to the width (the width of the skirt) of the peak-shaped graphs of the horizontal function fhi and the vertical function fvi included in the calculation function Zk(X, Y). As described above, since the width of such a graph is determined by the spread coefficient σk, it is preferable to set the radius r of the reference circle C larger as the spread coefficient σk is larger. Practically, it is preferable to set the radius r in the range 5σk ≤ r ≤ 10σk, that is, to a value about 5 to 10 times the spread coefficient σk.
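The selection criterion "a rectangle at least a part of which is included in the reference circle C" can be sketched with a standard circle-versus-axis-aligned-rectangle test (illustrative code; the patent does not prescribe an implementation, and the 5σk radius below is just one point of the recommended 5 to 10 range):

```python
def intersects_circle(rect, cx, cy, r):
    """True if at least part of the axis-aligned rectangle (L, R, B, T)
    lies inside the reference circle of radius r centred at (cx, cy)."""
    L, R, B, T = rect
    nx = min(max(cx, L), R)  # closest point of the rectangle
    ny = min(max(cy, B), T)  # to the circle centre
    return (nx - cx) ** 2 + (ny - cy) ** 2 <= r ** 2

def select_rects(rects, cx, cy, sigma_k, factor=5.0):
    # Radius r set to about 5 to 10 times the spread coefficient sigma_k,
    # as recommended in the text.
    r = factor * sigma_k
    return [rc for rc in rects if intersects_circle(rc, cx, cy, r)]
```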
FIG. 48 is a diagram showing a verification result obtained by performing the feature amount extraction process for a predetermined number of evaluation points E using an original figure pattern 10 comprising a Line & Space pattern. FIG. 48(a) is a plan view showing the graphic configuration of the original figure pattern 10 actually used for the verification. This original figure pattern 10 is a pattern of many parallel lines, generally called a "Line & Space" pattern. More specifically, linear rectangles having a width of 100 nm and a length of 65 μm are arranged in the horizontal direction with an interval of 100 nm, and the entire pattern is formed in a 65 μm square area.

FIG. 48(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when the feature amount extraction processing for the predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 48(a). The bar graph labeled "density map" in the figure shows the processing time according to the basic embodiment (the processing time of the feature quantity extraction unit 120 shown in FIG. 1), and the bar graph labeled "function calculation" shows the processing time according to the additional embodiment (the processing time of the feature quantity extraction unit 220 shown in FIG. 32). Each bar graph is divided into a plurality of sections, and each section is marked with a circled number. These circled numbers correspond to the individual processes described in the right column, and each section of the bar graph indicates the processing time required for the corresponding process. For example, in the "density map" bar graph, the section marked with circled number 5 indicates the time required for the image pyramid creation process, and the section marked with circled number 6 indicates the time required for the area density map M1 creation process. The section marked with circled number 3 in the "function calculation" bar graph indicates the time required for the calculation processing of the calculation function Zk(X, Y) described above.
FIG. 49 is a diagram showing a verification result obtained by performing the feature amount extraction process for a predetermined number of evaluation points E using an original figure pattern 10 comprising an Array Hole pattern. FIG. 49(a) is a plan view showing the graphic configuration of the original figure pattern 10 actually used for the verification. This original figure pattern 10 is a pattern in which a large number of squares, generally called an Array Hole pattern, are arranged in a matrix. More specifically, squares measuring 100 nm on a side are arranged vertically and horizontally with an interval of 100 nm, and the entire pattern is formed in a 65 μm square area.

FIG. 49(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when the feature amount extraction processing for the predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 49(a). Again, the bar graph labeled "density map" represents the processing time according to the basic embodiment, and the bar graph labeled "function calculation" represents the processing time according to the additional embodiment; as in FIG. 48(b), each section constituting these bar graphs indicates an individual process.

In the case of this Array Hole pattern, the overall processing time is shorter for the "density map" than for the "function calculation", because the calculation processing of the calculation function Zk(X, Y) takes a long time. For such a pattern, the number of rectangles created by the rectangle aggregate replacing unit 221 is enormous, and the calculation processing load of the calculation function Zk(X, Y) inevitably increases. Therefore, for a pattern in which the number of rectangles created by the rectangle aggregate replacing unit 221 is large, such as this Array Hole pattern, it turns out to be more efficient, as far as processing time is concerned, to use the basic embodiment rather than the additional embodiment.
FIG. 50, shown at the end, is a diagram showing a verification result obtained by performing the feature amount extraction process for a predetermined number of evaluation points E using an original figure pattern 10 comprising an ISO-Space pattern. FIG. 50(a) is a plan view showing the graphic configuration of the original figure pattern 10 actually used for the verification. This original figure pattern 10 is a single elongated linear pattern, generally called an ISO-Space pattern. More specifically, it is a very simple pattern composed of one linear rectangle elongated in the vertical direction, with a width of 100 nm and a length of 65 μm, and the entire pattern is formed in a 65 μm square area.

FIG. 50(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when the feature amount extraction processing for the predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 50(a). The bar graph labeled "density map" represents the processing time according to the basic embodiment, and the bar graph labeled "function calculation" represents the processing time according to the additional embodiment; as in FIG. 48(b), each section constituting these bar graphs indicates an individual process.

In the case of this ISO-Space pattern, the overall processing time is much shorter for the "function calculation" than for the "density map". This is because, in the basic embodiment, the density map creation process and the image pyramid creation process take a long time, whereas in the additional embodiment the calculation processing of the calculation function Zk(X, Y) takes an extremely short time. For such a pattern, the number of rectangles created by the rectangle aggregate replacing unit 221 is relatively small, so the calculation processing load of the calculation function Zk(X, Y) is inevitably reduced. Therefore, for a pattern in which the number of rectangles created by the rectangle aggregate replacing unit 221 is small, such as this ISO-Space pattern, the additional embodiment can be seen to be more efficient than the basic embodiment as far as processing time is concerned.
As described above, the basic embodiment described in §1 to §4 and the additional embodiment described in §5 differ in the processing time required for feature amount extraction depending on the features of the original figure pattern 10 to be handled. In practice, therefore, it is preferable to perform the feature amount extraction processing more efficiently by using the basic embodiment and the additional embodiment selectively according to the type of original figure pattern 10 to be handled.

The figure pattern shape estimation apparatus according to the present invention can be widely used, in fields where a specific material layer needs to be finely patterned, such as the semiconductor device manufacturing process, as a technique for estimating the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern.


Abstract

According to the present invention, the shape of a real figure pattern formed on a real substrate is estimated by simulating a lithography process. An evaluation point (E) is set on a contour of an original figure pattern (10) included in an original image (Q1). A filtering process (convolution) and a reduction process (pooling) for contracting the image are alternately performed on the original image (Q1), and an image pyramid (PP) is created which is formed from n layered images (P1 to Pn) having different image sizes. For each of the layered images (P1 to Pn), feature amounts (x1 to xn) are extracted on the basis of the pixel values of pixels around the evaluation point (E), and the extracted feature amounts are provided to an estimation calculation unit (132). A neural network in the estimation calculation unit (132) performs calculations for an intermediate layer by using the feature amounts (x1 to xn) as values of an input layer, and outputs, as a value of an output layer, an estimated value (y) of a process bias that indicates the amount of deviation between the position of the evaluation point (E) on the original figure pattern (10) and its position on the real figure pattern.

Description

Shape estimation device for figure pattern

The present invention relates to a figure pattern shape estimation apparatus, and more particularly to an apparatus for estimating the shape of an actual figure pattern formed on a substrate through a lithography process.

In fields that require fine patterning of a specific material layer, such as the semiconductor device manufacturing process, a fine pattern is formed on a physical substrate through a lithography process that involves drawing using light or an electron beam. Usually, a fine mask pattern is designed using a computer, the resist layer formed on the substrate is exposed based on the obtained mask pattern data, and after development, etching is performed using the remaining resist layer as a mask to form the fine pattern on the substrate.

However, there is a slight discrepancy between the actual figure pattern actually obtained on the substrate through such a lithography process and the original figure pattern designed on the computer. This is because the lithography process includes exposure, development, and etching steps, so that the actual figure pattern finally formed on the substrate does not exactly match the original figure pattern used in the exposure step. In particular, in the exposure step, the resist layer is drawn with light or an electron beam; at that time, owing to the proximity effect (PE), the exposure area actually drawn on the resist layer is known to become slightly wider than the original figure pattern.

Also, in the etching step, an etching loading phenomenon occurs, so that the pattern after development and the pattern after etching differ in shape. The magnitude of the effect of this etching loading phenomenon is known to vary depending on the area of the actual substrate surface exposed from the resist layer. The proximity effect in the drawing step and the loading phenomenon in the etching step are both phenomena that cause a difference between the shape of the original figure pattern and the shape of the actual figure pattern, but the influence range (scale size) differs for each phenomenon.

Under these circumstances, in a semiconductor device manufacturing process including a lithography process, after a desired original figure pattern is designed on a computer, a procedure is executed in which the lithography process using this original figure pattern is simulated on the computer to estimate the shape of the actual figure pattern that will be formed on the actual substrate. Then, based on the shape (dimensions) of the actual figure pattern obtained as a result of the simulation, the shape (dimensions) of the original figure pattern is corrected as necessary, and the actual lithography process is executed using the corrected figure pattern obtained by this correction to manufacture the actual semiconductor device.

Therefore, in order to manufacture a final product having a precise pattern as designed, it is necessary to perform the above simulation accurately and to estimate the shape of the actual figure pattern precisely. For example, Patent Document 1 below discloses a method of performing a highly accurate simulation using a neural network whose input layer consists of feature factors characterizing the layout of the original figure pattern and control factors affecting the dimensions of the resist pattern formed on the substrate by the lithography process. Patent Document 2 discloses a method of improving simulation accuracy by using two sets of neural networks, and Patent Document 3 discloses a method of improving simulation accuracy by setting appropriate extraction parameters when extracting feature quantities from a photomask pattern.

Patent Document 1: JP 2008-122929 A
Patent Document 2: JP 2010-044101 A
Patent Document 3: JP 2010-156866 A

In order to accurately estimate the shape of the actual figure pattern formed on the substrate through the lithography process, it is necessary to improve the accuracy of the simulation executed on the computer. For that purpose, it is indispensable to extract accurate feature quantities from the original figure pattern. However, the prior art described above does not necessarily extract accurate feature quantities, and therefore a sufficient simulation cannot be performed.

For example, in the method disclosed in Patent Document 1, the dimensions of each part of the original figure pattern, the pattern occupancy ratio, the number of patterns, and process conditions (exposure amount, illumination conditions, numerical aperture of the optical system, lens aberration, resist type, and so on) are extracted as feature quantities. However, a method using such feature quantities cannot always perform an accurate simulation; in particular, it is difficult to perform an accurate simulation that takes the influence of the proximity effect into account.

Accordingly, an object of the present invention is to provide a figure pattern shape estimation apparatus capable of extracting accurate feature quantities from the original figure pattern, performing an accurate simulation, and thereby accurately estimating the shape of the actual figure pattern formed on the actual substrate.

(1) A first aspect of the present invention is a figure pattern shape estimation apparatus that estimates the shape of an actual figure pattern formed on an actual substrate by simulating a lithography process using an original figure pattern, the apparatus comprising:
an evaluation point setting unit for setting an evaluation point on the original figure pattern;
a feature quantity extraction unit for extracting, from the original figure pattern, a feature quantity indicating features around the evaluation point; and
a bias estimation unit for estimating, based on the feature quantity, a process bias indicating the amount of deviation between the position of the evaluation point on the original figure pattern and its position on the actual figure pattern;
wherein the evaluation point setting unit sets the evaluation point at a predetermined position on a contour line, based on the original figure pattern including information on contour lines indicating the boundary between the inside and the outside of each figure;
the feature quantity extraction unit has:
an original image creation unit that creates, based on the original figure pattern, an original image composed of a set of pixels each having a predetermined pixel value;
an image pyramid creation unit that performs image pyramid creation processing, including reduction processing for reducing the original image to create reduced images, to create an image pyramid composed of a plurality of hierarchical images each having a different size; and
a feature quantity calculation unit that calculates, for each hierarchical image constituting the image pyramid, a feature quantity based on the pixel values of pixels corresponding to the position of the evaluation point;
and the bias estimation unit has:
a feature quantity input unit for inputting the feature quantities calculated for the evaluation point; and
an estimation calculation unit that obtains, based on learning information acquired in a learning stage performed in advance, an estimated value according to the feature quantities, and outputs the obtained value as an estimated value of the process bias for the evaluation point.

(2) According to a second aspect of the present invention, in the figure pattern shape estimation apparatus according to the first aspect described above, the original image creation unit superimposes the original figure pattern on a mesh composed of a two-dimensional array of pixels, and determines the pixel value of each individual pixel based on the relationship between the position of that pixel and the positions of the contour lines of the figures constituting the original figure pattern.

(3) According to a third aspect of the present invention, in the figure pattern shape estimation apparatus according to the second aspect described above, the original image creation unit recognizes the internal region and the external region of each figure based on the original figure pattern, and creates, as the original image, an area density map in which the occupancy ratio of the internal region within each pixel is taken as the pixel value of that pixel.
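A minimal sketch of such an area density map, assuming for simplicity that every figure is an axis-aligned rectangle given as (L, R, B, T) and that the mesh pixels are squares of side `pixel` (all names and the data layout are hypothetical, not the patent's):

```python
def area_density_map(figures, nx, ny, pixel=1.0):
    """Sketch of an area density map: the pixel value is the fraction of
    each pixel covered by figure interiors (overlaps summed, clipped to 1.0)."""
    grid = [[0.0] * nx for _ in range(ny)]
    for L, R, B, T in figures:
        for j in range(ny):
            for i in range(nx):
                x0, x1 = i * pixel, (i + 1) * pixel
                y0, y1 = j * pixel, (j + 1) * pixel
                # Overlap area between the figure and this pixel cell.
                ov = max(0.0, min(R, x1) - max(L, x0)) * \
                     max(0.0, min(T, y1) - max(B, y0))
                grid[j][i] = min(1.0, grid[j][i] + ov / (pixel * pixel))
    return grid

# A 1.5-unit-wide figure over a 2x1 mesh of unit pixels:
print(area_density_map([(0.0, 1.5, 0.0, 1.0)], 2, 1))  # [[1.0, 0.5]]
```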

(4) According to a fourth aspect of the present invention, in the figure pattern shape estimation apparatus according to the second aspect described above, the original image creation unit recognizes the contour line of each figure based on the original figure pattern, and creates, as the original image, an edge length density map in which the length of the contour line existing within each pixel is taken as the pixel value of that pixel.

(5) According to a fifth aspect of the present invention, in the figure pattern shape estimation apparatus according to the second aspect described above, the original image creation unit recognizes, based on an original figure pattern including information on contour lines indicating the boundary between the inside and the outside of each figure and information on the dose amount for each figure in the lithography process, the internal region and the external region of each figure, further recognizes the dose amount for each figure, obtains, for each figure present within each pixel, the product of the occupancy ratio of the internal region and the dose amount of that figure, and creates, as the original image, a dose density map in which the sum of these products is taken as the pixel value of that pixel.

(6) According to a sixth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to fifth aspects described above, the image pyramid creation unit has a function of performing filter processing using a predetermined image processing filter on the original image or a reduced image, and creates an image pyramid composed of a plurality of hierarchical images by alternately executing this filter processing and the reduction processing.

(7) According to a seventh aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixth aspect described above, the image pyramid creation unit takes the original image created by the original image creation unit as a first preparation image Q1, takes the image obtained by the filter processing on the k-th preparation image Qk (where k is a natural number) as the k-th hierarchical image Pk, takes the image obtained by the reduction processing on the k-th hierarchical image Pk as the (k+1)-th preparation image Q(k+1), and alternately executes the filter processing and the reduction processing until the n-th hierarchical image Pn is obtained, thereby creating an image pyramid composed of n hierarchical images including the first hierarchical image P1 to the n-th hierarchical image Pn.
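A toy sketch of this alternation of filter and reduction (a 3x3 box blur stands in here for the actual image processing filter, and 2x2 average pooling for the reduction processing; all names are illustrative):

```python
def blur3(img):
    """Placeholder filter: 3x3 box blur with edge clamping (a stand-in
    for the Gaussian filter mentioned in the tenth aspect)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            out[y][x] = s / 9.0
    return out

def avg_pool2(img):
    """2x2 average pooling: the reduction processing of the eleventh aspect."""
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def build_pyramid(Q1, n):
    """Alternate Pk = filter(Qk) and Q(k+1) = reduce(Pk), n times."""
    layers, Q = [], Q1
    for _ in range(n):
        P = blur3(Q)
        layers.append(P)
        Q = avg_pool2(P)
    return layers
```

Starting from an 8x8 original image with n = 3, this yields hierarchical images of sizes 8x8, 4x4, and 2x2, each half the linear size of the previous one.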

(8) According to an eighth aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixth aspect described above, the image pyramid creation unit takes the original image created by the original image creation unit as a first preparation image Q1, obtains a difference image Dk between the filtered image Pk obtained by the filter processing on the k-th preparation image Qk (where k is a natural number) and the k-th preparation image Qk, takes this difference image Dk as the k-th hierarchical image Dk, takes the image obtained by the reduction processing on the k-th filtered image Pk as the (k+1)-th preparation image Q(k+1), and alternately executes the filter processing and the reduction processing until the n-th hierarchical image Dn is obtained, thereby creating an image pyramid composed of n hierarchical images including the first hierarchical image D1 to the n-th hierarchical image Dn.
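A minimal self-contained sketch of this difference-image pyramid, using a toy 2-tap smoothing filter and a keep-every-second-pixel reduction purely for illustration (neither is the patent's actual filter or reduction, and all names are hypothetical):

```python
def smooth(img):
    """Toy stand-in for the filter processing: horizontal 2-tap average
    with edge clamping."""
    w = len(img[0])
    return [[(row[x] + row[min(x + 1, w - 1)]) / 2.0 for x in range(w)]
            for row in img]

def reduce2(img):
    """Toy reduction: keep every second pixel in both directions."""
    return [row[::2] for row in img[::2]]

def diff_pyramid(Q1, n):
    """Eighth-aspect sketch: Dk = Qk - Pk with Pk = smooth(Qk),
    then Q(k+1) = reduce2(Pk)."""
    diffs, Q = [], Q1
    for _ in range(n):
        P = smooth(Q)
        D = [[q - p for q, p in zip(qr, pr)] for qr, pr in zip(Q, P)]
        diffs.append(D)
        Q = reduce2(P)
    return diffs
```

Each Dk retains the detail removed by the filter at that scale, so the difference pyramid captures band-limited structure per level rather than the smoothed images themselves.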

 (9) According to a ninth aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixth aspect described above,
 the image pyramid creation unit
 takes the original image created by the original image creation unit as the first preparation image Q1, takes the image obtained by applying the filter process to the k-th preparation image Qk (where k is a natural number) as the k-th main hierarchical image Pk, and takes the image obtained by applying the reduction process to the k-th main hierarchical image Pk as the (k+1)-th preparation image Q(k+1), and, by alternately executing the filter process and the reduction process until the n-th main hierarchical image Pn is obtained, creates a main image pyramid composed of n hierarchical images, from the first main hierarchical image P1 to the n-th main hierarchical image Pn,
 and further obtains the difference image Dk between the k-th main hierarchical image Pk and the k-th preparation image Qk and adopts this difference image Dk as the k-th sub-hierarchical image Dk, thereby creating a sub-image pyramid composed of n hierarchical images, from the first sub-hierarchical image D1 to the n-th sub-hierarchical image Dn, and
 the feature amount calculation unit calculates, for each hierarchical image constituting the main image pyramid and the sub-image pyramid, a feature amount based on the pixel values of pixels corresponding to the position of the evaluation point.
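The pyramid construction of aspects (8) and (9) can be sketched as follows. This is a minimal illustration, assuming a 3x3 box blur as the filter process and 2x2 average pooling as the reduction process; the actual filter and reduction method are design choices of the apparatus (see aspects (10) through (12)):

```python
def box_filter(img):
    # 3x3 box blur with edge replication (stand-in for the filter process)
    h, w = len(img), len(img[0])
    clamp = lambda v, hi: max(0, min(hi - 1, v))
    return [[sum(img[clamp(y + dy, h)][clamp(x + dx, w)]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
             for x in range(w)] for y in range(h)]

def reduce_2x2(img):
    # 2x2 average pooling (the reduction process): halves each dimension
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_pyramids(original, n):
    # Alternate filter and reduction, keeping Pk (main) and Dk = Pk - Qk (sub)
    main, sub = [], []
    q = [row[:] for row in original]       # first preparation image Q1
    for _ in range(n):
        p = box_filter(q)                  # k-th main hierarchical image Pk
        main.append(p)
        sub.append([[pv - qv for pv, qv in zip(pr, qr)]
                    for pr, qr in zip(p, q)])  # k-th sub-hierarchical image Dk
        q = reduce_2x2(p)                  # (k+1)-th preparation image Q(k+1)
    return main, sub
```

Each difference image Dk retains exactly the detail removed by the filter at scale k, so the main and sub pyramids together cover features from fine to coarse.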

 (10) According to a tenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixth to ninth aspects described above,
 the image pyramid creation unit creates the image pyramid by executing the filter process as a convolution operation using a Gaussian filter or a Laplacian filter as the image processing filter.

 (11) According to an eleventh aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to tenth aspects described above,
 the image pyramid creation unit creates each reduced image by executing, as the reduction process, an average pooling process that replaces m adjacent pixels with a single pixel whose pixel value is the average of the pixel values of those m adjacent pixels.

 (12) According to a twelfth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to tenth aspects described above,
 the image pyramid creation unit creates each reduced image by executing, as the reduction process, a max pooling process that replaces m adjacent pixels with a single pixel whose pixel value is the maximum of the pixel values of those m adjacent pixels.
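The two reduction processes of aspects (11) and (12) can be shown side by side. In this sketch the m adjacent pixels are assumed to form a square m x m block (a common arrangement; the aspects themselves only require m adjacent pixels):

```python
def pool(img, m, mode="average"):
    # Replace each m x m block of adjacent pixels with a single pixel whose
    # value is the block average (aspect 11) or the block maximum (aspect 12).
    h, w = len(img) // m, len(img[0]) // m
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = [img[y * m + i][x * m + j]
                     for i in range(m) for j in range(m)]
            row.append(sum(block) / len(block) if mode == "average"
                       else max(block))
        out.append(row)
    return out
```

Average pooling preserves the mean pattern density within each block, while max pooling preserves the presence of any figure pixel in the block; which is preferable depends on what the downstream feature should represent.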

 (13) According to a thirteenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to twelfth aspects described above,
 the original image creation unit performs original image creation processes based on a plurality of mutually different algorithms to create a plurality of original images,
 the image pyramid creation unit performs an image pyramid creation process on each of the plurality of original images to create a plurality of image pyramids, and
 the feature amount calculation unit calculates, for each hierarchical image constituting each of the plurality of image pyramids, a feature amount based on the pixel values of pixels corresponding to the position of the evaluation point.

 (14) According to a fourteenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to thirteenth aspects described above,
 the image pyramid creation unit performs, on a single original image, image pyramid creation processes based on a plurality of mutually different algorithms to create a plurality of image pyramids, and
 the feature amount calculation unit calculates, for each hierarchical image constituting each of the plurality of image pyramids, a feature amount based on the pixel values of pixels corresponding to the position of the evaluation point.

 (15) According to a fifteenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to fourteenth aspects described above,
 when calculating the feature amount for a specific evaluation point on a specific hierarchical image, the feature amount calculation unit extracts, from the pixels constituting that hierarchical image, a total of j pixels closest to the evaluation point as pixels of interest, and computes a weighted average of the pixel values of the extracted j pixels of interest, with weights determined by the distance between the evaluation point and each pixel of interest.

 (16) According to a sixteenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the first to fifteenth aspects described above,
 the estimation calculation unit has a neural network whose input layer receives the feature amounts supplied by the feature amount input unit and whose output layer produces the estimated value of the process bias.
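A minimal forward pass of such a network can be sketched as follows. This is only an illustration with one hidden layer and a ReLU activation; the network topology, activation functions, and weights are not specified by this aspect, and in the apparatus the weights would come from the learning stage described in aspect (17):

```python
def mlp_forward(features, w1, b1, w2, b2):
    # Fully connected network: the input layer takes the feature amounts,
    # the single output neuron gives the estimated process bias.
    # w1/b1: hidden-layer weights and biases, w2/b2: output weights and bias.
    hidden = [max(0.0, sum(w * f for w, f in zip(ws, features)) + b)
              for ws, b in zip(w1, b1)]                  # ReLU hidden layer
    return sum(w * h for w, h in zip(w2, hidden)) + b2   # linear output
```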

 (17) According to a seventeenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixteenth aspect described above,
 the neural network included in the estimation calculation unit performs the process bias estimation using, as learning information, parameters obtained in a learning stage that uses dimension values obtained by actually measuring the dimensions of real figure patterns formed on a real substrate by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure.

 (18) According to an eighteenth aspect of the present invention, in the figure pattern shape estimation apparatus according to the sixteenth or seventeenth aspect described above,
 the estimation calculation unit obtains, as the estimated value of the process bias for an evaluation point located on the contour line of a given figure, an estimated value of the displacement of the evaluation point in the direction normal to the contour line.

 (19) According to a nineteenth aspect of the present invention, in order to configure a figure pattern shape correction apparatus that corrects the shape of an original figure pattern by using the figure pattern shape estimation apparatus according to the first to eighteenth aspects described above,
 in addition to the evaluation point setting unit, the feature amount extraction unit and the bias estimation unit constituting the figure pattern shape estimation apparatus,
 a pattern correction unit is further provided that corrects the original figure pattern based on the estimated value of the process bias output from the bias estimation unit, and
 the corrected figure pattern obtained by the correction of the pattern correction unit is given to the figure pattern shape estimation apparatus as a new original figure pattern, so that correction of the figure pattern is executed repeatedly.
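The feedback loop of aspect (19) amounts to repeatedly estimating the bias and correcting the pattern against it until the residual bias is small. A schematic sketch, where estimate_bias and apply_correction are hypothetical callables standing in for the bias estimation unit and the pattern correction unit, and the stopping rule (round limit plus bias tolerance) is one plausible choice rather than something the aspect prescribes:

```python
def iterative_correction(pattern, estimate_bias, apply_correction,
                         max_rounds=10, tolerance=0.1):
    # Feed the corrected pattern back in as a new original pattern until the
    # largest estimated process bias falls below the tolerance.
    for _ in range(max_rounds):
        biases = estimate_bias(pattern)      # bias per evaluation point
        if max(abs(b) for b in biases) < tolerance:
            break
        pattern = apply_correction(pattern, biases)
    return pattern
```

Because each correction changes the pattern's surroundings, and hence the biases at neighboring evaluation points, a single correction pass is generally not enough; iterating lets the loop converge toward a pattern whose predicted deformation matches the target shape.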

 (20) According to a twentieth aspect of the present invention, the figure pattern shape estimation apparatus according to the first to eighteenth aspects described above, or the figure pattern shape correction apparatus according to the nineteenth aspect described above, is realized by installing a predetermined program in a computer.

 (21) According to a twenty-first aspect of the present invention, in a figure pattern shape estimation method for estimating the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern,
 a computer performs:
 an original figure pattern input stage of inputting an original figure pattern including information on contour lines indicating the boundary between the inside and the outside of each figure;
 an evaluation point setting stage of setting evaluation points at predetermined positions on the contour lines;
 a feature amount extraction stage of extracting, for the original figure pattern, feature amounts indicating features around each evaluation point; and
 a process bias estimation stage of estimating, based on the feature amounts, a process bias indicating the amount of deviation between the position of the evaluation point on the original figure pattern and its position on the real figure pattern.
 In the feature amount extraction stage, the computer performs:
 an original image creation stage of creating, based on the original figure pattern, an original image composed of a collection of pixels each having a predetermined pixel value;
 an image pyramid creation stage of performing an image pyramid creation process, including a reduction process that reduces the original image to create reduced images, to create an image pyramid composed of a plurality of hierarchical images of mutually different sizes; and
 a feature amount calculation stage of calculating, for each hierarchical image constituting the image pyramid, a feature amount based on the pixel values of pixels corresponding to the position of the evaluation point.
 In the process bias estimation stage, the computer performs an estimation operation stage of obtaining an estimated value corresponding to the feature amounts based on learning information obtained in a learning stage performed in advance, and outputting the obtained estimated value as the estimated value of the process bias for the evaluation point.

 (22) According to a twenty-second aspect of the present invention, in the figure pattern shape estimation method according to the twenty-first aspect described above,
 in the image pyramid creation stage, a filter process stage of applying a predetermined image processing filter to the original image or to a reduced image, and a reduction process stage of applying the reduction process to the filtered image, are executed alternately, thereby creating an image pyramid composed of a plurality of hierarchical images.

 (23) According to a twenty-third aspect of the present invention, in the figure pattern shape estimation method according to the twenty-second aspect described above,
 in the image pyramid creation stage, an image pyramid is created whose hierarchical images are the filtered images, or the difference images between the images after and before the filter process.

 (24) According to a twenty-fourth aspect of the present invention, a figure pattern shape estimation apparatus that estimates the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern comprises:
 an evaluation point setting unit that sets evaluation points on the original figure pattern;
 a feature amount extraction unit that extracts, for the original figure pattern, feature amounts indicating features around each evaluation point; and
 a bias estimation unit that estimates, based on the feature amounts, a process bias indicating the amount of deviation between the position of the evaluation point on the original figure pattern and its position on the real figure pattern.
 The evaluation point setting unit sets evaluation points at predetermined positions on the contour lines, based on an original figure pattern including information on contour lines indicating the boundary between the inside and the outside of each figure.
 The feature amount extraction unit has:
 a rectangle aggregate replacement unit that replaces the figures included in the original figure pattern with aggregates of rectangles;
 a calculation function providing unit that provides, for a given evaluation point, a calculation function for calculating a feature amount based on the positional relationship of the evaluation point to the rectangles located around it; and
 a feature amount calculation unit that calculates, using the calculation functions provided by the calculation function providing unit, a feature amount for each evaluation point set by the evaluation point setting unit.
 The bias estimation unit has:
 a feature amount input unit that inputs the feature amounts calculated for an evaluation point; and
 an estimation calculation unit that obtains an estimated value corresponding to the feature amounts based on learning information obtained in a learning stage performed in advance, and outputs the obtained estimated value as the estimated value of the process bias for the evaluation point.

 (25) According to a twenty-fifth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fourth aspect described above,
 the calculation function providing unit provides n calculation functions for calculating n feature amounts with different ranges of consideration, from a feature amount that considers a narrow range near the evaluation point to a feature amount that considers a wide range extending far from the evaluation point, and
 the feature amount calculation unit calculates, using these n calculation functions, n feature amounts for each evaluation point.

 (26) According to a twenty-sixth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fifth aspect described above,
 the calculation function providing unit provides a calculation function that calculates, for a given evaluation point, a feature amount based on the positional relationship of the evaluation point to the four sides of each rectangle located around it.

 (27) According to a twenty-seventh aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-sixth aspect described above,
 the rectangle aggregate replacement unit replaces the figures included in the original figure pattern, in an XY two-dimensional orthogonal coordinate system whose positive X-axis points rightward and whose positive Y-axis points upward, with aggregates of rectangles each having an upper side and a lower side parallel to the X-axis and a left side and a right side parallel to the Y-axis, and
 the calculation function providing unit provides a calculation function that calculates the feature amount based on a left-side position deviation indicating the separation of the evaluation point from the left side and a right-side position deviation indicating its separation from the right side, both in the X-axis direction, and an upper-side position deviation indicating the separation of the evaluation point from the upper side and a lower-side position deviation indicating its separation from the lower side, both in the Y-axis direction, in the XY two-dimensional orthogonal coordinate system.

 (28) According to a twenty-eighth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-seventh aspect described above,
 the calculation function providing unit defines, for a given rectangle of interest:
 a horizontal function that is the sum of an X-axis monotonically increasing function, whose value increases monotonically with its variable and becomes 0 when the X coordinate of the left side of the rectangle of interest is given as the variable, and an X-axis monotonically decreasing function, whose value decreases monotonically with its variable and becomes 0 when the X coordinate of the right side of the rectangle of interest is given as the variable; and
 a vertical function that is the sum of a Y-axis monotonically increasing function, whose value increases monotonically with its variable and becomes 0 when the Y coordinate of the lower side of the rectangle of interest is given as the variable, and a Y-axis monotonically decreasing function, whose value decreases monotonically with its variable and becomes 0 when the Y coordinate of the upper side of the rectangle of interest is given as the variable.
 A quantity indicating the positional relationship of a given evaluation point of interest to the rectangle of interest is calculated based on the product of the value of the horizontal function with the X coordinate of the evaluation point of interest as its variable and the value of the vertical function with the Y coordinate of the evaluation point of interest as its variable, and the calculation function provided takes, as the feature amount for the evaluation point of interest, the sum of such quantities over the rectangles located around the evaluation point of interest.

 (29) According to a twenty-ninth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-eighth aspect described above,
 the calculation function providing unit provides, as the calculation functions for calculating n feature amounts with different ranges of consideration, n calculation functions using functions with mutually different degrees of monotonic increase or monotonic decrease.

 (30) According to a thirtieth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-ninth aspect described above,
 the calculation function providing unit prepares a calculation function including monotonically increasing or monotonically decreasing functions whose variables are the left-side position deviation, right-side position deviation, upper-side position deviation and lower-side position deviation divided by a spread coefficient σ, and provides n calculation functions by setting the spread coefficient σ to n different values.

 (31) According to a thirty-first aspect of the present invention, in the figure pattern shape estimation apparatus according to the thirtieth aspect described above,
 the calculation function providing unit sets the k-th spread coefficient σk, expressed using a parameter k in the range 1 ≤ k ≤ n, to σk = 2^(k-1).
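Aspects (28) through (31) can be combined into one sketch. Here tanh is used as one possible monotone function (the aspects only require monotonicity and a zero crossing at the relevant side coordinate), the contribution of one rectangle is the product of a horizontal and a vertical factor, and sweeping the spread coefficient σk = 2^(k-1) yields n feature amounts per evaluation point:

```python
import math

def rect_contribution(ex, ey, rect, sigma):
    # rect = (left, right, bottom, top) side coordinates of one rectangle.
    left, right, bottom, top = rect
    inc = lambda t: math.tanh(t / sigma)    # monotonically increasing, 0 at t = 0
    dec = lambda t: -math.tanh(t / sigma)   # monotonically decreasing, 0 at t = 0
    horizontal = inc(ex - left) + dec(ex - right)  # vanishes on left/right sides
    vertical = inc(ey - bottom) + dec(ey - top)    # vanishes on bottom/top sides
    return horizontal * vertical

def features(ex, ey, rects, n):
    # One feature per spread coefficient sigma_k = 2**(k-1), k = 1..n:
    # small sigma reacts only to nearby rectangle sides, large sigma to a
    # wide neighborhood around the evaluation point.
    return [sum(rect_contribution(ex, ey, r, 2.0 ** (k - 1)) for r in rects)
            for k in range(1, n + 1)]
```

Doubling σ at each step is what gives the set of features its multi-scale character, analogous to the image pyramid of the basic embodiment.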

 (32) According to a thirty-second aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-first aspects described above,
 the rectangle aggregate replacement unit recognizes the internal region and external region of each figure based on an original figure pattern that includes information on contour lines indicating the boundary between the inside and the outside of each figure and information on the dose amount for each figure in the lithography process, further recognizes the dose amount for each figure, and sets the corresponding dose amount on the rectangle corresponding to each figure, and
 the calculation function providing unit provides a calculation function that includes the dose amount set on each rectangle as a variable.

 (33) According to a thirty-third aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-first aspects described above,
 the rectangle aggregate replacement unit recognizes the unit line segments constituting the contour line of each figure based on the original figure pattern, and sets a minute width for each unit line segment, thereby replacing the figures included in the original figure pattern with aggregates of rectangles having minute widths.

 (34) According to a thirty-fourth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-third aspects described above,
 when calculating the feature amount for an evaluation point, the feature amount calculation unit defines a reference circle of predetermined radius centered on the evaluation point, and performs a calculation that considers only the positional relationships with rectangles belonging to the predetermined neighborhood defined by this reference circle.

 (35) According to a thirty-fifth aspect of the present invention, in the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-fourth aspects described above,
 the estimation calculation unit has a neural network whose input layer receives the feature amounts supplied by the feature amount input unit and whose output layer produces the estimated value of the process bias.

 (36) According to a thirty-sixth aspect of the present invention, in the figure pattern shape estimation apparatus according to the thirty-fifth aspect described above,
 the neural network included in the estimation calculation unit performs the process bias estimation using, as learning information, parameters obtained in a learning stage that uses dimension values obtained by actually measuring the dimensions of real figure patterns formed on a real substrate by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure.

 (37) According to a thirty-seventh aspect of the present invention, in the figure pattern shape estimation apparatus according to the thirty-fifth or thirty-sixth aspect described above,
 the estimation calculation unit obtains, as the estimated value of the process bias for an evaluation point located on the contour line of a given figure, an estimated value of the displacement of the evaluation point in the direction normal to the contour line.

 (38) According to a thirty-eighth aspect of the present invention, in order to configure a figure pattern shape correction apparatus that corrects the shape of an original figure pattern by using the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-seventh aspects described above,
 in addition to the evaluation point setting unit, the feature amount extraction unit and the bias estimation unit constituting the figure pattern shape estimation apparatus,
 a pattern correction unit is further provided that corrects the original figure pattern based on the estimated value of the process bias output from the bias estimation unit, and
 the corrected figure pattern obtained by the correction of the pattern correction unit is given to the figure pattern shape estimation apparatus as a new original figure pattern, so that correction of the figure pattern is executed repeatedly.

 (39) According to a thirty-ninth aspect of the present invention, the figure pattern shape estimation apparatus according to the twenty-fourth to thirty-seventh aspects described above, or the figure pattern shape correction apparatus according to the thirty-eighth aspect described above, is realized by installing a predetermined program in a computer.

 (40) According to a fortieth aspect of the present invention, in a figure pattern shape estimation method for estimating the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern,
 a computer performs:
 an original figure pattern input stage of inputting an original figure pattern including information on contour lines indicating the boundary between the inside and the outside of each figure;
 an evaluation point setting stage of setting evaluation points at predetermined positions on the contour lines;
 a feature amount extraction stage of extracting, for the original figure pattern, feature amounts indicating features around each evaluation point; and
 a process bias estimation stage of estimating, based on the feature amounts, a process bias indicating the amount of deviation between the position of the evaluation point on the original figure pattern and its position on the real figure pattern.
 In the feature amount extraction stage, the computer performs:
 a rectangle aggregate replacement stage of replacing the figures included in the original figure pattern with aggregates of rectangles; and
 a feature amount calculation stage of calculating, for each evaluation point, a feature amount based on its positional relationship to the rectangles located around it.
 In the process bias estimation stage, the computer performs an estimation operation stage of obtaining an estimated value corresponding to the feature amounts based on learning information obtained in a learning stage performed in advance, and outputting the obtained estimated value as the estimated value of the process bias for the evaluation point.

 According to the figure pattern shape estimation apparatus and shape estimation method of the present invention, evaluation points are set on the original figure pattern, and a process bias indicating the amount of deviation at each evaluation point is estimated. Moreover, in the basic embodiment, a reduction process that reduces the original image corresponding to the original figure pattern is performed, an image pyramid composed of a plurality of hierarchical images of different sizes is created, and the features at the evaluation point position in each hierarchical image are extracted as feature amounts. In the additional embodiment, the figures included in the original figure pattern are replaced with aggregates of rectangles, and a feature amount is extracted for each evaluation point based on its positional relationship to the rectangles. Accurate feature amounts can therefore be extracted from the original figure pattern and an accurate simulation can be performed, making it possible to accurately estimate the shape of the real figure pattern formed on the real substrate.

More specifically, in the basic embodiment of the present invention, feature amounts are extracted from an image pyramid composed of a plurality of hierarchical images of different sizes, so individual feature amounts representing the various characteristics from the immediate vicinity of each evaluation point out to distant regions can be obtained. This makes it possible to perform an accurate simulation that accounts for phenomena acting at different scales, such as the proximity effect and the etching loading phenomenon. The same applies to the additional embodiment of the present invention when a plurality of calculation functions are used to calculate the feature amounts.

Furthermore, if the figure pattern shape estimation apparatus according to the present invention is used, the original figure pattern can be corrected based on the estimation result, so it is also possible to provide a figure pattern shape correction apparatus capable of accurately correcting the shape of the original figure pattern.

FIG. 1 is a block diagram showing the configuration of a figure pattern shape correction apparatus 100 according to the basic embodiment of the present invention.
FIG. 2 is a plan view showing an example in which a difference in shape has arisen between an original figure pattern and an actual figure pattern.
FIG. 3 is a plan view showing, for the example of FIG. 2, how evaluation points are set and the process bias arising at each evaluation point.
FIG. 4 is a flowchart showing the design and manufacturing process of a product using the figure pattern shape correction apparatus 100 shown in FIG. 1.
FIG. 5 is a plan view illustrating the concept of grasping the surrounding characteristics of an evaluation point defined on the contour line of a figure pattern.
FIG. 6 is a diagram outlining the processing executed in the feature amount extraction unit 120 and the bias estimation unit 130 shown in FIG. 1.
FIG. 7 is a flowchart showing the processing procedure executed in the feature amount extraction unit 120 shown in FIG. 1.
FIG. 8 is a plan view showing a specific example of the original figure pattern 10 given to the figure pattern shape correction apparatus 100 shown in FIG. 1.
FIG. 9 is a plan view showing the state in which the original image creation unit 121 shown in FIG. 1 has superimposed the original figure pattern 10 on a mesh consisting of a two-dimensional array of pixels.
FIG. 10 is a diagram showing an area density map M1 created on the basis of the original figure pattern 10 shown in FIG. 9.
FIG. 11 is a diagram showing an edge length density map M2 created on the basis of the original figure pattern 10 shown in FIG. 9.
FIG. 12 is a plan view showing an original figure pattern 10 containing dose information.
FIG. 13 is a diagram showing a dose density map M3 created on the basis of the original figure pattern 10 with doses shown in FIG. 12.
FIG. 14 is a plan view showing the procedure for creating the k-th hierarchical image Pk by applying a filter process using the Gaussian filter GF33 to the k-th preparation image Qk.
FIG. 15 is a plan view showing the k-th hierarchical image Pk obtained by the filter process shown in FIG. 14.
FIG. 16 is a plan view showing an example of an image processing filter used in the filter process shown in FIG. 14.
FIG. 17 is a plan view showing another example of an image processing filter used in the filter process shown in FIG. 14.
FIG. 18 is a plan view showing the procedure for creating the (k+1)-th preparation image Q(k+1) by applying average pooling to the k-th hierarchical image Pk.
FIG. 19 is a plan view showing the procedure for creating the (k+1)-th preparation image Q(k+1) by applying max pooling to the k-th hierarchical image Pk.
FIG. 20 is a plan view showing the procedure by which the image pyramid creation unit 122 shown in FIG. 1 creates an image pyramid PP composed of n hierarchical images P1 to Pn.
FIG. 21 is a plan view showing the procedure by which the feature amount calculation unit 123 shown in FIG. 1 calculates the feature amounts for an evaluation point E from each hierarchical image.
FIG. 22 is a diagram showing the specific calculation method used in the feature amount calculation procedure shown in FIG. 21.
FIG. 23 is a plan view showing the procedure by which the image pyramid creation unit 122 shown in FIG. 1 creates an image pyramid PD composed of n difference images D1 to Dn.
FIG. 24 is a block diagram showing an embodiment in which a neural network is used as the estimation calculation unit 132 shown in FIG. 1.
FIG. 25 is a diagram showing the specific calculation process executed by the neural network shown in FIG. 24.
FIG. 26 is a diagram showing the arithmetic expressions for obtaining the values of the first hidden layer in the diagram shown in FIG. 25.
FIG. 27 is a diagram showing a specific example of the activation function f(ξ) shown in FIG. 26.
FIG. 28 is a diagram showing the arithmetic expressions for obtaining the values of the second through N-th hidden layers in the diagram shown in FIG. 25.
FIG. 29 is a diagram showing the arithmetic expression for obtaining the value y of the output layer in the diagram shown in FIG. 25.
FIG. 30 is a flowchart showing the procedure of the learning stage for obtaining the learning information L used by the neural network shown in FIG. 24.
FIG. 31 is a flowchart showing the detailed procedure of the estimation calculation unit learning in step S84 of the flowchart shown in FIG. 30.
FIG. 32 is a block diagram showing the configuration of a figure pattern shape correction apparatus 200 according to the additional embodiment of the present invention.
FIG. 33 is a plan view showing an example of the process by which the rectangular aggregate replacement unit 221 shown in FIG. 32 replaces the original figure pattern 10 with a rectangular aggregate 50.
FIG. 34 is a plan view showing another example of the process by which the rectangular aggregate replacement unit 221 shown in FIG. 32 replaces the original figure pattern 10 with a rectangular aggregate 50.
FIG. 35 is a diagram showing the principle of feature amount calculation by the feature amount calculation unit 222 shown in FIG. 32 and an example of a calculation function provided by the calculation function providing unit 223.
FIG. 36 is a diagram showing an example of a specific calculation function provided by the calculation function providing unit 223 shown in FIG. 32.
FIG. 37 is a diagram explaining the error function erf(ξ) used in the calculation function shown in FIG. 36.
FIG. 38 is a diagram showing the positional relationship between the X-axis monotonically increasing function +erf[(X−Li)/σk] used in the calculation function shown in FIG. 36 and a rectangle Fi.
FIG. 39 is a diagram showing the positional relationship between the X-axis monotonically decreasing function −erf[(X−Ri)/σk] used in the calculation function shown in FIG. 36 and a rectangle Fi.
FIG. 40 is a diagram showing the positional relationship between the horizontal direction function fhi(σk) used in the calculation function shown in FIG. 36 and a rectangle Fi.
FIG. 41 is a diagram showing the positional relationship between the horizontal direction function fhi(σk) and the vertical direction function fvi(σk) used in the calculation function shown in FIG. 36 and a rectangle Fi.
FIG. 42 is a diagram showing the role of the spread coefficient σk used in the calculation function shown in FIG. 36.
FIG. 43 is a diagram showing an example of a calculation function that takes the dose into account.
FIG. 44 is a plan view showing the process by which the rectangular aggregate replacement unit 221 shown in FIG. 32 divides a figure to replace it with a rectangular aggregate.
FIG. 45 is a plan view showing the process by which the rectangular aggregate replacement unit 221 shown in FIG. 32 replaces a figure with a rectangular aggregate by assigning a minute width to each unit line segment constituting the contour line of the figure.
FIG. 46 is a diagram showing an example of a calculation function applied to the rectangular aggregate shown in FIG. 45.
FIG. 47 is a plan view showing a method of making the feature amount calculation by the feature amount calculation unit 222 shown in FIG. 32 more efficient.
FIG. 48 is a diagram showing a first example (a Line & Space pattern) comparing the feature amount extraction processing times of the basic embodiment and the additional embodiment of the present invention.
FIG. 49 is a diagram showing a second example (an Array Hole pattern) comparing the feature amount extraction processing times of the basic embodiment and the additional embodiment of the present invention.
FIG. 50 is a diagram showing a third example (an ISO-Space pattern) comparing the feature amount extraction processing times of the basic embodiment and the additional embodiment of the present invention.

Hereinafter, the present invention will be described based on the illustrated embodiments.

<<< §1. Basic configuration of the figure pattern shape correction apparatus >>>
Here, the configuration of the figure pattern shape correction apparatus 100 according to the basic embodiment of the present invention is described with reference to the block diagram of FIG. 1. As illustrated, the figure pattern shape correction apparatus 100 comprises an evaluation point setting unit 110, a feature amount extraction unit 120, a bias estimation unit 130, and a pattern correction unit 140. The evaluation point setting unit 110, the feature amount extraction unit 120, and the bias estimation unit 130 together constitute a figure pattern shape estimation apparatus 100′ according to the present invention, and the figure pattern shape correction apparatus 100 is formed by adding the pattern correction unit 140 as a fourth unit to this shape estimation apparatus 100′.

<1.1 Figure pattern shape estimation apparatus>
First, the configuration and function of the figure pattern shape estimation apparatus 100′ will be described. This apparatus estimates the shape of the actual figure pattern 20 formed on an actual substrate S by simulating the lithography process that uses the original figure pattern 10. In the upper part of FIG. 1, a dash-dotted arrow points rightward from the original figure pattern 10, and the actual substrate S bearing the actual figure pattern 20 is drawn at its tip. This dash-dotted arrow represents the physical lithography process.

The illustrated original figure pattern 10 is figure pattern data created by computer-aided design work, and the dash-dotted arrow indicates that the actual substrate S is produced by carrying out a physical lithography process, including exposure, development, and etching, based on this data. An actual figure pattern 20 corresponding to the original figure pattern 10 is thereby formed on the actual substrate S. When such a lithography process is performed, however, a slight discrepancy arises between the original figure pattern 10 and the actual figure pattern 20. As noted above, this is because the various conditions of the exposure, development, and etching steps included in the lithography process make it difficult to form a figure on the actual substrate S that exactly matches the original figure pattern 10.

FIG. 2 is a plan view showing a specific example in which a difference in shape has arisen between the original figure pattern 10 and the actual figure pattern 20. In an actual semiconductor device, a very fine and complex figure pattern must be formed on the surface of an actual substrate S such as silicon, but for convenience of explanation, consider the case where a simple figure such as that shown in FIG. 2(a) is given as the original figure pattern 10. The illustrated original figure pattern 10 consists of a single rectangle; it is original figure data indicating, for example, that a material layer corresponding to the hatched rectangular interior region is to be formed on the actual substrate S.

In an actual lithography process, a resist layer is formed on the upper surface of the material layer on the actual substrate S, and the resist layer is patterned by exposure to light or an electron beam. For example, if the resist layer is exposed over the interior region (the hatched portion) of the original figure pattern 10 shown in FIG. 2(a) and then developed to remove the unexposed portion, the exposed portion remains as the residual resist layer (the hatched portion). If the material layer is then etched using this residual resist layer as a mask, in theory the interior region (the hatched portion) of the original figure pattern 10 can also be left in the material layer, and an actual figure pattern identical to the original figure pattern 10 shown in FIG. 2(a) can be obtained on the actual substrate S.

In practice, however, the actual figure pattern 20 obtained on the actual substrate S does not exactly match the original figure pattern 10. This is because the various conditions of the exposure, development, and etching steps included in the lithography process affect the shape of the actual figure pattern 20 finally obtained. For example, in the exposure step the resist layer is patterned with light or an electron beam, and it is known that, owing to the proximity effect (PE), the exposed region actually drawn on the resist layer becomes slightly wider than the original figure pattern 10. Under the influence of this proximity effect, the actual figure pattern 20 obtained on the actual substrate S spreads beyond the original figure pattern 10 (indicated by the broken line), as in the example shown in FIG. 2(b).

In addition, conditions such as the characteristics of the developer used in the development step, the characteristics of the etchant or plasma used in the etching step, and the durations of the development and etching steps also affect the shape of the actual figure pattern 20. Therefore, when actually manufacturing a semiconductor device or the like, after a desired original figure pattern 10 has been designed on a computer, the lithography process using this original figure pattern 10 is simulated on the computer, and a procedure is executed to estimate the shape of the actual figure pattern 20 that would be formed on the actual substrate S. The figure pattern shape estimation apparatus 100′ shown in FIG. 1 is an apparatus having this estimation function: it estimates, by simulation, the shape of the actual figure pattern 20 that would be formed on the actual substrate S, without actually performing the lithography process (exposure, development, and etching steps) that produces the actual substrate S.

In the figure pattern shape estimation apparatus 100′ shown in FIG. 1, evaluation points E are set on the original figure pattern 10 by the evaluation point setting unit 110. Specifically, the shape estimation apparatus 100′ is given, as the original figure pattern 10, figure data containing contour line information indicating the boundary between the inside and the outside of each figure, and the evaluation point setting unit 110 sets evaluation points at predetermined positions on these contour lines.

FIG. 3 is a plan view showing an example in which several evaluation points are set on the original figure pattern 10 shown in FIG. 2(a), together with the process bias (dimensional error) arising at each evaluation point. First, FIG. 3(a) is a plan view showing an example in which evaluation points E11, E12, and E13 are set on the contour line of the original figure pattern 10 (a rectangular figure) shown in FIG. 2(a). For convenience of explanation, FIG. 3 shows a simple example with only three evaluation points E11, E12, and E13; in practice, a larger number of evaluation points are set on each side of the rectangle. For example, if evaluation points are specified to be set successively at a predetermined pitch along the contour line, a large number of evaluation points can be set automatically.
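The automatic placement of evaluation points at a fixed pitch along a contour line can be sketched as follows. This is an illustrative sketch only; the function name, vertex representation, and pitch value are assumptions, not taken from the specification.

```python
# Hypothetical sketch: walk a closed polygon contour and emit an
# evaluation point every `pitch` units of arc length.

def set_evaluation_points(vertices, pitch):
    """vertices: list of (x, y) tuples of a closed contour, in order."""
    points = []
    carry = 0.0  # distance already walked past the last emitted point
    n = len(vertices)
    for i in range(n):
        (x0, y0) = vertices[i]
        (x1, y1) = vertices[(i + 1) % n]  # wrap around to close the contour
        seg_len = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        d = pitch - carry  # distance along this segment to the next point
        while d <= seg_len:
            t = d / seg_len
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += pitch
        carry = seg_len - (d - pitch)
    return points

# A 4x2 rectangle has perimeter 12, so a pitch of 1.0 yields 12 points.
pts = set_evaluation_points([(0, 0), (4, 0), (4, 2), (0, 2)], 1.0)
```

With a denser pitch the same routine produces the "larger number of evaluation points on each side of the rectangle" that the text describes.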

Now consider the case where the actual figure pattern 20 shown in FIG. 2(b) is obtained from the original figure pattern 10 shown in FIG. 2(a). FIG. 3(b) is a plan view comparing the contour (solid line) of the actual figure pattern 20 shown in FIG. 2(b) with the contour (broken line) of the original figure pattern 10 shown in FIG. 2(a); the contour line of the actual figure pattern 20, drawn as a solid line, extends outward by a dimension y relative to the contour line of the original figure pattern 10, drawn as a broken line. As a result, the horizontal width a of the original figure pattern 10 has expanded to the horizontal width b in the actual figure pattern 20, and the vertical width has likewise expanded slightly.

In FIG. 3(b), the evaluation points E21, E22, and E23 on the actual figure pattern 20 are defined as the points corresponding to the evaluation points E11, E12, and E13 on the original figure pattern 10. Here, the evaluation point E21 is defined as the point obtained by shifting the evaluation point E11 outward along the normal direction of the broken contour line by a predetermined dimension y11. Similarly, the evaluation point E22 is defined as the point obtained by shifting the evaluation point E12 outward along the normal direction by a predetermined dimension y12, and the evaluation point E23 as the point obtained by shifting the evaluation point E13 outward along the normal direction by a predetermined dimension y13.

As in the example shown in FIG. 3, the change in shape of the actual figure pattern 20 relative to the original figure pattern 10 is here expressed quantitatively by the deviation amount y along the contour-line normal arising at each evaluation point E. Since this deviation amount y is a bias caused by the lithography process, it is called the "process bias y". The process bias y is a signed value; in the examples below, the direction in which the exposed (drawn) part of the figure fattens is defined as positive and the direction in which it thins as negative. In the illustrated example, the interior of the figure enclosed by the contour line is the exposed (drawn) part, so a shift toward the outside of the contour line gives a positive value and a shift toward the inside gives a negative value. In the example of FIG. 3, every evaluation point E is shifted outward, so the process biases y11, y12, and y13 all take positive values.

Although only three evaluation points E11, E12, and E13 are shown in FIG. 3 for convenience of explanation, in practice many more evaluation points E are defined on the contour line of the original figure pattern 10. If the process bias y can be estimated for each evaluation point E, then shifting each evaluation point E by its process bias y along the normal direction of the contour line yields the position of that evaluation point after the lithography process, and hence the contour position of the actual figure pattern 20 can be estimated.
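The displacement of an evaluation point by its process bias along the contour normal reduces to a simple vector operation. The following is a minimal sketch under the sign convention described above (positive bias fattens the pattern); the function name and the unit-normal representation are illustrative assumptions.

```python
def shift_along_normal(point, normal, bias):
    """Displace an evaluation point by `bias` along the unit outward
    normal of the contour; a positive bias moves the edge outward
    (the exposed part fattens), a negative bias moves it inward."""
    (px, py) = point
    (nx, ny) = normal
    return (px + bias * nx, py + bias * ny)

# The bottom edge of a figure has outward normal (0, -1); a process
# bias of +0.5 pushes that edge 0.5 units further out (downward here).
shifted = shift_along_normal((2.0, 0.0), (0.0, -1.0), 0.5)
```

Applying this to every evaluation point with its estimated bias traces out the estimated contour of the actual figure pattern 20.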

The value of the process bias y differs from one evaluation point E to another. In the example shown in FIG. 3(b), for instance, the process biases y11, y12, and y13 each take individual values. This is because the evaluation points E11, E12, and E13 occupy different relative positions within the original figure pattern 10, so the influence of the lithography process differs among them and so does the resulting deviation. Therefore, to improve the accuracy with which the shape of the actual figure pattern 20 is estimated by simulation from the original figure pattern 10, it is important to appropriately predict the influence of the lithography process at each individual evaluation point and to obtain an appropriate process bias y.

Accordingly, in the figure pattern shape estimation apparatus 100′ shown in FIG. 1, evaluation points are first set on the original figure pattern 10 by the evaluation point setting unit 110. Specifically, each evaluation point E may be set at a predetermined position on the contour line based on the contour line information indicating the boundary between the inside and the outside of the figures included in the original figure pattern 10; for example, evaluation points can be set successively at predetermined intervals along the contour line.

Next, the feature amount extraction unit 120 extracts, from the original figure pattern 10, feature amounts representing the characteristics around each evaluation point E. The feature amount x for a given evaluation point E is thus a value representing the characteristics of that point's surroundings. To perform this extraction, the feature amount extraction unit 120 comprises an original image creation unit 121, an image pyramid creation unit 122, and a feature amount calculation unit 123, as shown in FIG. 1.

The original image creation unit 121 creates, from the given original figure pattern 10, an original image consisting of an array of pixels each having a predetermined pixel value. For example, when an original figure pattern 10 such as that shown in FIG. 2(a) is given, assigning a pixel value of 1 to the pixels inside the rectangle (the hatched pixels in the figure) and a pixel value of 0 to the pixels outside it produces an original image in the form of a binary image.
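For a single rectangular figure, this binary rasterization step can be sketched as follows. The function name, image size, and rectangle coordinates are illustrative assumptions; the specification only states that interior pixels receive value 1 and exterior pixels value 0.

```python
import numpy as np

def make_original_image(width, height, rect):
    """Rasterize a rectangle (x0, y0, x1, y1), given in pixel
    coordinates, onto a mesh: pixel value 1 inside, 0 outside."""
    img = np.zeros((height, width), dtype=np.uint8)
    x0, y0, x1, y1 = rect
    img[y0:y1, x0:x1] = 1  # interior pixels get value 1
    return img

# An 8x8 mesh with a 4x3-pixel rectangle placed at (2, 2).
img = make_original_image(8, 8, (2, 2, 6, 5))
```

A general polygon would need a polygon scan-fill in place of the slice assignment, but the resulting binary image plays the same role as the input to the image pyramid creation unit 122.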

The image pyramid creation unit 122 performs an image pyramid creation process, including a reduction process that shrinks this original image to produce reduced images, and thereby creates an image pyramid consisting of n hierarchical images. Each of the n hierarchical images constituting the layers of the image pyramid is obtained by applying predetermined image processing to the original image created by the original image creation unit 121, and each has a different size. Such a set of hierarchical images is called an "image pyramid" because stacking the hierarchical images in order from largest to smallest forms a layered structure that looks like a pyramid.
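A minimal sketch of such a pyramid: repeatedly halve the image by 2×2 average pooling, one of the reduction operations discussed in connection with FIG. 18 (max pooling, FIG. 19, would take the block maximum instead). The layer count n and the function names are assumptions; the filter processing of FIG. 14 is omitted here for brevity.

```python
import numpy as np

def average_pool_2x2(img):
    """Halve both dimensions by averaging each 2x2 block of pixels."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(original, n):
    """Return n layers: layers[0] is full size, layers[-1] the smallest."""
    layers = [original.astype(float)]
    for _ in range(n - 1):
        layers.append(average_pool_2x2(layers[-1]))
    return layers

# A 16x16 original yields layers of size 16x16, 8x8, and 4x4 for n = 3.
pyramid = build_pyramid(np.ones((16, 16)), 3)
```

Each successive layer summarizes a progressively wider region of the original pattern per pixel, which is what lets later feature extraction capture both near and far context.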

The feature amount calculation unit 123 calculates, for each of the n hierarchical images constituting the image pyramid, a feature amount based on the pixel values of the pixels near the evaluation point E. Specifically, the feature amount x1 is calculated from the pixel values near the evaluation point E in the first hierarchical image, the feature amount x2 from those in the second hierarchical image, and likewise the feature amount xn from those in the n-th hierarchical image, so that n feature amounts x1 to xn are extracted for a single evaluation point E. In the example shown in FIG. 3(a), for instance, n feature amounts x1(E11) to xn(E11) are extracted for the evaluation point E11, n feature amounts x1(E12) to xn(E12) for the evaluation point E12, and n feature amounts x1(E13) to xn(E13) for the evaluation point E13.
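The per-layer feature extraction can be sketched as follows: the evaluation point's coordinates are scaled to each layer's resolution and a value is read out there, giving x1 to xn for one point. Reading a single pixel value directly is a simplifying assumption for illustration; the actual neighborhood computation is the subject of FIGS. 21 and 22.

```python
import numpy as np

def extract_features(pyramid, point):
    """Return one feature value per pyramid layer for an evaluation
    point given in full-resolution image coordinates."""
    (px, py) = point
    full = pyramid[0].shape[0]
    features = []
    for layer in pyramid:
        scale = layer.shape[0] / full  # e.g. 1, 1/2, 1/4, ...
        ix = min(int(px * scale), layer.shape[1] - 1)
        iy = min(int(py * scale), layer.shape[0] - 1)
        features.append(float(layer[iy, ix]))
    return features  # [x1, ..., xn]

# Three toy layers with constant values make the per-layer lookup visible.
layers = [np.full((8, 8), 1.0), np.full((4, 4), 2.0), np.full((2, 2), 3.0)]
feats = extract_features(layers, (5, 3))
```

Because the coarser layers aggregate wider regions, the vector (x1, ..., xn) encodes pattern context at multiple scales around the evaluation point.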

 Meanwhile, based on the feature amount x extracted by the feature amount extraction unit 120, the bias estimation unit 130 estimates the process bias y, which indicates the amount of deviation between the position of the evaluation point E on the original figure pattern 10 and the corresponding position on the actual figure pattern 20. To perform this estimation, the bias estimation unit 130 has a feature amount input unit 131 and an estimation calculation unit 132. The feature amount input unit 131 receives the feature amounts x1 to xn calculated for the evaluation point E by the feature amount calculation unit 123, and the estimation calculation unit 132, based on learning information L obtained in a previously performed learning stage, derives an estimate corresponding to the feature amounts x1 to xn and outputs it as the process bias estimate y for the evaluation point E.

 More specifically, for each evaluation point E located on the contour line of a figure constituting the original figure pattern 10, the estimation calculation unit 132 outputs the process bias estimate y as the amount of deviation in the direction of the normal to that contour line. For example, for the original figure pattern 10 shown in Fig. 3(a), the estimation calculation unit 132 outputs, as shown in Fig. 3(b), the process bias y11 for the evaluation point E11, y12 for the evaluation point E12, and y13 for the evaluation point E13, each as an estimated value. Once the process bias estimate y is obtained for each evaluation point E, the new position of each evaluation point E (the position shifted by the process bias y in the normal direction of the contour line) can be determined, so the shape of the actual figure pattern 20 can be estimated as shown in Fig. 3(b).
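Determining the new position of an evaluation point from its bias estimate amounts to a displacement along the contour normal. A minimal sketch, assuming the normal is already available as a unit vector (all names and values below are illustrative, not from the patent):

```python
# Illustrative: move an evaluation point along its outward contour normal
# by the estimated process bias y to get its estimated actual position.
def shift_point(point, normal, bias):
    """Return `point` displaced by `bias` along the unit normal vector."""
    (px, py), (nx, ny) = point, normal
    return (px + bias * nx, py + bias * ny)

# Evaluation point on the right side of a rectangle: outward normal (1, 0).
E11 = (10.0, 5.0)
y11 = 0.5  # estimated process bias at this point (invented value)
E11_actual = shift_point(E11, (1.0, 0.0), y11)
```

Repeating this for every evaluation point and re-connecting the shifted points traces out the estimated actual pattern.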

 The above is the basic configuration and basic operation of the figure pattern shape estimation apparatus 100′. The specific operation of the feature amount extraction unit 120 is detailed in §2, and that of the bias estimation unit 130 in §3.

 <1.2 Figure Pattern Shape Correction Device>
 Next, the configuration and function of the figure pattern shape correction apparatus 100 will be described. The figure pattern shape correction apparatus 100 corrects the shape of the original figure pattern 10 using the figure pattern shape estimation apparatus 100′ described above. As shown in Fig. 1, in addition to the evaluation point setting unit 110, the feature amount extraction unit 120, and the bias estimation unit 130, which are the constituent elements of the figure pattern shape estimation apparatus 100′, it further comprises a pattern correction unit 140. The pattern correction unit 140 corrects the original figure pattern 10 based on the process bias estimate y output from the bias estimation unit 130, and the corrected figure pattern 15 obtained by this correction is the final output of the figure pattern shape correction apparatus 100.

 The correction by the pattern correction unit 140 can be performed by moving each evaluation point E on the original figure pattern 10 in the direction that cancels the process bias y, and redrawing the boundary line of each figure through the moved positions of the evaluation points E. For example, consider the case where the actual figure pattern 20 shown in Fig. 3(b) is estimated for the original figure pattern 10 shown in Fig. 3(a). Here, the width a of the rectangle in the original figure pattern 10 has increased to the width b on the actual figure pattern 20. If (b-a)/2 = y, then moving both the left and right sides of the rectangle inward by y yields a rectangle with the same width a as the original figure pattern 10. Therefore, basically, if a correction is performed that reduces the width a of the original figure pattern 10 by 2y, and the lithography process is executed based on the corrected figure pattern, a rectangle with the originally designed width a can be obtained on the actual substrate S as the actual figure pattern 20.
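The width arithmetic above can be checked directly; the numerical values below are invented purely for illustration:

```python
# Worked example of the per-side correction: the design width a grows to b
# on the real substrate, so each side carries a bias of y = (b - a) / 2,
# and pre-compensating shrinks the design width by 2 * y.
a = 100.0   # design width of the rectangle (arbitrary units)
b = 108.0   # width estimated on the real substrate
y = (b - a) / 2.0          # per-side process bias
a_corrected = a - 2.0 * y  # pre-compensated design width

# If the same +y bias per side then occurs during lithography,
# the resulting width returns to the designed value a.
b_after = a_corrected + 2.0 * y
```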

 In the case of the original figure pattern 10 shown in Fig. 3(a), it suffices to move the evaluation point E11 to the left (toward the inside of the rectangle) by the process bias y11, the evaluation point E12 to the left (toward the inside of the rectangle) by y12, and the evaluation point E13 upward (toward the inside of the rectangle) by y13. Of course, in practice a much larger number of evaluation points is defined, so all of these evaluation points are moved toward the inside of the rectangle by a distance corresponding to the process bias, and a new contour line connecting the corrected evaluation points is defined; the corrected figure pattern 15 containing the figure bounded by that contour line is thereby obtained. Since such correction processing itself is a known technique, a detailed description is omitted here.

 In practice, however, even if the lithography process is executed using the corrected figure pattern 15 obtained in this way to form an actual figure pattern 25 (not shown) on the actual substrate S, the resulting actual figure pattern 25 does not exactly match the originally designed original figure pattern 10 (for example, the width of the rectangle formed on the actual substrate S will not be exactly a). This is because the figures contained in the original figure pattern 10 and those in the corrected figure pattern 15 differ in size and shape, so effects such as the proximity effect differ when the lithography process is executed.

 In other words, even if a simulation for a particular evaluation point E reveals that a process bias y will occur, simply moving the position of that evaluation point E by the process bias y in the opposite direction does not produce an accurate correction.

 Of course, compared with the actual figure pattern 20 obtained by executing the lithography process with the original figure pattern 10, the actual figure pattern 25 obtained by executing the lithography process with the corrected figure pattern 15 is closer to the original figure pattern 10. Executing the actual lithography process with the corrected figure pattern 15 obtained by the pattern correction unit 140 therefore yields a more accurate figure pattern on the actual substrate S than executing it with the original figure pattern 10 as-is. That is, correction by the pattern correction unit 140 certainly reduces the error.

 Therefore, in practice, as shown in Fig. 1, the process of feeding the corrected figure pattern 15 output from the pattern correction unit 140 back into the figure pattern shape correction apparatus 100 is repeated. That is, the corrected figure pattern 15 is given to the figure pattern shape estimation apparatus 100′ as a new original figure pattern, and the processing described in §1.1 is executed on this new original figure pattern (the corrected figure pattern 15). Specifically, the evaluation point setting unit 110 sets evaluation points E on the corrected figure pattern 15, the feature amount extraction unit 120 extracts a feature amount for each evaluation point E, and the bias estimation unit 130 calculates a process bias estimate y for each evaluation point E. Using the calculated process bias estimates y, the pattern correction unit 140 then performs the correction process again.

 The figure pattern shape correction apparatus 100 shown in Fig. 1 thus has the function of repeatedly correcting the figure pattern. That is, a first corrected figure pattern 15 is obtained from the original figure pattern 10, a second corrected figure pattern from the first, a third from the second, and so on. Each time the correction process is performed, the shape error between the original figure pattern and the figure pattern obtained by simulation shrinks.

 Therefore, if, for example, the correction is deemed complete when the shape error between the original figure pattern and the figure pattern obtained by simulation converges within a predetermined tolerance, and the actual lithography process is executed using the corrected figure pattern obtained last, an actual figure pattern close to the originally designed original figure pattern 10 can be formed on the actual substrate S. Thus, using the figure pattern shape correction apparatus 100 according to the present invention makes it possible to correct the shape of the original figure pattern accurately.
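The repeated correction can be sketched as the following loop. This is a deliberately simplified stand-in: in the patent, the bias comes from the learned simulation, not from a closed-form formula, and a whole pattern is corrected rather than a single width; the toy `estimate_bias` and all values below are invented so that the convergence behavior is easy to follow.

```python
# Toy stand-in for the bias-estimation unit: the bias shrinks as the
# simulated pattern approaches the design target.
def estimate_bias(width):
    return 0.5 * (width - TARGET)

TARGET = 100.0      # designed width
width = 108.0       # initial (uncorrected) design width
tolerance = 0.01    # convergence criterion for "correction complete"

for _ in range(50):             # bounded repetition of steps S2-S5
    bias = estimate_bias(width)
    if abs(bias) <= tolerance:
        break                   # "correction complete" (step S6)
    width -= bias               # correction step cancels the estimated bias
```

Each pass shrinks the residual error, mirroring the statement that the shape error decreases with every correction cycle.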

 Note that the evaluation point setting unit 110, the feature amount extraction unit 120, the bias estimation unit 130, and the pattern correction unit 140 shown in Fig. 1 are each configured by installing a predetermined program on a computer. Therefore, the figure pattern shape estimation apparatus 100′ and the figure pattern shape correction apparatus 100 according to the present invention are in practice realized by installing a dedicated program on a general-purpose computer.

 Fig. 4 is a flowchart showing the design and manufacturing process of a product using the figure pattern shape correction apparatus 100 shown in Fig. 1. First, in step S1, the product design stage is performed. This stage is the process of creating a figure pattern for constructing a semiconductor device or the like, and the original figure pattern 10 shown in Fig. 1 is created at this stage. Since apparatuses for designing products such as semiconductor devices and creating figure patterns are already known, a detailed description is omitted here.

 The evaluation point setting stage of the next step, S2, is executed in the evaluation point setting unit 110 shown in Fig. 1. For example, when the original figure pattern 10 shown in Fig. 2(a) is given, evaluation points such as E11, E12, and E13 shown in Fig. 3(a) are set. The feature amount extraction stage of the following step, S3, is executed in the feature amount extraction unit 120 shown in Fig. 1; as described above, n feature amounts x1 to xn are extracted for each evaluation point E (the detailed extraction procedure is described in §2). The process bias estimation stage of step S4 is executed in the bias estimation unit 130 shown in Fig. 1; as described above, the process bias estimate y for each evaluation point E is obtained using the n feature amounts x1 to xn (the detailed calculation procedure is described in §3).

 The pattern shape correction stage of step S5 is executed in the pattern correction unit 140 shown in Fig. 1; as described above, the corrected figure pattern 15 is obtained by correcting the original figure pattern 10 using the process bias estimate y obtained for each evaluation point E. Since a single correction is not sufficient, the process returns to step S2 and repeats until "correction complete" is determined in step S6. That is, by treating the corrected figure pattern 15 obtained in step S5 as a new original figure pattern 10, the processing of steps S2 to S5 is executed repeatedly.

 When "correction complete" is determined in step S6 as a result of this repetition, the process proceeds to step S7, where the lithography process is executed. The determination of "correction complete" can be made, for example, when a specific condition is satisfied, such as "for a certain proportion of the evaluation points E, the error between the position on the original figure pattern and the position on the figure pattern obtained by simulation is at or below a predetermined reference value." In the lithography process of step S7, the actual steps of exposure, development, and etching are performed based on the finally obtained corrected figure pattern, and the actual substrate S is manufactured.

 In the flowchart shown in Fig. 4, steps S1 to S6 are processes executed on a computer, and step S7 is a process executed on the actual substrate S.

 <1.3 Basic Concept of Feature Extraction in the Present Invention>
 So far, the basic configuration and basic operation of the figure pattern shape estimation apparatus 100′ and the figure pattern shape correction apparatus 100 shown in Fig. 1 have been described. In each of these apparatuses, the component most characteristic of the present invention is the feature amount extraction unit 120. The present invention achieves the effect of extracting accurate feature amounts from the original figure pattern 10, performing an accurate simulation, and precisely estimating the shape of the actual figure pattern 20 formed on the actual substrate S, and the component that plays the most important role in obtaining this effect is the feature amount extraction unit 120. In other words, an important feature of the present invention is that feature amounts are extracted from the original figure pattern 10 in a highly distinctive way. The basic concept of feature extraction in the present invention is therefore described here.

 Fig. 5 is a plan view showing the concept of grasping the surrounding features of each of the evaluation points E11, E12, and E13 defined on the contour line of a rectangular figure pattern 10. Fig. 5(a) shows the state where the features inside the reference circle C1 and inside the reference circle C2 are extracted for the evaluation point E11 set at the center of the right side of the rectangle. The reference circles C1 and C2 are both centered on the evaluation point E11, but C2 is larger than C1. Similarly, Fig. 5(b) shows the two reference circles C1 and C2 for the evaluation point E12 set on the lower part of the right side of the rectangle, and Fig. 5(c) shows them for the evaluation point E13 set at the center of the lower side of the rectangle.

 Comparing the features inside the reference circle C1 for each evaluation point: for the evaluation points E11 and E12, the left half lies inside the figure (hatched region) and the right half outside it (blank region), so there is no difference in the features inside C1. For the evaluation point E13, the upper half of C1 lies inside the figure and the lower half outside it; this is the C1 feature of E11 and E12 rotated by 90°, and there is no difference in the occupancy ratio of the hatched region. Comparing the features inside the reference circle C2 for the evaluation points E11, E12, and E13, however, the distributions of the hatched region differ, showing that the three points have mutually different features.

 Thus, even when grasping the features near each evaluation point E11, E12, E13 on the figure pattern 10, the extracted features differ depending on whether a narrow neighborhood such as the reference circle C1 or a somewhat wider neighborhood such as the reference circle C2 is used. Therefore, when the features of the neighborhood of a single evaluation point E are extracted quantitatively as some feature amount x, varying the range of the neighborhood stepwise allows feature amounts to be extracted in a greater variety of ways.
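As one concrete (invented) way to quantify this range dependence, the occupancy ratio of figure-interior pixels can be computed within circles of different radii around an evaluation point, mirroring the reference circles C1 and C2 of Fig. 5. The function name and values below are illustrative only.

```python
# Illustrative feature: fraction of "inside the figure" pixels (value 1)
# within a given radius of an evaluation point on a binary image.
def occupancy(img, center, radius):
    """Fraction of pixels within `radius` of `center` whose value is 1."""
    cy, cx = center
    inside = total = 0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2:
                total += 1
                inside += v
    return inside / total

# 5x5 image: columns 0-2 are inside the figure, columns 3-4 outside.
img = [[1, 1, 1, 0, 0] for _ in range(5)]
E = (2, 2)                       # evaluation point (row, col) on the contour
x_small = occupancy(img, E, 1)   # narrow neighborhood, like C1
x_large = occupancy(img, E, 2)   # wider neighborhood, like C2
```

The two ratios differ, illustrating how widening the reference circle changes the extracted feature for the very same point.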

 As described above, when the lithography process is executed based on the original figure pattern 10, the actual figure pattern 20 obtained on the substrate exhibits a dimensional error (process bias) relative to the original figure pattern 10 due to the proximity effect in the exposure step. In particular, the proximity effect in electron beam exposure includes diverse effects, such as those caused by forward scattering, whose range of influence is narrow, and those caused by backscattering, whose range of influence is wide. Forward scattering, for example, is described as the phenomenon in which, when an electron beam irradiates a layer to be patterned, such as a resist layer, the low-mass electrons spread while being scattered by molecules within the resist; backscattering is described as the phenomenon in which electrons scattered back from near the surface of the metal substrate or the like beneath the resist layer diffuse within the resist layer. A process bias also arises in the etching step, and its magnitude varies with the loading phenomenon during etching. Like the proximity effect in the exposure step described above, this loading phenomenon likewise arises from the combination of diverse components, some with a narrow range of influence and others with a wide range.

 Ultimately, the value of the process bias y at a given evaluation point E is determined by the fusion of phenomena occurring at various scales. Extracting diverse feature amounts, from those concerning a narrow surrounding range to those concerning a wide range, is therefore important for accurately simulating the various phenomena in each step that affect the process bias, each with its own range of influence. Accordingly, in the present invention, feature amounts are extracted for various regions around each evaluation point E, from its immediate vicinity out to distant surroundings. To extract such multiple feature amounts for one evaluation point E, the present invention adopts the method of creating an "image pyramid" consisting of multiple layer images of different sizes. This image pyramid contains information in which various phenomena with different ranges of influence are multiplexed.

 Fig. 6 shows an overview of the processing executed in the feature amount extraction unit 120 and the bias estimation unit 130 shown in Fig. 1. The original image Q1 shown at the top of the figure is the image created by the original image creation unit 121 shown in Fig. 1, corresponding to the given original figure pattern 10. As described above, the original figure pattern 10 is data created by a semiconductor device design apparatus or the like, representing figures such as those shown in Fig. 2(a); it is usually given as vector data describing the contour lines of the figures (the coordinate values of the vertices and the connection relationships between them).

 Based on the data of the given original figure pattern 10, the original image creation unit 121 executes the process of creating the original image Q1, an aggregate of pixels each having a predetermined pixel value. For example, by placing pixels with pixel value 1 inside the figures constituting the original figure pattern 10 and pixels with pixel value 0 outside them, the original image Q1 consisting of a large number of pixels U can be created. The original image Q1 shown in Fig. 6 is such an aggregate of pixels U and, as indicated by the broken line, carries the rectangular figure contained in the original figure pattern 10 as image information. An evaluation point E has been set on the contour line of this figure by the evaluation point setting unit 110. Although only one evaluation point E is drawn in Fig. 6 for convenience, in practice many evaluation points are set along the contour line of the figure.
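A minimal sketch of such a rasterization, assuming a single axis-aligned rectangle in pixel coordinates stands in for the vector contour data (real patterns are arbitrary polygons, and the function name is invented):

```python
# Illustrative rasterization matching the text: pixel value 1 inside the
# figure, 0 outside.
def rasterize_rect(width, height, x0, y0, x1, y1):
    """Binary image with 1 inside the half-open rectangle [x0,x1) x [y0,y1)."""
    return [[1 if x0 <= x < x1 and y0 <= y < y1 else 0
             for x in range(width)]
            for y in range(height)]

# An 8x6 original image containing a 4x4 rectangle.
Q1 = rasterize_rect(8, 6, 2, 1, 6, 5)
```

A general implementation would test each pixel center against the polygon contours instead of a rectangle bound, but the resulting 0/1 image is the same kind of object.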

 The image pyramid creation unit 122 shown in Fig. 1 creates the image pyramid PP based on this original image Q1. The image pyramid PP consists of multiple layer images of different sizes; the figure shows an image pyramid PP composed of n layer images P1 to Pn (n ≥ 2). The specific procedure for creating the layer images P1 to Pn from the original image Q1 is explained in §2, but basically they are created by a reduction process that decreases the number of pixels. In the illustrated example, the layer image P1 has the same size as the original image Q1 (the same numbers of pixels vertically and horizontally), the layer image P2 is a reduced, smaller image, and the layer image P3 is an even smaller image obtained by further reducing P2.

 In this way, in the present invention, layer images P1 to Pn of gradually decreasing size are created based on the original image Q1. Since stacking these differently sized layer images one above another takes the form of a pyramid, as illustrated, this application calls the set of layer images P1 to Pn the image pyramid PP. Because the layer images P1 to Pn are all created from the original image Q1, they all carry the information of the original figure pattern 10, and the position of the evaluation point E can be defined on each of them. In the figure, a rectangular figure is drawn on each of the layer images P1 to Pn, with the evaluation point E placed on its contour line.

 図1に示す特徴量算出部123は、画像ピラミッドを構成する各階層画像について、評価点の近傍の画素の画素値に基づいて特徴量を算出する処理を行う。図6には、画像ピラミッドPPを構成するn枚の階層画像P1~Pnから、それぞれ評価点Eの特徴量x1~xnが抽出された状態が示されている。図示されている特徴量x1~xnは、いずれも同じ評価点Eの周囲の特徴を示す値であるが、特徴量x1は、第1の階層画像P1上の評価点Eの近傍画素の画素値に基づいて算出された値であり、特徴量x2は、第2の階層画像P2上の評価点Eの近傍画素の画素値に基づいて算出された値であり、特徴量x3は、第3の階層画像P3上の評価点Eの近傍画素の画素値に基づいて算出された値である。各特徴量x1~xnの具体的な算出手順は§2で説明する。 1 performs a process of calculating a feature value for each hierarchical image constituting the image pyramid based on pixel values of pixels in the vicinity of the evaluation point. FIG. 6 shows a state in which feature amounts x1 to xn of the evaluation points E are extracted from n layer images P1 to Pn constituting the image pyramid PP, respectively. The illustrated feature amounts x1 to xn are values indicating features around the same evaluation point E, but the feature amount x1 is a pixel value of a pixel near the evaluation point E on the first hierarchical image P1. The feature amount x2 is a value calculated based on the pixel values of the neighboring pixels of the evaluation point E on the second hierarchical image P2, and the feature amount x3 is the third value. This is a value calculated based on the pixel values of the neighboring pixels of the evaluation point E on the hierarchical image P3. A specific calculation procedure for each of the feature amounts x1 to xn will be described in Section 2.

 Although only one evaluation point E is drawn in Fig. 6 for convenience, in practice n feature amounts x1 to xn are calculated for each of many evaluation points. Each individual feature amount x1 to xn is a predetermined scalar value, and n feature amounts x1 to xn are obtained for each evaluation point E. If these n feature amounts x1 to xn are regarded as an n-dimensional vector, the feature amount extraction unit 120 performs, for each evaluation point E, the process of extracting a feature amount in the form of an n-dimensional vector.
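Assembling this n-dimensional vector can be sketched as follows, assuming each layer halves the previous one and reading a single pixel per layer; the patent's actual per-layer calculation (§2) uses a neighborhood of the point, and all names here are invented.

```python
# Illustrative: build the n-dimensional feature vector for one evaluation
# point by rescaling its coordinates to each pyramid layer and reading the
# local pixel value there.
def feature_vector(pyramid, point):
    """Return [x1..xn]: one value per layer at the rescaled point position."""
    features = []
    for level, img in enumerate(pyramid):
        scale = 2 ** level          # each layer halves the previous one
        r, c = point[0] // scale, point[1] // scale
        features.append(img[r][c])
    return features

# Two-layer toy pyramid: a 4x4 layer and its 2x2 half-resolution version.
P1 = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 0, 1, 1]]
P2 = [[0, 1],
      [0, 1]]
x = feature_vector([P1, P2], (1, 2))   # evaluation point at row 1, col 2
```

Because deeper layers average ever wider areas of the original, the components of this vector encode context at increasing ranges around the same point.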

 The feature quantity (an n-dimensional vector) extracted in this way for each evaluation point is received by the feature quantity input unit 131 in the bias estimation unit 130 and passed to the estimation operation unit 132. In the embodiment shown here, the estimation operation unit 132 is implemented as a neural network and, based on learning information L obtained in a previously executed learning stage, performs an operation that calculates an estimated process bias value y (a scalar) for the evaluation point E from the feature quantities x1 to xn given as an n-dimensional vector. The specific operation procedure is described in §3.
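As a hypothetical illustration only, the kind of computation the estimation operation unit 132 performs can be sketched as a minimal one-hidden-layer forward pass; the weight and bias arrays below stand in for the learning information L, and the actual network structure and operation procedure are those described in §3.

```python
import math

def estimate_bias(x, w_hidden, b_hidden, w_out, b_out):
    """Toy forward pass: n-dim feature vector x -> scalar bias estimate y.
    w_hidden is a list of weight rows (one per hidden unit); the weights
    and biases play the role of the learning information L."""
    hidden = []
    for row, b in zip(w_hidden, b_hidden):
        s = sum(wi * xi for wi, xi in zip(row, x)) + b
        hidden.append(math.tanh(s))            # illustrative activation
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
```

In the learning stage, these weights would be fitted so that y reproduces the process bias measured on actual patterns; at estimation time the same weights are simply reused for new feature vectors.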

 Thus, according to the present invention, accurate feature quantities can be extracted for any original figure pattern 10 even when no physical or experimental simulation model has been constructed for the actual lithography process. Moreover, since there is no need to construct a physical or experimental simulation model in the first place, there is also no need to take various settings of material properties or process conditions into account in the learning stage of the neural network described later in §3.2.

 The foregoing has described an example in which the figure pattern shape estimation apparatus 100′ and the shape correction apparatus 100 according to the present invention are used in a lithography process for manufacturing semiconductor devices. However, the present invention is not limited to use in semiconductor device manufacturing; it can be used in the manufacturing processes of various products that include a lithography process, for example processes involving NIL (Nanoimprint Lithography) or EUV (Extreme Ultraviolet Lithography). In particular, in NIL, the original figure pattern may be corrected so that the actual figure pattern on a master template, produced from the original figure pattern through exposure lithography, matches the original figure pattern. Alternatively, the original figure pattern may be corrected so that the actual figure pattern on a replica template, produced from the master template through imprinting, matches the original figure pattern.

 In addition, the present invention is applicable to all product fields that include a lithography process, such as MEMS (Micro Electro Mechanical Systems), LSPM (Large-Size Photomasks), lead frames, metal masks, metal mesh sensors, and color filters.

 <<< §2. Details of the Feature Quantity Extraction Unit >>>
 Next, the detailed processing operations of the feature quantity extraction unit 120 will be described. As shown in FIG. 1, the feature quantity extraction unit 120 comprises an original image creation unit 121, an image pyramid creation unit 122, and a feature quantity calculation unit 123, and has the function of executing the feature quantity extraction process of step S3 in the flowchart of FIG. 4. In practice, this feature quantity extraction process is executed by the procedures shown in FIG. 7. Here, steps S31 and S32 are procedures executed by the original image creation unit 121, steps S33 to S36 are procedures executed by the image pyramid creation unit 122, and step S37 is a procedure executed by the feature quantity calculation unit 123. The procedures executed by each unit are described below in detail with specific examples.

 <2.1 Processing Procedure of the Original Image Creation Unit 121>
 The original image creation unit 121 has the function of creating, based on the given original figure pattern 10, an original image composed of a collection of pixels each having a predetermined pixel value, and executes steps S31 and S32 in the flowchart of FIG. 7. First, in step S31, a process of inputting the original figure pattern 10 is performed, and in the subsequent step S32, an original image creation process is performed.

 In §1, for convenience of explanation, a simple pattern consisting of only one rectangular figure, as shown in FIG. 2(a), was presented as an example of the original figure pattern 10 input in step S31. Here, for a more detailed explanation, consider the case where an original figure pattern 10 including five figures F1 to F5 (rectangles) as shown in FIG. 8 is given. As described above, the original figure pattern 10 given to the figure pattern shape correction apparatus 100 is usually vector data indicating the contour lines of the figures. Accordingly, although the interiors of the figures F1 to F5 are shown hatched in FIG. 8, in practice this original figure pattern 10 is given as vector data indicating the coordinate values of the vertices of the five rectangles F1 to F5 and the connection relationships among these vertices.

 The original image creation process of step S32 is thus a process of creating the data (raster data) of an original image Q1 composed of a collection of pixels, based on the original figure pattern 10 given as such vector data. Specifically, the original image creation unit 121 defines a mesh consisting of a two-dimensional array of pixels U, superimposes the original figure pattern 10 on this mesh, and determines the pixel value of each pixel U based on the relationship between the position of that pixel U and the positions of the contour lines of the figures F1 to F5 constituting the original figure pattern 10.

 FIG. 9 is a plan view showing a state in which the original image creation unit 121 has superimposed the original figure pattern 10 on a mesh consisting of a two-dimensional array of pixels U. In this example, a mesh is defined in which pixels U having a pixel dimension u both vertically and horizontally are arranged two-dimensionally, with a large number of pixels U arranged at a pitch u in both directions. The pixel dimension u is set to an appropriate value capable of representing the shapes of the figures F1 to F5 with sufficient resolution. The smaller the pixel dimension u, the higher the resolution of the shape representation, but the heavier the subsequent processing load. In general, since an original figure pattern 10 used for manufacturing semiconductor devices is an extremely fine pattern, it is preferable to set the pixel dimension u to a value of, for example, about u = 5 to 20 nm.

 Once the two-dimensional array of pixels U has been defined in this way, a pixel value is defined for each individual pixel U based on its relationship with the positions of the contour lines of the figures F1 to F5. There are several methods for defining the pixel values.

 The most basic definition method is to recognize, based on the original figure pattern 10, the interior region and exterior region of each of the figures F1 to F5, and to take the occupancy of the interior region within each pixel U as the pixel value of that pixel. In FIG. 8, the hatched areas are the interior regions of the figures F1 to F5, and the white area is the exterior region. Accordingly, with this method, in the superimposed state shown in FIG. 9, the pixel value of each pixel is defined as the occupancy (0 to 1) of the interior region (hatched area) within that pixel. An image whose pixel values are defined by such a method is generally called an "area density map".

 FIG. 10 shows an area density map M1 created based on the "original figure pattern 10 + two-dimensional pixel array" shown in FIG. 9. Here, each cell is one of the pixels defined in FIG. 9, and the number in each cell is the pixel value defined for that pixel. Blank cells are pixels having a pixel value of 0 (the value 0 is omitted from the figure). In this area density map M1, for example, a pixel with pixel value 1.0 is a pixel for which the occupancy of the hatched area in FIG. 9 is 100%, and a pixel with pixel value 0.5 is a pixel for which that occupancy is 50%. This area density map M1 is basically a binary image in which the interior of a figure is represented by pixel value 1 and the exterior by pixel value 0; however, since the pixels through which a figure's contour line passes are given a pixel value indicating the proportion of the interior region, the map as a whole is a monochrome gradation image.
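For axis-aligned rectangles such as F1 to F5, the interior-region occupancy of each pixel can be computed exactly as an area of overlap. The following is a minimal sketch of such an area density map computation, assuming non-overlapping rectangles given in the same coordinate system as the pixel mesh; it is not the apparatus's actual rasterization code.

```python
def area_density_map(rects, width, height, u=1.0):
    """Area density map: each pixel's value is the fraction of its area
    covered by the (axis-aligned, non-overlapping) rectangles.
    Each rectangle is (x0, y0, x1, y1) in mesh coordinates."""
    m = [[0.0] * width for _ in range(height)]
    for (x0, y0, x1, y1) in rects:
        for j in range(height):
            for i in range(width):
                # overlap of pixel cell [i*u,(i+1)*u] x [j*u,(j+1)*u]
                # with the rectangle
                dx = min(x1, (i + 1) * u) - max(x0, i * u)
                dy = min(y1, (j + 1) * u) - max(y0, j * u)
                if dx > 0 and dy > 0:
                    m[j][i] += (dx * dy) / (u * u)
    return m
```

A pixel fully inside a figure receives 1.0, a pixel wholly outside receives 0.0, and a pixel crossed by a contour line receives the fractional coverage, exactly as in the map M1 of FIG. 10.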

 As another method for defining the pixel values, it is also possible to recognize the contour lines of the figures F1 to F5 based on the original figure pattern 10 and to take the length of the contour line present within each pixel U as the pixel value of that pixel. With this method, in the superimposed state shown in FIG. 9, the pixel value of each pixel is defined as the total length of the contour lines present within that pixel. An image whose pixel values are defined by such a method is generally called an "edge length density map". As the unit of contour line length, for example, a unit in which the pixel dimension u equals 1 may be used.

 FIG. 11 shows an edge length density map M2 created based on the "original figure pattern 10 + two-dimensional pixel array" shown in FIG. 9. Again, each cell is one of the pixels defined in FIG. 9, and the number in each cell is the pixel value defined as the total length of the contour lines present within that pixel (taking the pixel dimension u as 1). Blank cells are pixels having a pixel value of 0 (the value 0 is omitted from the figure). In this edge length density map M2, for example, a pixel with pixel value 1.0 is a pixel in which the length of the contour line present within the pixel in FIG. 9 is u. Since this edge length density map M2 is basically a monochrome gradation image showing the density distribution of the contour lines, its character differs considerably from that of the area density map M1 described above. Nevertheless, it is a very useful image for extracting feature quantities for evaluation points E defined on contour lines, as in the present invention.

 The methods above define the pixel value of each pixel based on the original figure pattern 10 shown in FIG. 8. However, an original figure pattern 10 used in a lithography process may further include, in addition to the geometric information of the contour lines indicating the boundary between the interior and exterior of each figure, dose information for each figure. FIG. 12 is a plan view showing an original figure pattern 10 that includes such dose information. Like the original figure pattern 10 shown in FIG. 8, the original figure pattern 10 with doses shown in FIG. 12 contains the contour line information of the figures F1 to F5, but in addition it contains information defining a dose for each of the figures F1 to F5.

 Specifically, in the illustrated example, a dose of 100% is defined for the figures F1 to F3, a dose of 50% for the figure F4, and a dose of 10% for the figure F5. These doses indicate the intensity of the light or electron beam irradiated in the exposure step of the lithography process (including the case where the total energy is controlled by the number of exposures). In the illustrated example, the interior regions of the figures F1 to F3 are exposed by irradiating light or an electron beam at 100% intensity, whereas the interior region of the figure F4 is exposed at 50% intensity and that of the figure F5 at 10% intensity. By controlling the dose for each individual figure in the exposure step in this way, the dimensions of the actual figure pattern 20 formed on the actual substrate S can be adjusted even more finely.

 In cases where such dose control is performed in the exposure step of the lithography process, the dose must also be taken into account in the simulation, and therefore a method that considers the dose must be adopted when determining the pixel values of the original image. That is, when an original figure pattern 10 is given that includes contour line information indicating the boundary between the interior and exterior of each figure together with dose information for each figure in the lithography process, it suffices to recognize, based on that original figure pattern 10, the interior region and exterior region of each figure, to further recognize the dose for each figure, to obtain for each figure present within each pixel the "product of the occupancy of the interior region and the dose of that figure", and to take the sum of these products as the pixel value of that pixel.

 With this method, in the superimposed state shown in FIG. 9, the pixel value of each pixel is defined as the sum of the products of the occupancy (0 to 1) of the interior region (hatched area) of each particular figure within that pixel and the dose of that figure. An image whose pixel values are defined by such a method is generally called a "dose density map". This dose density map is also a monochrome gradation image as a whole.

 FIG. 13 shows a dose density map M3 created based on the original figure pattern 10 with doses shown in FIG. 12. Here, each cell is one of the pixels defined in FIG. 9, and the number in each cell is the pixel value defined for that pixel. Blank cells are pixels having a pixel value of 0 (the value 0 is omitted from the figure). Comparing this dose density map M3 with the area density map M1 shown in FIG. 10, the pixel values of the pixels where the figures F1 to F3, which were given a dose of 100%, are placed remain unchanged, whereas the pixel values of the pixels where the figure F4 (dose 50%) and the figure F5 (dose 10%) are placed are reduced by amounts corresponding to those doses. This reflects the phenomenon in the actual exposure step whereby the interiors of the figures F4 and F5 are irradiated with light or electron beams of reduced intensity.
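The "sum over figures of occupancy × dose" rule can be sketched by adding a dose weight to the same overlap computation used for the area density map; again this is an illustrative sketch for axis-aligned, non-overlapping figures, not the actual implementation.

```python
def dose_density_map(figures, width, height, u=1.0):
    """Dose density map: each pixel's value is the sum, over figures, of
    (interior-region occupancy within the pixel) x (that figure's dose).
    Each figure is (x0, y0, x1, y1, dose), axis-aligned, dose in 0..1."""
    m = [[0.0] * width for _ in range(height)]
    for (x0, y0, x1, y1, dose) in figures:
        for j in range(height):
            for i in range(width):
                dx = min(x1, (i + 1) * u) - max(x0, i * u)
                dy = min(y1, (j + 1) * u) - max(y0, j * u)
                if dx > 0 and dy > 0:
                    m[j][i] += dose * (dx * dy) / (u * u)
    return m
```

With all doses set to 1.0 this reduces to the area density map, which matches the observation that the pixels of the 100%-dose figures F1 to F3 are unchanged between M1 and M3.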

 Three types of image (collections of pixels U each having a predetermined pixel value) created based on the original figure pattern 10 have been described above: the area density map M1, the edge length density map M2, and the dose density map M3. In the original image creation process of step S32 in the flowchart of FIG. 7, any one of these three images may be created and used as the original image. FIG. 7 shows, as an example, a case in which the area density map M1 is used as the original image Q1. Of course, it is also possible to create the original image Q1 by a method other than the three described above.

 The original image Q1 created in step S32 is used as the first preparation image Q1 in the filter process of step S33. As will be described later, the filter process of step S33 is a process of applying an image processing filter to the k-th preparation image Qk to create the k-th hierarchical image Pk. The procedure of step S32 can therefore be described as a process of setting the parameter k (k = 1, 2, 3, ...) to its initial value k = 1 and creating the first preparation image Q1. This first preparation image Q1 is the original image first created based on the original figure pattern 10, and serves as the reference image used first in the image pyramid creation process described below.

 <2.2 Processing Procedure of the Image Pyramid Creation Unit 122>
 Next, the procedure of the image pyramid creation process performed by the image pyramid creation unit 122 will be described. The image pyramid creation unit 122 has the function of performing reduction processes that decrease the number of pixels, based on the original image Q1 created in step S32 of FIG. 7 (for example, the area density map M1 shown in FIG. 10), and performs an image pyramid creation process that creates an image pyramid consisting of a plurality of hierarchical images of different sizes. In the embodiment described here, this image pyramid creation process is executed by the procedure shown in steps S33 to S36 of the flowchart of FIG. 7.

 Here, the specific procedure of the image pyramid creation process will be described, taking as an example the case where the original figure pattern 10 shown in FIG. 8 is input in step S31 and the area density map M1 shown in FIG. 10 is created as the original image Q1 in step S32.

 First, in step S33, a filter process is executed that applies an image processing filter to the k-th preparation image Qk to create the k-th hierarchical image Pk. Specifically, this filter process is executed, for example, as a convolution operation using a Gaussian filter as the image processing filter. FIG. 14 is a plan view showing the procedure for creating the k-th hierarchical image Pk by applying a filter process using a Gaussian filter GF33 to the preparation image Qk. The k-th preparation image Qk shown in FIG. 14 is in fact the same as the area density map M1 shown in FIG. 10.

 That is, the area density map M1 shown in FIG. 10 is depicted for convenience as an 8 × 8 pixel array with pixel values of 0 omitted, whereas the preparation image Qk shown in FIG. 14 is depicted as a 10 × 10 pixel array with the pixel values of 0 included; the two are substantially the same image. In short, in FIG. 14, for the convenience of executing the filter process, pixels having a pixel value of 0 are simply placed around the area density map M1 consisting of the 8 × 8 pixel array shown in FIG. 10, forming a 10 × 10 pixel array. At this stage, the parameter k has its initial value 1, and the preparation image Qk shown in FIG. 14 is the first preparation image Q1. As stated above, this first preparation image Q1 is none other than the original image created by the original image creation process of step S32.

 In the filter process shown in FIG. 14, a convolution operation using the Gaussian filter GF33 is executed. The Gaussian filter GF33 is a 3 × 3 pixel array as illustrated, and the k-th hierarchical image Pk (the filtered image) is obtained by superimposing this Gaussian filter GF33 at each predetermined position of the k-th preparation image Qk and performing a product-sum operation. FIG. 15 is a plan view showing the k-th hierarchical image Pk obtained by the filter process shown in FIG. 14. Like the k-th preparation image Qk, this k-th hierarchical image Pk consists of a 10 × 10 pixel array, and the pixel value of each pixel is the value obtained by the product-sum operation using the Gaussian filter GF33.

 For example, in the k-th hierarchical image Pk shown in FIG. 15, consider the pixel enclosed in a thick frame (the pixel in the fourth row, third column); this pixel of interest is given the pixel value 0.375. This pixel value is obtained by superimposing the illustrated Gaussian filter GF33 on the 3 × 3 pixel array enclosed in a thick frame in FIG. 14 (the nine pixels centered on the pixel in the fourth row, third column), taking for each of the nine positions the product of the pixel values superimposed at the same position, and summing the nine products. Specifically, the pixel value 0.375 of the pixel of interest is obtained as the product-sum value "(1/16 × 0) + (2/16 × 0.25) + (1/16 × 0.5) + (2/16 × 0) + (4/16 × 0.5) + (2/16 × 1.0) + (1/16 × 0) + (2/16 × 0.25) + (1/16 × 0.5)". Since such product-sum filter processing is generally known as a convolution operation on an image, a detailed description is omitted here.
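The product-sum above can be reproduced directly in code; the neighbourhood values below are the nine factors quoted in the worked example (zeros where the corresponding pixels of FIG. 14 hold pixel value 0).

```python
def convolve3x3(neighborhood, kernel):
    """Product-sum of a 3x3 neighborhood with a 3x3 kernel, i.e. one
    output pixel of the convolution described for FIGS. 14 and 15."""
    return sum(kernel[r][c] * neighborhood[r][c]
               for r in range(3) for c in range(3))

GF33 = [[1/16, 2/16, 1/16],
        [2/16, 4/16, 2/16],
        [1/16, 2/16, 1/16]]          # Gaussian filter of FIG. 16(a)

# 3x3 neighborhood around the pixel of interest (4th row, 3rd column)
nb = [[0.0, 0.25, 0.5],
      [0.0, 0.5,  1.0],
      [0.0, 0.25, 0.5]]

print(convolve3x3(nb, GF33))         # → 0.375
```

Sliding this product-sum across every position of the zero-padded 10 × 10 preparation image yields the full hierarchical image Pk of FIG. 15.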

 In the filter process shown in FIG. 14, the convolution operation uses a Gaussian filter GF33 consisting of a 3 × 3 pixel array as shown in FIG. 16(a) as the image processing filter; however, a convolution operation using a Laplacian filter LF33 consisting of a 3 × 3 pixel array as shown in FIG. 16(b) may be performed instead. In general, it is known that filtering with a Gaussian filter has the effect of blurring the contours of an image, while filtering with a Laplacian filter has the effect of enhancing them. Whichever filter process is adopted, a k-th hierarchical image Pk whose characteristics differ somewhat from those of the k-th preparation image Qk can be obtained, which is effective for creating an image pyramid consisting of a plurality of hierarchical images each having different characteristics.

 Of course, the image processing filter used in the filter process of step S33 is not limited to the Gaussian filter GF33 shown in FIG. 16(a) or the Laplacian filter LF33 shown in FIG. 16(b); various other image processing filters can also be used. Likewise, the size of the image processing filter is not limited to a 3 × 3 pixel array, and an image processing filter of any size can be used. For example, a Gaussian filter GF55 consisting of a 5 × 5 pixel array as shown in FIG. 17(a) or a Laplacian filter LF55 consisting of a 5 × 5 pixel array as shown in FIG. 17(b) may be used.

 When the filter process of step S33 in the procedure of FIG. 7 is thus completed, it is determined in step S34 whether the parameter k has reached a predetermined set value n; if k < n, the reduction process of step S35 is executed. This reduction process is a process of creating, from a given target image, an image having fewer pixels than that target image. In the embodiment shown here, the (k+1)-th preparation image Q(k+1) is created by applying the reduction process to the k-th hierarchical image Pk created by the filter process of step S33. The preparation image Q(k+1) is therefore an image smaller in size than the hierarchical image Pk (an image whose pixel array has fewer pixels both vertically and horizontally).
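The control flow of steps S33 to S36 can be sketched as a simple loop, in which `filter_fn` and `reduce_fn` stand in for the filter process and the pooling reduction described in the text; this is a schematic sketch of the loop structure, not the apparatus's code.

```python
def build_pyramid(original_image, n, filter_fn, reduce_fn):
    """Pyramid loop of steps S33-S36: filter the k-th preparation image
    into the k-th hierarchical image, then shrink it to obtain the
    (k+1)-th preparation image, until n levels exist."""
    layers = []
    q = original_image            # Q1 = original image (step S32)
    for k in range(1, n + 1):
        p = filter_fn(q)          # step S33: Pk = filter(Qk)
        layers.append(p)
        if k < n:                 # step S34: stop once k reaches n
            q = reduce_fn(p)      # step S35: Q(k+1) = reduce(Pk)
    return layers                 # the n hierarchical images P1 ... Pn
```

Because each reduction shrinks the image before the next filter pass, the same filter kernel effectively covers a wider region of the original pattern at each successive level, which is what gives the pyramid its multi-scale character.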

 Such a reduction process on an image is also called a "pooling process", and as the reduction process performed in step S35, for example, "average pooling" can be adopted. FIG. 18 is a plan view showing the procedure for creating the (k+1)-th preparation image Q(k+1) as a reduced image by applying average pooling to the k-th hierarchical image Pk. Specifically, by applying average pooling (the reduction process) to the hierarchical image Pk consisting of the 4 × 4 pixel array shown in FIG. 18(a), the preparation image Q(k+1) consisting of the 2 × 2 pixel array shown in FIG. 18(b) is created as the reduced image.

 The average pooling shown in FIG. 18 is a process that converts (reduces) four pixels forming a 2 × 2 pixel array into a single pixel; the reduced image is created by taking the average of the pixel values of the original four pixels as the pixel value of the single pixel after conversion. For example, the four pixels forming the 2 × 2 pixel array at the upper left of the hierarchical image Pk shown in FIG. 18(a) (the pixels within the thick frame) are converted (reduced) into the single pixel indicated by the thick frame on the preparation image Q(k+1) in FIG. 18(b). The pixel value 0.5 of this thick-framed pixel after conversion (reduction) is the average of the pixel values of the original four pixels.

 FIG. 19, on the other hand, is a plan view showing the procedure for creating the (k+1)-th preparation image Q(k+1) as a reduced image by applying a max pooling process to the k-th hierarchical image Pk. Specifically, by applying the max pooling process (reduction process) to the hierarchical image Pk consisting of the 4×4 pixel array shown in FIG. 19(a), the preparation image Q(k+1) consisting of the 2×2 pixel array shown in FIG. 19(b) is created as a reduced image.

 The max pooling process shown in FIG. 19, like the average pooling process shown in FIG. 18, converts (reduces) each group of four pixels forming a 2×2 pixel array into a single pixel, but a reduced image is created by taking the maximum of the pixel values of the original four pixels as the pixel value of the single converted pixel. For example, the four pixels forming the 2×2 pixel array at the upper left of the hierarchical image Pk shown in FIG. 19(a) (the pixels within the thick frame) are converted (reduced) into the single pixel indicated by the thick frame in the preparation image Q(k+1) of FIG. 19(b). The pixel value 1.0 of this thick-frame pixel after conversion (reduction) is the maximum of the pixel values of the original four pixels.
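Max pooling differs from average pooling only in the reduction applied to each block. A minimal sketch (again with hypothetical pixel values, not those of FIG. 19):

```python
import numpy as np

def max_pool_2x2(image):
    """Reduce a 2D image by taking the maximum of each non-overlapping 2x2 block."""
    h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0, "image sides must be even"
    return image.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

Pk = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.5, 0.5, 0.0, 0.0],
               [0.5, 1.0, 1.0, 0.5],
               [0.0, 0.5, 0.5, 0.0]])
print(max_pool_2x2(Pk)[0, 0])  # 1.0: the maximum of the upper-left 2x2 block
```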

 Each of the pooling processes shown in FIGS. 18 and 19 is a reduction process that converts four pixels forming a 2×2 pixel array into a single pixel. It is of course also possible to perform a reduction process that converts nine pixels forming a 3×3 pixel array into a single pixel, or one that converts six pixels forming a 3×2 pixel array into a single pixel.

 In short, as the reduction process of step S35, the image pyramid creation unit 122 can create a reduced image either by executing an average pooling process, which replaces a group of m adjacent pixels with a single pixel whose pixel value is the average of the pixel values of those m adjacent pixels, or by executing a max pooling process, which replaces a group of m adjacent pixels with a single pixel whose pixel value is the maximum of the pixel values of those m adjacent pixels. Other reduction processes can of course also be used in step S35. In essence, any reduction process may be executed in step S35 as long as it can create a smaller preparation image Q(k+1) by applying to the hierarchical image Pk a conversion that decreases the number of pixels — in other words, as long as it creates a “reduced image with a smaller number of pixels”.

 When the reduction process of step S35 is thus completed, the parameter k is incremented by 1 in step S36, and the filtering process of step S33 is executed again. In the end, as described above, the first hierarchical image P1 is created by filtering the first preparation image Q1 (the original image) in step S33 with k = 1; the second preparation image Q2 is then created by applying the reduction process to this hierarchical image P1 in step S35; k is updated to 2 in step S36; and the second hierarchical image P2 is created by again filtering the second preparation image Q2 in step S33.

 This iterative procedure is repeated until it is determined in step S34 that k = n. Here, an appropriate value may be set in advance for n as the number of layers of the image pyramid (that is, the total number of hierarchical images constituting the image pyramid). The larger n is set, the larger the number n of feature quantities extracted for a single evaluation point E, so a more accurate simulation becomes possible, but the computational burden increases. Moreover, since the image becomes progressively smaller each time the reduction process of step S35 is repeated, the reduction process of step S35 can no longer be performed if n is set too large. In practice, therefore, n should be set appropriately in consideration of the size of the original image Q1 and the computational burden.

 When it is determined in step S34 that k = n, the processing by the image pyramid creation unit 122 is complete. At this point, as shown in FIG. 6, an image pyramid PP consisting of n hierarchical images P1 to Pn has been created. The procedure therefore proceeds from step S34 to step S37, and the feature quantity calculation process is executed.

 FIG. 20 is a plan view showing the procedure (steps S33 to S36 in FIG. 7) by which the image pyramid creation unit 122 creates the image pyramid PP consisting of the n hierarchical images P1 to Pn. The first preparation image Q1 shown at the upper left of the figure is the original image created with k = 1 in the original image creation process of step S32; a pixel value, such as that of the area density map M1 shown in FIG. 10, is defined for each pixel. As described above, in step S33 the filtering process is applied to this first preparation image Q1. Specifically, the first hierarchical image P1 shown at the upper right of FIG. 20 is created by, for example, a convolution operation using a Gaussian filter GF33 consisting of a 3×3 pixel array. The size of this first hierarchical image P1 is the same as that of the first preparation image Q1.
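Although the specific coefficients of the Gaussian filter GF33 are not given in the text, the filtering step can be sketched with a commonly used 3×3 Gaussian kernel. The weights below and the zero padding at the image edges are assumptions for illustration, not details taken from the embodiment:

```python
import numpy as np

# Assumed 3x3 Gaussian kernel; the actual weights of GF33 may differ.
GF33 = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float) / 16.0

def gaussian_filter_3x3(image):
    """Convolve a 2D image with the 3x3 kernel GF33. Edges are zero-padded
    so the output has the same size as the input, just as the hierarchical
    image P1 has the same size as the preparation image Q1."""
    padded = np.pad(image, 1)          # pad one pixel of zeros on every side
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * GF33)
    return out

Q1 = np.zeros((16, 16))
Q1[4:12, 4:12] = 1.0                   # hypothetical area density map
P1 = gaussian_filter_3x3(Q1)
print(P1.shape)  # (16, 16): same size as Q1
```

Because the kernel is symmetric, this correlation-style loop is equivalent to a true convolution.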

 Subsequently, in the reduction process of step S35, a reduction process (for example, average pooling) is applied to the first hierarchical image P1, and the second preparation image Q2 shown at the middle left of FIG. 20 is created. The size of this second preparation image Q2 is smaller than that of the first hierarchical image P1. In step S36, the value of the parameter k is then updated to 2, and the filtering process of step S33 is executed again. That is, the second hierarchical image P2 shown at the middle right of FIG. 20 is created by a convolution operation using the Gaussian filter GF33 consisting of a 3×3 pixel array. The size of this second hierarchical image P2 is the same as that of the second preparation image Q2.

 The reduction process of step S35 is then executed again. That is, a reduction process (for example, average pooling) is applied to the second hierarchical image P2, and the third preparation image Q3 shown at the lower left of FIG. 20 is created. The size of this third preparation image Q3 is smaller than that of the second hierarchical image P2. In step S36, the value of the parameter k is then updated to 3, and the filtering process of step S33 is executed again. That is, the third hierarchical image P3 shown at the lower right of FIG. 20 is created by a convolution operation using the Gaussian filter GF33 consisting of a 3×3 pixel array. The size of this third hierarchical image P3 is the same as that of the third preparation image Q3.

 This processing is repeated until the parameter k = n, finally yielding the n-th preparation image Qn and the n-th hierarchical image Pn. In this way, the image pyramid PP is constituted by n hierarchical images of different sizes, from the first hierarchical image P1 to the n-th hierarchical image Pn.

 In the end, in the embodiment shown in the procedure of FIG. 7, the image pyramid creation unit 122 has a function of applying a filtering process using a predetermined image processing filter to the original image Q1 or to the reduced image Q(k+1), and it creates the image pyramid PP consisting of the plurality of hierarchical images P1 to Pn by executing this filtering process and the reduction process alternately.

 More specifically, the image pyramid creation unit 122 takes the original image created by the original image creation unit 121 as the first preparation image Q1, takes the image obtained by filtering the k-th preparation image Qk (where k is a natural number) as the k-th hierarchical image Pk, and takes the image obtained by applying the reduction process to the k-th hierarchical image Pk as the (k+1)-th preparation image Q(k+1); by executing the filtering process and the reduction process alternately until the n-th hierarchical image Pn is obtained, it creates the image pyramid PP consisting of the n hierarchical images from the first hierarchical image P1 to the n-th hierarchical image Pn.
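The alternation of filtering and reduction can be written as a simple loop. The sketch below is self-contained; the 3×3 Gaussian kernel (with zero padding) and the 2×2 average pooling used for the two steps are illustrative assumptions, since the embodiment allows other filters and reduction processes:

```python
import numpy as np

def _filter(img):
    """Assumed 3x3 Gaussian smoothing (zero-padded, same-size output)."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    p = np.pad(img, 1)
    return np.array([[np.sum(p[i:i+3, j:j+3] * k)
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def _reduce(img):
    """Assumed 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(original, n):
    """Alternate filtering and reduction: Pk = filter(Qk), Q(k+1) = reduce(Pk)."""
    Q = original            # first preparation image Q1
    pyramid = []
    for k in range(1, n + 1):
        P = _filter(Q)      # k-th hierarchical image Pk (step S33)
        pyramid.append(P)
        if k < n:           # step S34: stop once k = n
            Q = _reduce(P)  # (k+1)-th preparation image (step S35)
    return pyramid

# A 16x16 original image yields hierarchical images of 16, 8, and 4 pixels per side.
PP = build_pyramid(np.random.rand(16, 16), n=3)
print([p.shape for p in PP])  # [(16, 16), (8, 8), (4, 4)]
```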

 Since the image pyramid PP in the present invention need only be constituted by a plurality of hierarchical images of different sizes, the reduction process of step S35 is an essential process in the procedure shown in the flowchart of FIG. 7, but the filtering process of step S33 is not necessarily required. When the filtering process is performed, however, the pixel value of each individual pixel can be influenced by the pixel values of the surrounding pixels. Adding the filtering process therefore makes it possible to create a plurality of richly varied hierarchical images and to extract feature quantities containing more diverse information, with the result that a more accurate simulation becomes possible. In practice, therefore, it is preferable to execute the reduction process and the filtering process alternately, as in the procedure shown in the flowchart of FIG. 7.

 <2.3 Processing Procedure by the Feature Quantity Calculation Unit 123>
 Next, the procedure of the feature quantity calculation process performed by the feature quantity calculation unit 123 will be described. As shown in step S37 of FIG. 7, the feature quantity calculation unit 123 calculates the feature quantities x1 to xn for each evaluation point E on the basis of the hierarchical images P1 to Pn constituting the image pyramid PP. Here, the procedure for calculating these feature quantities x1 to xn will be described concretely.

 FIG. 21 is a plan view showing the procedure by which the feature quantity calculation unit 123 calculates the feature quantities x1 to xn for a specific evaluation point E from the hierarchical images P1 to Pn. Specifically, the left side of each of FIGS. 21(a) to (c) shows the rectangle (drawn with a thick frame) constituting the original figure pattern 10 superimposed on the pixel array of the first hierarchical image P1, the second hierarchical image P2, and the third hierarchical image P3, respectively, and the right side of each of FIGS. 21(a) to (c) shows the principle of calculating the feature quantities x1, x2, and x3 for the specific evaluation point E on the basis of the hierarchical images P1, P2, and P3.

 The hierarchical images P1, P2, and P3 shown in FIG. 21 are parts of the images constituting the respective layers of the image pyramid PP. In practice, n hierarchical images P1 to Pn are prepared and n sets of feature quantities x1 to xn are extracted, but for convenience of explanation FIG. 21 shows how three sets of feature quantities x1, x2, and x3 are extracted from the three hierarchical images P1, P2, and P3.

 Here, the first hierarchical image P1 is an image obtained by applying the filtering process to the original image Q1 (the first preparation image), and has a 16×16 pixel array in the illustrated example. The second hierarchical image P2, in contrast, is an image obtained by applying the reduction process and the filtering process to the first hierarchical image P1, and has an 8×8 pixel array in the illustrated example. The third hierarchical image P3 is an image obtained by applying the reduction process and the filtering process to the second hierarchical image P2, and has a 4×4 pixel array in the illustrated example.

 In FIG. 21, the hierarchical images P1, P2, and P3 are drawn so that their outlines form squares of the same size, so all appear to be images of the same size; as pixel arrays, however, they shrink progressively from 16×16 to 8×8 to 4×4, and the image size decreases accordingly. Because the outer frames of the hierarchical images P1, P2, and P3 are drawn as squares of the same size in FIG. 21, the individual pixels become progressively larger. In other words, the resolution of the image decreases in the order of the hierarchical images P1, P2, P3, yielding progressively coarser images.

 As described above, the rectangle constituting the original figure pattern 10 is drawn with a thick frame in the figure. Since the hierarchical images P1, P2, and P3 are all raster images consisting of collections of pixels, the rectangular outline drawn with a thick frame is not actually contained in them as information about the outline itself, but as information in the pixel values of the individual pixels. In FIG. 21, however, the positions of the rectangle on the hierarchical images P1, P2, and P3 are indicated by thick lines for convenience of explanation. Here, the process of extracting the feature quantities x1 to xn for a specific evaluation point E defined on this rectangular outline will be described.

 As can be seen from FIGS. 21(a) to (c), the rectangle indicated by the thick frame is placed at the same relative position in each of the hierarchical images P1, P2, and P3, and the specific evaluation point E is also placed at the same relative position. In the embodiment shown here, the feature quantity for a single evaluation point E is calculated on the basis of the pixel values of the pixels in its vicinity.

 First, as shown in FIG. 21(a), the feature quantity x1 for the evaluation point E is extracted on the basis of the first hierarchical image P1. Specifically, as shown on the right side of FIG. 21(a), the feature quantity calculation unit 123 extracts, from the pixels constituting the first hierarchical image P1, the four pixels located in the vicinity of the evaluation point E (the hatched pixels in the figure) as pixels of interest, and calculates the feature quantity x1 by an operation using the pixel values of these four pixels of interest. Similarly, as shown on the right side of FIG. 21(b), it extracts the four pixels located in the vicinity of the evaluation point E (the hatched pixels in the figure) from the pixels constituting the second hierarchical image P2 as pixels of interest, and calculates the feature quantity x2 by an operation using the pixel values of these four pixels of interest. Likewise, as shown on the right side of FIG. 21(c), it extracts the four pixels located in the vicinity of the evaluation point E (the hatched pixels in the figure) from the pixels constituting the third hierarchical image P3 as pixels of interest, and calculates the feature quantity x3 by an operation using the pixel values of these four pixels of interest.

 If this processing is also performed for the fourth hierarchical image P4 to the n-th hierarchical image Pn, n sets of feature quantities x1 to xn can be extracted for the specific evaluation point E. All of these n sets of feature quantities x1 to xn are parameters indicating the surrounding features of the same evaluation point E on the original figure pattern 10, but they differ from one another in the range of the original figure pattern 10 whose influence they reflect. For example, the feature quantity x1 extracted from the first hierarchical image P1 is a value indicating the features within the narrow hatched region in the right-hand diagram of FIG. 21(a), whereas the feature quantity x2 extracted from the second hierarchical image P2 is a value indicating the features within the wider hatched region in the right-hand diagram of FIG. 21(b), and the feature quantity x3 extracted from the third hierarchical image P3 is a value indicating the features within the still wider hatched region in the right-hand diagram of FIG. 21(c).

 As described above, the value of the process bias y for a given evaluation point E is determined by the fusion of phenomena acting at various scales, such as forward scattering and back scattering. Therefore, if diverse feature quantities x1 to xn — from the feature quantity x1 concerning the very narrow surrounding range up to the feature quantity xn concerning a wider range — are extracted as feature quantities for the same evaluation point E, an accurate simulation can be performed that takes into account various phenomena with different ranges of influence. FIG. 21 shows the process of extracting n sets of feature quantities x1 to xn for a single evaluation point E, but in practice, n sets of feature quantities x1 to xn are extracted by the same procedure for each of the many evaluation points defined on the original figure pattern 10.

 As a method of calculating the feature quantity x on the basis of the pixel values of the pixels of interest in the vicinity of the evaluation point E, the simple approach of taking the simple average of the pixel values of the pixels of interest as the feature quantity x can be adopted. For example, as shown in FIG. 21(a), to extract the feature quantity x1 for the evaluation point E from the first hierarchical image P1, the simple average of the pixel values of the hatched pixels of interest (the four pixels in the vicinity of the evaluation point E) may be taken as the feature quantity x1. To calculate a more accurate feature quantity, however, it is preferable to obtain a weighted average that takes into account weights corresponding to the distances between the evaluation point E and the respective pixels of interest, and to take this weighted-average value as the feature quantity x1.

 FIG. 22 is a diagram showing the specific operation used in the feature quantity calculation procedure of FIG. 21 (an operation that takes a weighted-average value as the feature quantity). Here, an example is shown in which four pixels A, B, C, and D are selected as the pixels of interest located in the vicinity of the specific evaluation point E. Specifically, the pixels of interest A, B, C, and D can be determined by selecting a total of four pixels on the hierarchical image P to be processed, in order of proximity to the evaluation point E. An operation may then be performed that takes, as the feature quantity x, a weighted average of the pixel values of these four pixels of interest A, B, C, and D, with weights corresponding to the distances between the evaluation point E and the respective pixels.

 In FIG. 22(a), an × mark is displayed at the center point of each of the pixels of interest A, B, C, and D, and broken lines connecting these × marks are drawn. The pixel dimension of each of the pixels of interest A, B, C, and D is u both vertically and horizontally, and the broken lines are dividing lines that split each pixel of dimension u in half. In the embodiment shown here, the horizontal distance and the vertical distance between the evaluation point E and the center point of each of the pixels of interest A, B, C, and D are adopted as the distances between the evaluation point E and the respective pixels of interest. Specifically, in the example shown in FIG. 22(a), the pixel of interest A has horizontal distance a and vertical distance c; the pixel of interest B has horizontal distance b and vertical distance c; the pixel of interest C has horizontal distance a and vertical distance d; and the pixel of interest D has horizontal distance b and vertical distance d.

 In this case, if the pixel values of the pixels of interest A, B, C, and D are denoted by the same symbols A, B, C, and D, and the pixel dimension u (pixel pitch) is taken as 1, the feature quantity x of the evaluation point E can be obtained, as shown in FIG. 22(b), by the operations
    G = (A·b + B·a)/2
    H = (C·b + D·a)/2
    x = (G·d + H·c)/2.
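The weighted-average operations of FIG. 22(b) can be transcribed directly into code. The pixel values and distances in the example call are hypothetical:

```python
def feature_from_four_pixels(A, B, C, D, a, b, c, d):
    """Weighted average of the four pixels of interest around an evaluation
    point E, following the operations of FIG. 22(b). The pixel pitch u is
    taken as 1; a and b are horizontal distances, c and d vertical distances."""
    G = (A * b + B * a) / 2   # interpolate along the upper pixel pair A, B
    H = (C * b + D * a) / 2   # interpolate along the lower pixel pair C, D
    x = (G * d + H * c) / 2   # interpolate between the two intermediate values
    return x

# With E equidistant from all four pixel centers (a = b = c = d = 0.5),
# each pixel value contributes with the same weight:
print(feature_from_four_pixels(1.0, 0.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5))  # 0.125
```

Note that each of G, H, and x weights a value by the distance to the *other* pixel, so pixels closer to E receive larger weights, as the text describes.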

 Of course, the method of calculating the feature quantity x from the pixel values of the four pixels of interest A, B, C, and D is not limited to the method illustrated in FIG. 22; various other calculation methods can be adopted as long as they can calculate a feature quantity x that reflects the pixel values of the pixels in the vicinity of the evaluation point E. Moreover, although the calculation methods illustrated in FIGS. 21 and 22 select four pixels located in the vicinity of the evaluation point E as the pixels of interest, the number of pixels of interest used to calculate the feature quantity x is not limited to four. For example, it is also possible to select as pixels of interest the nine pixels forming a 3×3 pixel array located in the vicinity of the evaluation point E, obtain a weighted average of the pixel values of these nine pixels of interest with weights corresponding to their distances from the evaluation point E, and take this as the feature quantity x for the evaluation point E.

 In general terms, when calculating the feature quantity x for a specific evaluation point E on a specific hierarchical image P, the feature quantity calculation unit 123 can extract, from the pixels constituting that hierarchical image P, a total of j pixels as pixels of interest in order of proximity to the evaluation point E, perform an operation that obtains a weighted average of the pixel values of the extracted j pixels of interest with weights corresponding to the distances between the evaluation point E and the respective pixels of interest, and take the resulting weighted-average value as the feature quantity x.

 <2.4 Modifications of the Feature Quantity Extraction Process>
 Here, several modifications of the feature quantity extraction procedure described so far will be presented.

 (1) Modification Using the Difference Image Dk as the Hierarchical Image
 In §2.2, the processing of steps S33 to S36 shown in the flowchart of FIG. 7 was described as an example of the procedure by which the image pyramid creation unit 122 creates the image pyramid PP. Starting from the original image Q1 (the first preparation image), this processing executes the filtering process and the reduction process alternately and, by the procedure shown in FIG. 20, adopts the n images created by the filtering process (hereinafter called filtered images) as the hierarchical images P1 to Pn as they are, thereby creating the image pyramid PP.

 The modification described here, in contrast, is based on the processing of steps S33 to S36 shown in the flowchart of FIG. 7, but each time the filtering process of step S33 is completed, a process is added that obtains the k-th difference image Dk by performing the difference operation “Pk − Qk”, which subtracts the k-th preparation image Qk from the k-th filtered image Pk (the image that was called the k-th hierarchical image Pk in §2.2). In other words, in this modification, the processing of steps S33 to S36 shown in the flowchart of FIG. 7 is executed as it is, but the difference operation “Pk − Qk”, subtracting the k-th preparation image Qk from the k-th filtered image Pk, is additionally performed.

 Here, the difference operation “Pk − Qk” is a process that, for the k-th filtered image Pk and the k-th preparation image Qk, defines pixels placed at the same position in the pixel array as corresponding pixels, takes the difference by subtracting the pixel value of the corresponding pixel in the image Qk from the pixel value of each pixel in the image Pk, and obtains a difference image Dk consisting of a new collection of pixels whose pixel values are the resulting differences.

 FIG. 23 is a plan view showing the procedure for creating an image pyramid PD consisting of n difference images D1 to Dn by means of this difference operation “Pk − Qk”. Here, the first hierarchical image D1 shown at the upper right is the difference image obtained by the difference operation “P1 − Q1”; specifically, it is calculated by the difference operation (subtraction of the pixel values of pixels at corresponding positions) that subtracts the first preparation image Q1 shown at the upper left of FIG. 20 from the first filtered image P1 shown at the upper right of FIG. 20 (called the first hierarchical image P1 in FIG. 20).

 Similarly, the second hierarchical image D2 shown on the middle right of FIG. 23 is the difference image obtained by the difference operation "P2−Q2"; specifically, it is calculated by subtracting the second preparation image Q2 shown on the middle left of FIG. 20 from the second filtered image P2 shown on the middle right of FIG. 20 (called the second hierarchical image P2 in FIG. 20). Likewise, the third hierarchical image D3 shown on the lower right of FIG. 23 is the difference image obtained by the difference operation "P3−Q3"; specifically, it is calculated by subtracting the third preparation image Q3 shown on the lower left of FIG. 20 from the third filtered image P3 shown on the lower right of FIG. 20 (called the third hierarchical image P3 in FIG. 20). The same difference operations are performed thereafter, and finally the difference image obtained by the difference operation "Pn−Qn" becomes the n-th hierarchical image Dn.

 In the embodiment described in §2.2, as shown in FIG. 20, the image pyramid PP was composed of the first hierarchical image (first filtered image) P1 through the n-th hierarchical image (n-th filtered image) Pn. In the modification described here, by contrast, as shown in FIG. 23, the image pyramid PD is composed of the first hierarchical image (first difference image) D1 through the n-th hierarchical image (n-th difference image) Dn.

 As stated above, this modification can be implemented simply by adding the difference-operation step to the procedure of the embodiment described in §2.2. Specifically, the image pyramid creation unit 122 takes the original image created by the original image creation unit 121 as the first preparation image Q1; obtains the difference image Dk between the filtered image Pk, obtained by filtering the k-th preparation image Qk (where k is a natural number), and the k-th preparation image Qk, and takes this difference image Dk as the k-th hierarchical image Dk; takes the image obtained by reducing the k-th filtered image Pk as the (k+1)-th preparation image Q(k+1); and executes the filtering and the reduction alternately until the n-th hierarchical image Dn is obtained, thereby creating an image pyramid PD composed of n hierarchical images, from the first hierarchical image D1 to the n-th hierarchical image Dn.
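
 The loop just described (Q1 = original; at each level, Pk = filter(Qk), Dk = Pk − Qk, Q(k+1) = reduce(Pk)) can be sketched as follows. This is an illustration only: one-dimensional signals stand in for images for brevity, and the three-tap moving-average filter and keep-every-other-sample reduction are assumed stand-ins for the filtering of step S33 and the reduction of step S35:

```python
def smooth(q):
    """Stand-in filter: 3-tap moving average with clamped ends (1-D for brevity)."""
    n = len(q)
    return [(q[max(i - 1, 0)] + q[i] + q[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def reduce_half(p):
    """Stand-in reduction (pooling): keep every other sample."""
    return p[::2]

def build_difference_pyramid(original, n_levels):
    """Build the image pyramid PD of difference images D1..Dn.

    Q1 is the original; at each level k:
      Pk = filter(Qk),  Dk = Pk - Qk,  Q(k+1) = reduce(Pk).
    """
    pyramid = []
    q = list(original)                            # first preparation image Q1
    for _ in range(n_levels):
        p = smooth(q)                             # filtered image Pk
        d = [pi - qi for pi, qi in zip(p, q)]     # difference image Dk
        pyramid.append(d)
        q = reduce_half(p)                        # next preparation image Q(k+1)
    return pyramid

PD = build_difference_pyramid([0, 0, 9, 0, 0, 0, 0, 0], 3)
```

 Each level of PD holds the change produced by the filtering at that scale, which is why a pyramid built this way behaves like a Laplacian pyramid, as noted later in the text.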

 In short, in this modification the k-th hierarchical image Dk, which forms the k-th level of the image pyramid PD, is the difference image between the image after filtering (the filtered image Pk) and the image before filtering (the preparation image Qk), and the pixel value of each pixel corresponds to the difference between the pixel values before and after the filtering. That is, whereas the k-th hierarchical image Pk in the embodiment of §2.2 represents the filtered image itself, the k-th hierarchical image Dk in this modification represents the change produced by the filtering. Thus, although the hierarchical images making up the image pyramid PP created in the embodiment of §2.2 and those making up the image pyramid PD created in this modification differ greatly in meaning, both remain images that express some feature concerning the evaluation point E. Accordingly, feature amounts can also be extracted from the hierarchical images D1 to Dn created in this modification, and the feature amount calculation unit 123 in this modification performs the process of extracting the feature amounts x1 to xn from the hierarchical images D1 to Dn.

 (2) Modification that creates a plurality of image pyramids by a plurality of algorithms

 In the embodiment described in §2.2, as shown in FIG. 20, the filtered images P1 to Pn are obtained by alternately applying filtering and reduction to the original image (the first preparation image Q1), and the image pyramid PP is created by an algorithm that takes these filtered images P1 to Pn, as they are, as the n hierarchical images P1 to Pn. By contrast, in the modification of §2.4(1) in which the difference images Dk serve as the hierarchical images, as shown in FIG. 23, the difference images D1 to Dn are obtained by performing the difference operation between the filtered image Pk and the preparation image Qk, and the image pyramid PD is created by an algorithm that takes these difference images D1 to Dn as the n hierarchical images D1 to Dn.

 Furthermore, there are various kinds of image filters that can be used for the filtering, as shown in FIGS. 16 and 17, and various kinds of reduction (pooling) methods, as shown in FIGS. 18 and 19. Thus, even when the starting point is the same original image (the first preparation image Q1), the content of the hierarchical images making up the resulting image pyramid differs depending on the algorithm employed. Moreover, in practicing the present invention the image pyramid used need not be a single one; a plurality of image pyramids may be created by a plurality of algorithms, and feature amounts may be extracted from each individual image pyramid.

 That is, the image pyramid creation unit 122 may be given a function of performing, for one original image (the first preparation image Q1), image pyramid creation processing based on a plurality of mutually different algorithms, so that a plurality of image pyramids are created. In this case, the feature amount calculation unit 123 may perform, for each hierarchical image making up each of the plurality of image pyramids, the process of calculating a feature amount based on the pixel values of the pixels corresponding to the position of the evaluation point (the pixels located around the evaluation point).

 For example, when the image pyramid creation unit 122 performs the image pyramid creation processing, adopting the algorithm of the embodiment of §2.2 as the main algorithm makes it possible to create, as shown in FIG. 20, a main image pyramid PP composed of the n main hierarchical images P1 to Pn (filtered images), while adopting the algorithm of the modification of §2.4(1), in which the difference images Dk serve as the hierarchical images, as the sub-algorithm makes it possible to create, as shown in FIG. 23, a sub-image pyramid PD composed of the n sub-hierarchical images D1 to Dn (difference images). Accordingly, if the image pyramid creation unit 122 performs the image pyramid creation processing using these two algorithms, two image pyramids, namely the main image pyramid PP and the sub-image pyramid PD, can be created.

 Here, the main image pyramid PP created by filtering with a Gaussian filter such as that illustrated in FIG. 14 can be called a Gaussian pyramid, and the sub-image pyramid PD constructed from the difference images can be called a Laplacian pyramid. Since a Gaussian pyramid and a Laplacian pyramid are image pyramids of markedly different character, adopting them as the main image pyramid PP and the sub-image pyramid PD and extracting feature amounts using the two image pyramids makes it possible to extract feature amounts with greater diversity.

 Meanwhile, if the feature amount calculation unit 123 performs the process of calculating feature amounts based on the pixel values of the pixels in the vicinity of the evaluation point E for each of the main hierarchical images P1 to Pn making up the main image pyramid PP and each of the sub-hierarchical images D1 to Dn making up the sub-image pyramid PD, the feature amounts xp1 to xpn calculated from the main hierarchical images P1 to Pn and the feature amounts xd1 to xdn calculated from the sub-hierarchical images D1 to Dn are obtained. That is, a total of 2n feature amounts are extracted for one evaluation point E. In this case, the feature amounts for one evaluation point E are given to the estimation calculation unit 132 as a 2n-dimensional vector, so a more accurate estimation calculation can be performed.

 In summary, to implement the modification described above, the image pyramid creation unit 122 takes the original image created by the original image creation unit 121 as the first preparation image Q1; takes the image obtained by filtering the k-th preparation image Qk (where k is a natural number) as the k-th main hierarchical image Pk; takes the image obtained by reducing the k-th main hierarchical image Pk as the (k+1)-th preparation image Q(k+1); executes the filtering and the reduction alternately until the n-th main hierarchical image Pn is obtained, thereby creating a main image pyramid composed of n hierarchical images from the first main hierarchical image P1 to the n-th main hierarchical image Pn; and further obtains the difference image Dk between the k-th main hierarchical image Pk and the k-th preparation image Qk and takes this difference image Dk as the k-th sub-hierarchical image Dk, thereby creating a sub-image pyramid composed of n hierarchical images from the first sub-hierarchical image D1 to the n-th sub-hierarchical image Dn.

 As for the feature amount calculation unit 123, it may calculate feature amounts based on the pixel values of the pixels in the vicinity of the evaluation point E for each hierarchical image making up the main image pyramid PP and the sub-image pyramid PD. A 2n-dimensional vector can then be extracted as the feature amount for one evaluation point E, and a more accurate estimation calculation can be performed.
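
 As an illustrative sketch, not part of the patent disclosure, the concatenation into a 2n-dimensional feature vector can be written as follows. One-dimensional hierarchical images, a factor-of-two coordinate scale per level, and sampling the single pixel nearest the evaluation point (one simple reading of "pixels in the vicinity of E") are all assumptions made for brevity:

```python
def sample_feature(layer, x, scale):
    """Sample the pixel of a (1-D, for brevity) hierarchical image nearest
    to the evaluation-point coordinate x, which shrinks by 'scale' per level."""
    i = min(int(x / scale), len(layer) - 1)
    return layer[i]

def feature_vector(main_pyramid, sub_pyramid, x):
    """Concatenate features xp1..xpn from the main pyramid PP and
    xd1..xdn from the sub pyramid PD into one 2n-dimensional vector."""
    feats = []
    for k, layer in enumerate(main_pyramid):   # xp1 .. xpn
        feats.append(sample_feature(layer, x, 2 ** k))
    for k, layer in enumerate(sub_pyramid):    # xd1 .. xdn
        feats.append(sample_feature(layer, x, 2 ** k))
    return feats

# Toy pyramids with n = 2 levels (values chosen arbitrarily)
PP = [[1, 2, 3, 4], [5, 6]]
PD = [[0, 1, 0, -1], [2, -2]]
v = feature_vector(PP, PD, 2)   # a 2n = 4-dimensional feature vector
```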

 (3) Modification that creates a plurality of original images and thereby a plurality of image pyramids

 In §2.4(2) above, a modification was described in which a plurality of image pyramids are created by applying a plurality of algorithms to the same original image. Here, a modification in which a plurality of original images are created so as to create a plurality of image pyramids will be described.

 In the figure pattern shape estimation apparatus 100′ according to the present invention, as shown in FIG. 1, the original image creation unit 121 performs the process of creating an original image based on the given original figure pattern 10. As described in §2.1, images of various forms can be adopted as the original image created here, such as the area density map M1 (FIG. 10), the edge length density map M2 (FIG. 11), and the dose density map M3 (FIG. 13).

 In other words, the original image creation unit 121 can adopt various creation algorithms when creating an original image based on the original figure pattern 10, and can create various original images whose content differs depending on which algorithm is adopted. For example, the area density map M1 shown in FIG. 10, the edge length density map M2 shown in FIG. 11, and the dose density map M3 shown in FIG. 13 are all images created based on the same original figure pattern 10, but the pixel values of their individual pixels differ from one another, making them different images. Alternatively, based on the same original figure pattern 10, a plurality of density maps differing in pixel size (the dimensions of one pixel) and map size (the number of pixels arranged vertically and horizontally), in short, a plurality of maps with different resolutions, may be prepared, and a plurality of image pyramids may be created using each of these density maps as an original image. Theoretically, of course, it would be ideal to use a density map with a small pixel size and a large map size (a high-resolution density map) as the original image; in practice, however, since the memory of a computer is finite, it is preferable to use a plurality of density maps with different pixel sizes and map sizes as original images.
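
 As a hedged illustration of one such creation algorithm, an area density map assigns each pixel the fraction of its area covered by the figure pattern. The axis-aligned rectangular figure and the grid parameters below are assumptions chosen for illustration, not the embodiment's actual map parameters:

```python
def area_density_map(rect, pixel_size, nx, ny):
    """Area density map: each pixel value is the fraction of the pixel's
    area covered by an axis-aligned rectangular figure (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    m = []
    for j in range(ny):
        row = []
        for i in range(nx):
            px0, py0 = i * pixel_size, j * pixel_size
            px1, py1 = px0 + pixel_size, py0 + pixel_size
            ox = max(0.0, min(x1, px1) - max(x0, px0))  # x-overlap width
            oy = max(0.0, min(y1, py1) - max(y0, py0))  # y-overlap height
            row.append(ox * oy / (pixel_size * pixel_size))
        m.append(row)
    return m

# A 1.5 x 1.0 rectangle rasterized onto a 3 x 2 grid of unit pixels
M1 = area_density_map((0.0, 0.0, 1.5, 1.0), 1.0, 3, 2)
```

 An edge length density map or dose density map would follow the same grid layout while accumulating a different quantity per pixel.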

 Accordingly, if the original image creation unit 121 is given a function of performing original image creation processing based on a plurality of mutually different algorithms to create a plurality of original images, and the image pyramid creation unit 122 is given a function of creating a separate, independent image pyramid from each of these original images, a plurality of image pyramids can be created. Then, if the feature amount calculation unit 123 is given a function of calculating, for each hierarchical image making up each of the plurality of image pyramids, a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E, a feature amount consisting of a higher-dimensional vector can be extracted, and a more accurate estimation calculation can be performed.

 For example, if the original image creation unit 121 is given the function of creating a first original image consisting of the area density map M1 shown in FIG. 10, a second original image consisting of the edge length density map M2 shown in FIG. 11, and a third original image consisting of the dose density map M3 shown in FIG. 13, the image pyramid creation unit 122 can create three sets of separate, independent image pyramids based on these three original images. All of these image pyramids are created based on the same original figure pattern 10. The feature amount calculation unit 123 can then perform, for each hierarchical image making up each of the three image pyramids, the process of calculating a feature amount based on the pixel values of the pixels in the vicinity of the evaluation point E. If n feature amounts x1 to xn are extracted from each image pyramid, a total of 3n feature amounts can be extracted for the same evaluation point E. That is, a 3n-dimensional vector can be given as the feature amount for one evaluation point E, so a more accurate estimation calculation can be performed.

 Of course, the modification described here can also be combined with the modification of §2.4(2) above. As described in §2.4(2), if the image pyramid creation unit 122 adopts two different algorithms when creating image pyramids, two image pyramids, the main image pyramid PP and the sub-image pyramid PD, can be created. Thus, if the two image pyramids PP and PD are created from each of the three original images above, a total of six image pyramids can be created based on the same original figure pattern 10, so a 6n-dimensional vector can be given as the feature amount for one evaluation point E.

 <<< §3. Details of the Bias Estimation Unit >>>

 Here, the detailed processing operation of the bias estimation unit 130 will be described. As shown in FIG. 1, the bias estimation unit 130 has a feature amount input unit 131 and an estimation calculation unit 132, and has the function of executing the process bias estimation processing of step S4 in the flowchart of FIG. 4. In the embodiment shown in FIG. 6, for one evaluation point E, the feature amounts x1 to xn (an n-dimensional vector) are input to the feature amount input unit 131, an estimation calculation is executed by the estimation calculation unit 132, and an estimated value y of the process bias for the evaluation point E is obtained. In the embodiment shown in FIG. 6, a neural network is used as the estimation calculation unit 132. The detailed configuration and operation of this neural network are therefore described below.

 <3.1 Estimation Calculation by a Neural Network>

 Neural networks have in recent years attracted attention as a technology forming the foundation of artificial intelligence, and are used in various fields including image processing. A neural network is a construct on a computer that imitates the structure of a biological brain, and is composed of neurons and the edges connecting them.

 図24は、図1に示す推定演算部132として、ニューラルネットワークを利用した実施例を示すブロック図である。図示のとおり、ニューラルネットワークには、入力層、中間層(隠れ層)、出力層が定義され、入力層に与えられた情報に対して、中間層(隠れ層)において所定の情報処理がなされ、出力層にその結果が出力される。本発明の場合、図示のとおり、ある1つの評価点Eについての特徴量x1~xnが、n次元ベクトルとして入力層に与えられ、出力層には、当該評価点Eについてのプロセスバイアスの推定値yが出力される。ここで、プロセスバイアスの推定値yは、§1で述べたように、所定の図形の輪郭線上に位置する評価点Eについて、当該輪郭線の法線方向についてのずれ量を示す推定値である。 FIG. 24 is a block diagram showing an embodiment using a neural network as the estimation calculation unit 132 shown in FIG. As shown in the figure, an input layer, an intermediate layer (hidden layer), and an output layer are defined in the neural network, and predetermined information processing is performed on the information given to the input layer in the intermediate layer (hidden layer). The result is output to the output layer. In the case of the present invention, as shown in the figure, feature quantities x1 to xn for one evaluation point E are given to the input layer as n-dimensional vectors, and an estimated value of the process bias for the evaluation point E is given to the output layer. y is output. Here, the estimated value y of the process bias is an estimated value indicating the amount of deviation in the normal direction of the contour line for the evaluation point E located on the contour line of the predetermined graphic as described in §1. .

 In the embodiment shown in FIG. 24, the estimation calculation unit 132 has a neural network whose input layer consists of the feature amounts x1 to xn input by the feature amount input unit 131 and whose output layer is the estimated process bias y. The intermediate layer of this neural network is composed of N hidden layers: a first hidden layer, a second hidden layer, ..., an N-th hidden layer. These hidden layers have a large number of neurons (nodes), and edges connecting the neurons are defined.

 The feature amounts x1 to xn given to the input layer are transmitted as signals to the neurons via the edges, and finally a signal corresponding to the estimated process bias y is output from the output layer. A signal in the neural network is transmitted from the neurons of one hidden layer to the neurons of the next hidden layer through computations performed via the edges. These computations are performed using learning information L (specifically, the parameters W and b described later) obtained in the learning stage.

 FIG. 25 is a diagram showing the specific calculation process executed by the neural network shown in FIG. 24. The portions drawn with bold lines in the figure represent the first hidden layer, the second hidden layer, ..., the N-th hidden layer; the individual circles in each hidden layer represent neurons (nodes), and the lines connecting the circles represent edges. As described above, the feature amounts x1 to xn for one evaluation point E are given to the input layer as an n-dimensional vector, and the estimated process bias y for that evaluation point E is output at the output layer as a scalar value (a dimensional value indicating the amount of displacement in the direction normal to the contour line).

 In the illustrated example, the first hidden layer is an M(1)-dimensional layer composed of a total of M(1) neurons h(1,1) to h(1,M(1)); the second hidden layer is an M(2)-dimensional layer composed of a total of M(2) neurons h(2,1) to h(2,M(2)); and the N-th hidden layer is an M(N)-dimensional layer composed of a total of M(N) neurons h(N,1) to h(N,M(N)).

 Here, if the computed values of the signals transmitted to the neurons h(1,1) to h(1,M(1)) of the first hidden layer are denoted, using the same symbols, as the computed values h(1,1) to h(1,M(1)), these values are given by the matrix expression shown in the upper part of FIG. 26. As the function f(ξ) on the right-hand side of this expression, an activation function such as the sigmoid function shown in FIG. 27(a), the rectified linear function ReLU shown in FIG. 27(b), or the rectified linear function Leaky ReLU shown in FIG. 27(c) can be used.
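
 The three activation functions of FIG. 27 can be written down directly. As an illustrative sketch only: the negative-side slope of 0.01 for Leaky ReLU is a commonly used value assumed here, since the text does not fix it:

```python
import math

def sigmoid(xi):
    """FIG. 27(a): sigmoid function, squashing xi into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-xi))

def relu(xi):
    """FIG. 27(b): rectified linear function ReLU."""
    return max(0.0, xi)

def leaky_relu(xi, slope=0.01):
    """FIG. 27(c): Leaky ReLU; 'slope' for xi < 0 is an assumed value."""
    return xi if xi >= 0 else slope * xi
```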

 Also, ξ, written as the argument of the function f(ξ), is, as shown in the middle part of FIG. 26, the value obtained by adding the matrix [b] to the product of the matrix [W] and the matrix [x1 ... xn] (the feature amounts given to the input layer as an n-dimensional vector). Here, the contents of the matrices [W] and [b] are as shown in the lower part of FIG. 26, and the individual components of these matrices (the weight parameters W(u,v) and the bias parameters b(u,v)) are the learning information L obtained in the learning stage described later. That is, the values of the individual components (the parameters W(u,v) and b(u,v)) constituting the matrices [W] and [b] are given as the learning information L, and by using the expressions shown in FIG. 26, the computed values h(1,1) to h(1,M(1)) of the first hidden layer can be calculated from the feature amounts x1 to xn given to the input layer.

 Meanwhile, FIG. 28 shows the expressions for obtaining the values of the second through N-th hidden layers in the diagram of FIG. 25. Specifically, if the computed values of the signals transmitted to the neurons h(i+1,1) to h(i+1,M(i+1)) of the (i+1)-th hidden layer (where 1 ≤ i ≤ N−1) are denoted, using the same symbols, as the computed values h(i+1,1) to h(i+1,M(i+1)), these values are given by the matrix expression shown in the upper part of FIG. 28. As the function f(ξ) on the right-hand side of this expression, the functions shown in FIG. 27 and the like can be used, as described above.

 Also, ξ, written as the argument of the function f(ξ), is, as shown in the middle part of FIG. 28, the value obtained by adding the matrix [b] to the product of the matrix [W] and the matrix [h(i,1) ... h(i,M(i))] (the computed values of the neurons h(i,1) to h(i,M(i)) of the preceding i-th hidden layer). Here, the contents of the matrices [W] and [b] are as shown in the lower part of FIG. 28, and the individual components of these matrices (the parameters W(u,v) and b(u,v)) are, again, the learning information L obtained in the learning stage described later.

 Here too, the values of the individual components (the parameters W(u,v) and b(u,v)) constituting the matrices [W] and [b] are given as the learning information L, and by using the expressions shown in FIG. 28, the computed values h(i+1,1) to h(i+1,M(i+1)) of the (i+1)-th hidden layer can be calculated from the computed values [h(i,1) ... h(i,M(i))] obtained in the i-th hidden layer. Accordingly, the values of the second through N-th hidden layers in the diagram of FIG. 25 can be obtained successively based on the expressions shown in FIG. 28.

 FIG. 29 shows the expression for obtaining the value y of the output layer in the diagram of FIG. 25. Specifically, the output value y (the estimated process bias for the evaluation point E, a scalar value) is given by the matrix expression shown in the upper part of FIG. 29. That is, the output value y is the value obtained by adding the scalar value b(N+1) to the product of the matrix [W] and the matrix [h(N,1) ... h(N,M(N))] (the values of the neurons h(N,1) to h(N,M(N)) of the N-th hidden layer). Here, the contents of the matrix [W] are as shown in the lower part of FIG. 29, and the individual components of the matrix [W] (the parameters W(u,v)) and the scalar value b(N+1) are, again, the learning information L obtained in the learning stage described later.

 In this way, the values of the first hidden layer in the diagram of FIG. 25 can be obtained by applying the parameters W(1,v) and b(1,v), prepared in advance as the learning information L, to the feature amounts x1 to xn given as the input layer; the values of the second hidden layer can be obtained by applying the parameters W(2,v) and b(2,v), prepared in advance as the learning information L, to the values of the first hidden layer; ...; the values of the N-th hidden layer can be obtained by applying the parameters W(N,v) and b(N,v), prepared in advance as the learning information L, to the values of the (N−1)-th hidden layer; and the value of the output layer y can be obtained by applying the parameters W(N+1,v) and b(N+1), prepared in advance as the learning information L, to the values of the N-th hidden layer. The specific expressions are as shown in FIGS. 26 to 29.
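
 A minimal sketch of this forward pass, with N = 1 hidden layer, ReLU as f(ξ), and tiny hand-picked matrices standing in for the learned information L; all numeric values and sizes below are assumptions for illustration, not learned parameters:

```python
def relu(xi):
    """Activation function f(xi), here FIG. 27(b)'s ReLU."""
    return max(0.0, xi)

def affine(W, b, x):
    """One edge computation: the argument xi = [W][x] + [b], row by row."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers, Wout, bout):
    """Forward pass: hidden layers h = f([W] h_prev + [b]) as in FIGS. 26
    and 28, then the scalar output y = [Wout] h_N + b(N+1) as in FIG. 29."""
    h = x
    for W, b in layers:
        h = [relu(v) for v in affine(W, b, h)]
    return affine([Wout], [bout], h)[0]

# n = 2 input features, one hidden layer with M(1) = 2 neurons
x = [1.0, 2.0]                 # feature amounts x1, x2 for evaluation point E
W1 = [[0.5, -1.0],
      [1.0, 1.0]]
b1 = [0.0, -1.0]
y = forward(x, [(W1, b1)], [2.0, -1.0], 0.5)   # estimated process bias y
```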

 Note that, as described in §2.4 (2) and (3), when the modification of creating a plurality of image pyramids is adopted, the feature quantities given as the input layer form a V·n-dimensional vector (where V is the total number of image pyramids) instead of an n-dimensional vector (x1 to xn); however, only the number of input-layer values in the diagram shown in FIG. 25 increases to V·n, and the basic configuration and operation of the neural network remain unchanged.

 The calculation for obtaining the estimated process bias y for one particular evaluation point E using the neural network shown in FIG. 25 has been described above. In practice, the same calculation is performed for a large number of evaluation points defined on the contour lines of the figures included in the original figure pattern 10, and an estimated process bias y is obtained for each individual evaluation point. In the figure pattern shape correction apparatus 100 shown in FIG. 1, the pattern correction unit 140 executes the pattern shape correction process (step S5 in FIG. 4) on the basis of the estimated process bias y thus obtained for each evaluation point.

 <3.2 Learning stage of the neural network>
 As described above, in the embodiment shown in FIG. 24, the estimation calculation unit 132 is constituted by a neural network and calculates the signal value transmitted to each neuron using learning information L that has been set in advance. Here, the substance of the learning information L is the values of the parameters W(u,v), b(u,v), etc. described as the components of the matrices [W] and [b] in the lower parts of FIGS. 26, 28, and 29. Therefore, in order to construct such a neural network, the learning information L must be obtained through a learning stage executed in advance.

 That is, the neural network included in the estimation calculation unit 132 performs the process bias estimation using, as learning information L, the parameters W(u,v), b(u,v), etc. obtained in a learning stage that uses the dimension values obtained by actually measuring the real figure patterns 20 formed on real substrates S by a lithography process using a large number of test pattern figures, together with the feature quantities obtained from each test pattern figure. The processing of such a learning stage for a neural network is itself a known technique, but an outline of processing suitable for the learning stage of the neural network used in the present invention will be briefly described here.

 FIG. 30 is a flowchart showing the procedure of the learning stage for obtaining the learning information L used by the neural network shown in FIG. 24. First, in step S81, a process of creating test pattern figures is executed. Here, a test pattern figure corresponds to, for example, the original figure pattern 10 shown in FIG. 2(a), and simple figures such as rectangles and L-shaped figures are usually used. In practice, thousands of test pattern figures of different sizes and shapes are created.

 Subsequently, in step S82, evaluation points E are set on each test pattern figure. Specifically, a process of defining a large number of evaluation points E at predetermined intervals on the contour line of each test pattern figure may be performed. Then, in step S83, feature quantities are extracted for each evaluation point E. The feature quantity extraction process in step S83 is the same as the process described in §2 and is executed by a unit having functions equivalent to those of the feature quantity extraction unit 120. According to the procedure described in §2, the feature quantities x1 to xn are extracted for each individual evaluation point E.

 The estimation calculation unit learning process in step S84 is a process of determining the learning information L (that is, the parameters W(u,v), b(u,v), etc.) using the feature quantities extracted in step S83. In order to execute this learning process, actual dimensions obtained by an actual lithography process are required. Therefore, in step S85, the lithography process is actually executed on the basis of the test pattern figures created in step S81, and real substrates S are produced. Then, in step S86, the actual dimensions of the individual figures are measured on the real figure patterns formed on these real substrates S. The measurement results are used in the learning process of step S84.

 Thus, the learning-stage process shown in FIG. 30 consists of a process executed on a computer (a process executed by a computer program), comprising steps S81 to S84, and a process executed on real substrates, comprising steps S85 and S86. The estimation calculation unit learning process in step S84 determines the learning information L used for the neural network on the basis of the feature quantities x1 to xn obtained by the processing on the computer and the actual dimensions measured on the real substrates.

 FIG. 31 is a flowchart showing the detailed procedure of the estimation calculation unit learning in step S84 of the flowchart shown in FIG. 30. First, in step S841, the design position and the feature quantities of each evaluation point are input. Here, the design position of an evaluation point is the position on the test pattern figure of the evaluation point set in step S82, and the feature quantities of the evaluation point are the feature quantities x1 to xn extracted in step S83. Subsequently, in step S842, the actual position of each evaluation point is input. Here, the actual position of an evaluation point is determined on the basis of the actual dimensions of each figure on the real substrate S measured in step S86. Then, in step S843, the actual bias of each evaluation point is calculated. This actual bias corresponds to the amount of deviation between the design position of the evaluation point input in step S841 and the actual position of the evaluation point input in step S842.

 For example, when a rectangle such as the original figure pattern 10 shown in FIG. 3(a) is created as a test pattern figure in step S81, evaluation points E11, E12, E13, etc. are set on the contour line of this rectangle in step S82, and the feature quantities x1 to xn are extracted for each of the evaluation points E11, E12, E13 in step S83. The position and feature quantities of each of the evaluation points E11, E12, E13 are input in step S841.

 Meanwhile, by the lithography process in step S85, a real figure pattern 20 such as that shown in FIG. 3(b), for example, is formed on the real substrate S, and by the actual dimension measurement in step S86, the actual dimensions of each side of the rectangle constituting this real figure pattern 20 are measured. By this measurement, the actual positions E21, E22, E23 of the evaluation points are determined as the points obtained by moving the evaluation points E11, E12, E13 in the normal direction of the contour line. The actual positions E21, E22, E23 of the evaluation points are input in step S842.

 Then, in step S843, as shown in FIG. 3(b), the actual biases y11, y12, y13 are calculated as the differences between the design positions of the evaluation points E11, E12, E13 and the actual positions E21, E22, E23. In practice, the actual bias may also be determined, for example, by obtaining the value y given by dividing by 2 the difference "b - a" between the width b of the rectangle constituting the real figure pattern 20 shown in FIG. 3(b) and the width a of the rectangle constituting the original figure pattern 10 shown in FIG. 3(a). Of course, in practice, test pattern figures on the scale of several thousand are created and a large number of evaluation points are set on the contour line of each figure, so the procedure of steps S841 to S843 is executed for an enormous number of evaluation points. Focusing on one evaluation point E, a combination of the feature quantities x1 to xn and the actual bias y has thus been prepared for that evaluation point E as learning material.
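 The width-based determination of the actual bias described above amounts to a one-line calculation. A trivial sketch follows; the numerical widths are hypothetical and are not taken from the specification.

```python
# Actual bias of a rectangle edge: y = (b - a) / 2, where
#   a = designed width of the rectangle in the original figure pattern 10
#   b = measured width of the rectangle in the real figure pattern 20
def actual_bias(designed_width, measured_width):
    return (measured_width - designed_width) / 2.0

# Hypothetical example: a 100 nm design that prints at 104 nm,
# i.e. each edge moved outward by 2 nm.
print(actual_bias(100.0, 104.0))  # 2.0
```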

 Subsequently, in step S844, the parameters W and b are set to initial values. Here, the parameters W and b correspond to the parameters W(u,v), b(u,v), etc. described as the components of the matrices [W] and [b] in the lower parts of FIGS. 26, 28, and 29, and constitute the learning information L. As the initial values, random values may be given by random numbers. In other words, at the initial stage before learning, the parameters W and b constituting the learning information L hold arbitrary, meaningless values.

 The subsequently executed steps S845 to S848 constitute the actual learning process. By performing this learning process, the parameters W and b, which initially held meaningless values, are gradually updated and are eventually corrected to values accurate enough to function adequately as the learning information L. First, in step S845, a calculation for estimating the process bias y from the feature quantities x1 to xn is executed.

 Specifically, a neural network such as that shown in FIG. 24 is prepared. Of course, at this stage, the parameters W and b constituting the learning information L are given random numbers as initial values, and this neural network is an incomplete one that cannot yet perform its normal function as the estimation calculation unit 132. The feature quantities x1 to xn input in step S841 are given to the input layer of this incomplete neural network, a calculation using the learning information L consisting of incomplete values is performed, and an estimated process bias y is calculated as the output layer. Naturally, the estimated process bias y obtained at first is far from the observed actual bias.

 Therefore, in step S846, the residual with respect to the actual bias at that point in time is calculated. That is, the difference between the estimated process bias y obtained in step S845 and the calculated actual bias obtained in step S843 is determined, and this difference is taken as the residual for the evaluation point E. If this residual falls at or below a predetermined tolerance, the learning information L at that point (that is, the parameters W(u,v), b(u,v), etc.) can be judged to be sufficiently practical learning information, and the learning stage can be ended.

 Step S847 is a procedure for determining whether the learning stage can be ended. In practice, residuals are obtained for an enormous number of evaluation points, so a practical determination method may be adopted in which, for example, learning is judged to have ended when the amount of improvement in the residual sum of squares falls below a specified value. If it is not determined that learning has ended, the parameters W(u,v), b(u,v), etc. are updated in step S848. Specifically, the values of the parameters W(u,v), b(u,v), etc. are increased or decreased by predetermined amounts so as to produce the effect of reducing the residuals. As for the specific update method, methods based on various algorithms are known as learning techniques for neural networks, so a description is omitted here; the basic policy, however, is to adopt an algorithm that comprehensively considers the residuals for the evaluation points of the many test pattern figures and performs an overall adjustment so that the residuals are reduced as a whole.

 In this way, the procedure of steps S845 to S848 is repeatedly executed until an affirmative determination is made in step S847. As a result, the values of the parameters W(u,v), b(u,v), etc. constituting the learning information L are gradually corrected in the direction of reducing the residuals; an affirmative determination is eventually made in step S847, and the learning stage ends. The learning information L (parameters W(u,v), b(u,v), etc.) obtained at the end of this learning stage is information suitable for obtaining, at the output layer, an estimated process bias y close to the actual bias. Therefore, a neural network including the learning information L obtained at the end of the learning stage functions as the estimation calculation unit 132 in the present invention.
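 Steps S844 to S848 amount to fitting the parameters W, b so that the estimated biases approach the measured actual biases. A minimal sketch of such a loop is given below. A plain gradient-descent update on a single-layer model is used purely for illustration; the specification does not prescribe any particular update algorithm, and the training data here are synthetic placeholders. The stopping rule shown (improvement of the residual sum of squares below a specified value) is the one suggested for step S847.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training material: feature vectors x1..xn and measured
# actual biases y for many evaluation points (synthetic stand-ins here).
X = rng.normal(size=(200, 4))          # 200 evaluation points, n = 4 features
true_w = np.array([0.5, -0.2, 0.1, 0.3])
y_actual = X @ true_w + 0.05           # "measured" actual biases

# Step S844: initialize the parameters W, b with random values.
W = rng.normal(size=4)
b = 0.0

lr, prev_rss, tol = 0.01, np.inf, 1e-9
while True:
    # Step S845: estimate the process bias from the feature quantities.
    y_est = X @ W + b
    # Step S846: residuals with respect to the actual biases.
    r = y_est - y_actual
    rss = float(r @ r)
    # Step S847: end learning when the improvement in the residual
    # sum of squares falls below the specified value.
    if prev_rss - rss < tol:
        break
    prev_rss = rss
    # Step S848: update W, b so as to reduce the residuals.
    W -= lr * (X.T @ r) / len(X)
    b -= lr * float(r.mean())

print(round(rss, 6))  # residual sum of squares after convergence
```

In the apparatus itself the model is the multi-layer network of FIG. 24 rather than this single layer, but the role of each step is the same.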

 <<< §4. Figure pattern shape estimation method >>>
 So far, the present invention has been described as the figure pattern shape estimation apparatus 100′ or the figure pattern shape correction apparatus 100 having the configuration shown in FIG. 1, together with its configuration and operation. Here, the present invention will be briefly described from the viewpoint of a method invention, namely a figure pattern shape estimation method.

 When the present invention is grasped as the invention of a figure pattern shape estimation method, the method is one of estimating the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern. This method consists of: an original figure pattern input stage in which a computer inputs the original figure pattern 10 including information on contour lines indicating the boundaries between the inside and the outside of figures (the stage of inputting the pattern created in step S1 of FIG. 4); an evaluation point setting stage in which the computer sets evaluation points E at predetermined positions on the contour lines of the input figures (step S2 of FIG. 4); a feature quantity extraction stage in which the computer extracts, for the input original figure pattern 10, feature quantities x1 to xn indicating the features around each evaluation point E (step S3 of FIG. 4); and a process bias estimation stage in which the computer estimates, on the basis of the extracted feature quantities x1 to xn, a process bias y indicating the amount of deviation between the position of each evaluation point E on the original figure pattern 10 and its position on the real figure pattern 20 (step S4 of FIG. 4).

 Here, the feature quantity extraction stage (step S3 in FIG. 4) includes: an original image creation stage of creating, on the basis of the original figure pattern 10, an original image Q1 consisting of an aggregate of pixels U each having a predetermined pixel value (step S32 in FIG. 7); an image pyramid creation stage of performing image pyramid creation processing, including the reduction processing (step S35 in FIG. 7) that creates reduced images Qk (preparation images) on the basis of this original image Q1, thereby creating an image pyramid PP consisting of a plurality of hierarchical images P1 to Pn of different sizes (steps S33 to S35 in FIG. 7); and a feature quantity calculation stage of calculating the feature quantities x1 to xn on the basis of the pixel values of pixels in the vicinity of each evaluation point E in each of the hierarchical images P1 to Pn constituting the created image pyramid PP (step S37 in FIG. 7).

 Here, the process bias estimation stage (step S4 in FIG. 4) includes an estimation calculation stage of obtaining, on the basis of the learning information L obtained in a learning stage performed in advance, an estimated value corresponding to the feature quantities x1 to xn for an evaluation point E, and outputting the obtained estimated value as the estimated process bias y for that evaluation point E.

 Note that, when the feature quantity extraction unit 120 described in §2 is used, the image pyramid PP consisting of the plurality of hierarchical images P1 to Pn can be created in the image pyramid creation stage by alternately executing a filter processing stage (step S33 in FIG. 7) of performing filter processing using a predetermined image processing filter on the original image Q1 or a reduced image Qk, and a reduction processing stage (step S35 in FIG. 7) of performing reduction processing on the filtered image Pk.

 Specifically, in the procedure described in §2.2, an image pyramid PP whose hierarchical images P1 to Pn are the filtered images (the filtered images Pk obtained in step S33 of FIG. 7) is created in the image pyramid creation stage. In contrast, in the procedure of the modification described in §2.4 (1), difference images Dk between the filtered images (filtered images Pk) and the pre-filter images (preparation images Qk) are created in the image pyramid creation stage, and an image pyramid PD whose hierarchical images D1 to Dn are the created difference images is created.
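 The alternation of filtering and reduction that produces the two kinds of pyramids can be sketched as follows. This is schematic only: a box blur stands in for the image processing filter of §2 and 2×2 averaging stands in for the reduction, both hypothetical choices, and the original image content is a placeholder.

```python
import numpy as np

def blur(img):
    """Filter processing (step S33): a 3x3 box blur as a hypothetical
    stand-in for the image processing filter described in §2."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def shrink(img):
    """Reduction processing (step S35): halve each dimension by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramids(Q1, n):
    """Alternate filtering and reduction, collecting both the filtered
    images Pk (pyramid PP of §2.2) and the difference images Dk = Pk - Qk
    (pyramid PD of the §2.4 (1) modification)."""
    PP, PD, Q = [], [], Q1
    for _ in range(n):
        P = blur(Q)          # filtered image Pk
        PP.append(P)
        PD.append(P - Q)     # difference image Dk
        Q = shrink(P)        # reduced image Q(k+1), the next preparation image
    return PP, PD

Q1 = np.zeros((64, 64)); Q1[16:48, 16:48] = 1.0   # hypothetical original image
PP, PD = build_pyramids(Q1, 4)
print([p.shape for p in PP])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Each hierarchical image then contributes one feature quantity per evaluation point, read from the pixels near that point's (rescaled) position.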

 <<< §5. Additional embodiments of the present invention >>>
 So far, in §1 to §4, the figure pattern shape estimation apparatus and shape correction apparatus according to the basic embodiment of the present invention have been described. Here, yet another embodiment of the present invention will be described. For convenience, the embodiment described in this §5 will be referred to as an "additional embodiment of the present invention."

 <5.1 Basic configuration and basic operation of the additional embodiment>
 FIG. 32 is a block diagram showing the configuration of a figure pattern shape correction apparatus 200 according to an additional embodiment of the present invention. As illustrated, this figure pattern shape correction apparatus 200 has an evaluation point setting unit 110, a feature quantity extraction unit 220, a bias estimation unit 130, and a pattern correction unit 140. Here, the three units, namely the evaluation point setting unit 110, the feature quantity extraction unit 220, and the bias estimation unit 130, constitute a figure pattern shape estimation apparatus 200′ according to the additional embodiment of the present invention, and the figure pattern shape correction apparatus 200 is configured by further adding the pattern correction unit 140 as a fourth unit to this figure pattern shape estimation apparatus 200′.

 The only difference between the figure pattern shape correction apparatus 100 according to the basic embodiment shown in FIG. 1 and the figure pattern shape correction apparatus 200 according to the additional embodiment shown in FIG. 32 is that the feature quantity extraction unit 120 of the former is replaced by the feature quantity extraction unit 220 in the latter. In other words, the evaluation point setting unit 110, the bias estimation unit 130, and the pattern correction unit 140 shown in FIG. 32 are exactly the same as the units with the same reference numerals shown in FIG. 1.

 Therefore, like the figure pattern shape estimation apparatus 100′ shown in FIG. 1, the figure pattern shape estimation apparatus 200′ shown in FIG. 32 has a function of estimating the shape of the real figure pattern 20 formed on a real substrate S by simulating a lithography process using the original figure pattern 10. To this end, this figure pattern shape estimation apparatus 200′ comprises: an evaluation point setting unit 110 that sets evaluation points E on the original figure pattern 10; a feature quantity extraction unit 220 that extracts, for the original figure pattern 10, feature quantities x indicating the features around each individual evaluation point E; and a bias estimation unit 130 that estimates, on the basis of these feature quantities x, a process bias y indicating the amount of deviation between the position of each individual evaluation point E on the original figure pattern 10 and its position on the real figure pattern 20.

 Here, the evaluation point setting unit 110 sets a plurality of evaluation points E at predetermined positions on the contour lines on the basis of the original figure pattern 10 including information on the contour lines indicating the boundaries between the inside and the outside of the figures. The specific evaluation point setting process is as already described in §1.

 The feature quantity extraction unit 220 extracts, for each of the evaluation points E thus set, feature quantities x indicating the features around the evaluation point E on the original figure pattern 10. In the example described here, a plurality of n feature quantities x1 to xn are extracted for each evaluation point E. As will be described in detail later, in the additional embodiment described here, the feature quantities x1 to xn are in fact calculated by operations based on predetermined calculation functions defined in advance.

 Meanwhile, the bias estimation unit 130 has a feature quantity input unit 131 that inputs the feature quantities x1 to xn thus calculated for each individual evaluation point E, and an estimation calculation unit 132 that obtains, on the basis of the learning information L obtained in a learning stage performed in advance, an estimated value corresponding to the feature quantities x1 to xn and outputs the obtained estimated value as the estimated process bias y for the evaluation point E. The specific estimation calculation method is as already described in §3.

 The figure pattern shape correction apparatus 200 according to the additional embodiment is an apparatus having a function of correcting the shape of the original figure pattern 10 using the figure pattern shape estimation apparatus 200′ described above, and is formed by adding the pattern correction unit 140 to the evaluation point setting unit 110, the feature quantity extraction unit 220, and the bias estimation unit 130 that constitute the figure pattern shape estimation apparatus 200′. This pattern correction unit 140 performs a process of correcting the original figure pattern 10 on the basis of the estimated process bias y output from the bias estimation unit 130. It also has a function of repeatedly executing corrections to the figure pattern by giving the corrected figure pattern 15 obtained by the correction of the pattern correction unit 140 to the figure pattern shape estimation apparatus 200′ as a new original figure pattern. The specific method of such correction processing is as already described in §1.

 Of course, the evaluation point setting unit 110, the feature quantity extraction unit 220, the bias estimation unit 130, and the pattern correction unit 140 shown in FIG. 32 are all configured by incorporating predetermined programs into a computer. Therefore, the figure pattern shape estimation apparatus 200′ and the figure pattern shape correction apparatus 200 according to this additional embodiment are in practice realized by incorporating dedicated programs into a general-purpose computer.

 As described above, the shape correction apparatus 100 (or shape estimation apparatus 100′) shown in FIG. 1 and the figure pattern shape correction apparatus 200 (or shape estimation apparatus 200′) shown in FIG. 32 share most of their configuration. The two differ, however, in the mechanism by which the feature quantities are extracted. To explain this difference, the basic configuration and basic operation of the feature quantity extraction unit 220 shown in FIG. 32 will therefore be described below.

 As shown in FIG. 6, the feature quantity extraction unit 120 shown in FIG. 1 adopted a method of performing image pyramid creation processing, including reduction processing, on the original image Q1 to create an image pyramid PP consisting of a plurality of n hierarchical images P1 to Pn of different sizes, and of calculating the feature quantities for each evaluation point E on the basis of the pixel values of pixels corresponding to the position of that evaluation point E in each of the hierarchical images P1 to Pn.

 In contrast, instead of using the n hierarchical images P1 to Pn, the feature quantity extraction unit 220 shown in FIG. 32 performs a process of calculating n feature quantities x1 to xn for an evaluation point E(X,Y) located at coordinates (X,Y) using n calculation functions Z1(X,Y) to Zn(X,Y).

 Here, the n calculation functions Z1(X,Y) to Zn(X,Y) are suited to calculating a plurality of n feature quantities x1 to xn with different ranges of consideration, from the feature quantity x1, which considers a narrow range in the vicinity of the evaluation point E(X,Y), to the feature quantity xn, which considers a wide range extending far from the evaluation point E(X,Y). Individual feature quantities expressing various features, from the vicinity of each evaluation point to distant regions, can therefore be obtained, making it possible to perform an accurate simulation that takes into account the influence of phenomena of different scales, such as the proximity effect and the etching loading phenomenon.

 This idea of performing an accurate simulation that takes into account the influence of phenomena of different scales by extracting a plurality of n feature quantities x1 to xn expressing various features from the vicinity of an evaluation point to distant regions is the technical concept adopted in the basic embodiments described in §1 to §4, and it is also adopted in the additional embodiment described here in §5. In other words, the basic embodiments and the additional embodiment have in common that an accurate simulation is performed using this technical concept, and in either case the obtained feature quantities x1 to xn contain multiplexed information about various phenomena with different ranges of influence.

 Thus, the basic embodiments described in §1 to §4 and the additional embodiment described in §5 are inventions that, on the basis of a common technical concept, perform an accurate simulation taking into account the influence of phenomena of different scales; they differ slightly, however, in the specific technique used to extract the feature quantities x1 to xn. That is, in the basic embodiments the feature quantities x1 to xn are extracted using an image pyramid PP consisting of a plurality of n hierarchical images P1 to Pn, whereas in the additional embodiment described here they are extracted using a plurality of n calculation functions Z1(X,Y) to Zn(X,Y). The feature quantity extraction method of the additional embodiment is described in detail below with reference to specific examples.

 As shown in FIG. 32, the feature quantity extraction unit 220 has a rectangular aggregate replacement unit 221, a feature quantity calculation unit 222, and a calculation function providing unit 223. The rectangular aggregate replacement unit 221 performs processing for replacing the figures included in the original figure pattern 10 with aggregates of rectangles. The calculation function providing unit 223 provides, for one evaluation point, calculation functions for calculating feature quantities on the basis of the positional relationship of that point to the rectangles positioned around it. The feature quantity calculation unit 222 then uses the calculation functions provided by the calculation function providing unit 223 to calculate the feature quantities for each evaluation point set by the evaluation point setting unit 110.

 The rectangular aggregate replacement unit 221 replaces the figures included in the original figure pattern 10 with aggregates of rectangles so that the feature quantity calculation unit 222 can calculate the feature quantities for a given evaluation point on the basis of its positional relationship to the four sides of the rectangles positioned around it. FIG. 33 is a plan view showing an example of processing in which the rectangular aggregate replacement unit 221 replaces the original figure pattern 10 with a rectangular aggregate 50. In this example, the original figure pattern 10, which includes a figure of arbitrary shape as shown in FIG. 33(a), is replaced with a rectangular aggregate 50 consisting of four rectangles (each shown hatched) as shown in FIG. 33(b). As illustrated, the outline of the entire rectangular aggregate 50 obtained by this replacement processing coincides with the outline of the figure included in the original figure pattern 10.

 Such replacement can be performed by dividing the figure included in the original figure pattern 10 into a plurality of rectangles. In the example shown in FIG. 33(a), if an XY two-dimensional orthogonal coordinate system is defined and the original figure pattern 10 is placed on it, the figure included in the original figure pattern 10 is seen to be a polygon having sides parallel to the X axis and sides parallel to the Y axis. Patterns used in semiconductor integrated circuits are often composed of such regular polygons, having sides parallel to the X axis and sides parallel to the Y axis. Dividing such a regular polygon by straight lines parallel to the X axis or the Y axis yields a plurality of regular rectangles, each having two sides parallel to the X axis and two sides parallel to the Y axis, as shown in FIG. 33(b).
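 As an illustrative sketch only (this is not the algorithm of the disclosure, which notes that various known algorithms may be used), one simple way to divide a rasterized regular polygon into regular rectangles is to merge identical runs of filled cells in successive rows:

```python
# Hypothetical decomposition of a rasterized regular polygon into regular
# rectangles: track each maximal horizontal run of filled cells, and emit
# a rectangle whenever a run stops repeating on the next row.

def row_runs(row):
    """Maximal runs of filled cells in one row, as (start, end) pairs."""
    runs, start = [], None
    for c, v in enumerate(row + [0]):   # sentinel 0 closes a trailing run
        if v and start is None:
            start = c
        elif not v and start is not None:
            runs.append((start, c))
            start = None
    return runs

def to_rectangles(grid):
    """Merge vertically repeated runs into (left, top, right, bottom)."""
    rects, open_runs = [], {}
    for r, row in enumerate(grid + [[0] * len(grid[0])]):  # sentinel row
        current = set(row_runs(row))
        for run, top in list(open_runs.items()):
            if run not in current:      # run ended: emit its rectangle
                rects.append((run[0], top, run[1], r))
                del open_runs[run]
        for run in current:
            open_runs.setdefault(run, r)  # remember where a run started
    return rects

# An L-shaped figure splits into two regular rectangles:
grid = [[1, 1, 0],
        [1, 1, 0],
        [1, 1, 1]]
print(sorted(to_rectangles(grid)))  # [(0, 0, 2, 2), (0, 2, 3, 3)]
```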

 Since specific methods using various algorithms are publicly known for dividing an arbitrary regular polygon into an aggregate of regular rectangles in this way, a detailed description is omitted here. Of course, the figures included in the original figure pattern 10 are not necessarily regular figures consisting of sides parallel to the X axis or the Y axis; even a figure of arbitrary shape, however, can be divided into a plurality of regular rectangles by approximating its outline with the outline of a regular figure.

 FIG. 34 is a plan view showing an example of processing in which the rectangular aggregate replacement unit 221 replaces figures of arbitrary shape included in the original figure pattern 10 with a rectangular aggregate 50 consisting of regular rectangles. FIG. 34(a) shows an example of an original figure pattern 10 including two figures: specifically, a pentagon of arbitrary shape drawn in the upper part of the figure and a circle drawn in the lower part. Each side of the pentagon shown in the upper part is oriented in an arbitrary direction and is not necessarily parallel to the X axis or the Y axis; as for the circle shown in the lower part, its boundary is not a side at all but a circumference.

 Even when the original figure pattern 10 includes figures of arbitrary shape in this way, they can be divided into a plurality of regular rectangles (shown by solid lines in the figure) by approximating their outlines (shown by broken lines in the figure) with the outlines of regular figures, as shown in FIG. 34(b). Thus, the outline of the rectangular aggregate 50 obtained by the replacement processing of the rectangular aggregate replacement unit 221 need not coincide exactly with the outline of the figures included in the original figure pattern 10; it is sufficient for the two outlines to coincide approximately.

 Note that the rectangular aggregate 50 obtained by the replacement processing of the rectangular aggregate replacement unit 221 need not necessarily be an aggregate of regular rectangles (rectangles having two sides parallel to the X axis and two sides parallel to the Y axis when an XY two-dimensional orthogonal coordinate system is defined), but an aggregate of regular rectangles is preferable for reducing the computational burden on the feature quantity calculation unit 222. Accordingly, an example in which the rectangular aggregate replacement unit 221 produces a rectangular aggregate 50 consisting of regular rectangles is described below.

 In the basic embodiments described in §1 to §4, an image pyramid PP consisting of a plurality of n hierarchical images P1 to Pn of different sizes must be created, so an original image (a raster image consisting of a collection of pixels) must be generated from the original figure pattern 10. In the additional embodiment described in §5, however, there is no need to create a raster image consisting of a collection of pixels, so the original figure pattern 10 can be handled as a vector image, and the rectangular aggregate 50 obtained by the replacement processing of the rectangular aggregate replacement unit 221 may likewise be in vector image form.

 In other words, each rectangle constituting the rectangular aggregate 50 can be expressed by vector data representing its four sides. In particular, in the example described below, the rectangular aggregate 50 is a set of regular rectangles arranged in an XY two-dimensional orthogonal coordinate system, so each rectangle can be defined by the XY coordinate values of two diagonal points (for example, the upper-left corner point and the lower-right corner point).

 The feature quantity calculation unit 222 calculates the feature quantities for a given evaluation point on the basis of its positional relationship to the rectangles positioned around it. FIG. 35(a) illustrates the principle of feature quantity calculation by the feature quantity calculation unit 222. Here, assume that the rectangular aggregate replacement unit 221 has defined, on an XY two-dimensional orthogonal coordinate system, a rectangular aggregate 50 having five regular rectangles F1 to F5 as illustrated, and consider the principle of calculating the feature quantity x for a single evaluation point using this rectangular aggregate 50. More specifically, consider, for example, calculating the feature quantity x for the evaluation point E(X,Y) set on the right side of rectangle F3.

 In the present application, the lowercase letter x is used as the symbol for a feature quantity and the lowercase letter y as the symbol for a process bias, so the uppercase letters X and Y are used for the coordinate values of the XY two-dimensional orthogonal coordinate system. In the example shown in FIG. 35(a), the feature quantity x for the evaluation point E(X,Y) set at coordinates (X,Y) on the XY two-dimensional orthogonal coordinate system is calculated on the basis of the positional relationships between the evaluation point E(X,Y) and the individual rectangles F1 to F5.

 As described above, a plurality of n feature quantities x1 to xn are actually calculated as the feature quantities x for the evaluation point E(X,Y). FIG. 35(b) shows an example of these n feature quantities x1 to xn and the calculation functions used to calculate them. Specifically, the first calculation function Z1(X,Y) is used to calculate the first feature quantity x1, the second calculation function Z2(X,Y) is used to calculate the second feature quantity x2, ..., and the n-th calculation function Zn(X,Y) is used to calculate the n-th feature quantity xn. Each calculation function gives a predetermined function value with the coordinate values X and Y of the evaluation point E(X,Y) as its variables.

 For example, the first calculation function Z1(X,Y) is, as illustrated, the function expressed by
   Z1(X,Y) = Σi=1~5 [K·fhi(σ1)·fvi(σ1)].
Here, i is a parameter indicating the rectangle number; in the example shown in FIG. 35(a), the positional relationships to a total of five rectangles F1 to F5 must each be calculated, so i is set in the range i = 1 to 5.

 In the above expression, fhi(σ1) on the right-hand side is the horizontal direction function for the i-th rectangle Fi, and serves to express numerically the positional relationship between the evaluation point E(X,Y) and rectangle Fi in the horizontal direction (the X-axis direction in the example shown in FIG. 35(a)). For example, the horizontal direction function fh1(σ1) for i = 1 is the function giving a numerical value indicating the positional relationship in the X-axis direction between the evaluation point E(X,Y) and rectangle F1 in the example shown in FIG. 35(a).

 In practice, as described later, the function value of the horizontal direction function fh1(σ1) is determined by the deviations between the X coordinate values of the left and right sides of rectangle F1 and the X coordinate value of the evaluation point E(X,Y). Similarly, the horizontal direction function fh2(σ1) for i = 2 is the function giving a numerical value indicating the positional relationship in the X-axis direction between the evaluation point E(X,Y) and rectangle F2. As described later, σ1 is a spread coefficient, a parameter that determines the degree of spread of the function in the X-axis direction.

 On the other hand, fvi(σ1) on the right-hand side is the vertical direction function for the i-th rectangle Fi, and serves to express numerically the positional relationship between the evaluation point E(X,Y) and rectangle Fi in the vertical direction (the Y-axis direction in the example shown in FIG. 35(a)). For example, the vertical direction function fv1(σ1) for i = 1 is the function giving a numerical value indicating the positional relationship in the Y-axis direction between the evaluation point E(X,Y) and rectangle F1.

 In practice, as described later, the function value of the vertical direction function fv1(σ1) is determined by the deviations between the Y coordinate values of the upper and lower sides of rectangle F1 and the Y coordinate value of the evaluation point E(X,Y). Similarly, the vertical direction function fv2(σ1) for i = 2 is the function giving a numerical value indicating the positional relationship in the Y-axis direction between the evaluation point E(X,Y) and rectangle F2. Here too, σ1 is a spread coefficient, a parameter that determines the degree of spread of the function in the Y-axis direction.

 The coefficient K on the right-hand side is a predetermined constant for adjusting the scaling of the finally obtained feature quantity x, and is referred to here as the feature quantity calculation coefficient. In the example described later, the feature quantity calculation coefficient is set to K = 1/4, but K may be set to any constant. In short, the first calculation function Z1(X,Y) is a function that, for each of the five rectangles F1 to F5, takes the product of the horizontal direction function fhi(σ1), which numerically expresses the horizontal positional relationship for the i-th rectangle Fi, the vertical direction function fvi(σ1), which numerically expresses the vertical positional relationship, and the feature quantity calculation coefficient K, and then takes the sum of these products.

 In other words, the first calculation function Z1(X,Y) is a function that, for a given evaluation point E(X,Y), sums numerical values indicating the horizontal and vertical positional relationships between that point and the individual rectangles F1 to F5 constituting the rectangular aggregate 50. As described later, the horizontal direction function fh1(σ1) and the vertical direction function fv1(σ1) contain the coordinate values X and Y of the evaluation point E(X,Y) as variables, so their function values are calculated by supplying those coordinate values as variables. The function value calculated in this way using the first calculation function Z1(X,Y) is used as the first feature quantity x1.

 In contrast, the second calculation function Z2(X,Y) is, as illustrated, the function expressed by
   Z2(X,Y) = Σi=1~5 [K·fhi(σ2)·fvi(σ2)].
The only difference between the first calculation function Z1(X,Y) described above and this second calculation function Z2(X,Y) is that the former uses fhi(σ1) and fvi(σ1) as its horizontal and vertical direction functions, whereas the latter uses fhi(σ2) and fvi(σ2). Here, fhi(σ1) and fvi(σ1) are functions using the spread coefficient σ1, whereas fhi(σ2) and fvi(σ2) are functions using the spread coefficient σ2.

 The same applies to the third calculation function Z3(X,Y) and beyond: by changing the spread coefficient σ, a total of n calculation functions Z1(X,Y) to Zn(X,Y) can be defined. As will be described in detail later, these can be characterized as calculation functions that calculate the feature quantity x for a given evaluation point E on the basis of its positional relationship to the four sides of the rectangles positioned around it.

 The substance of the spread coefficient σ and its role are described later; the calculation function providing unit 223 serves to provide the n calculation functions Z1(X,Y) to Zn(X,Y) using the n spread coefficients σ1 to σn. The feature quantity calculation unit 222 then calculates the n feature quantities x1 to xn for one evaluation point E by arithmetic processing using these n calculation functions Z1(X,Y) to Zn(X,Y). That is, the function values obtained by supplying the coordinate values X and Y of the evaluation point E(X,Y) as variables to the calculation functions Z1(X,Y) to Zn(X,Y) are calculated as the feature quantities x1 to xn, respectively.

 As described above, the spread coefficient σ is a parameter that determines the degree of spread of the function in the X-axis or Y-axis direction. Accordingly, if the n calculation functions Z1(X,Y) to Zn(X,Y) are provided using n spread coefficients σ1 to σn, ranging from the spread coefficient σ1, which gives a narrow spread, to the spread coefficient σn, which gives a wide spread, then a plurality of n feature quantities x1 to xn with different ranges of consideration can be calculated, from the feature quantity x1, which considers a narrow range in the vicinity of the evaluation point E(X,Y), to the feature quantity xn, which considers a wide range extending far from the evaluation point E(X,Y).
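 As a hypothetical sketch (the geometric progression of spread coefficients is our assumption; the disclosure only requires that σ1 give a narrow spread and σn a wide one, and the function Z passed in is a stand-in for the calculation functions), the n spread coefficients and the resulting multi-scale feature vector could be expressed as:

```python
# Hypothetical multi-scale feature extraction: one spread coefficient per
# calculation function, narrow (local detail) through wide (far-field).

def spread_coefficients(sigma1, ratio, n):
    """sigma_1..sigma_n as a geometric progression (assumed rule)."""
    return [sigma1 * ratio ** k for k in range(n)]

def feature_vector(Z, rects, X, Y, sigmas):
    """x_k = Z_k(X, Y); each Z_k differs only in its spread coefficient."""
    return [Z(rects, X, Y, s) for s in sigmas]

sigmas = spread_coefficients(1.0, 2.0, 4)
print(sigmas)  # [1.0, 2.0, 4.0, 8.0]
```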

 As a result, just as in the basic embodiments described in §1 to §4, it becomes possible to perform an accurate simulation that takes into account the influence of phenomena of different scales, such as the proximity effect and the etching loading phenomenon. In other words, the n calculation functions Z1(X,Y) to Zn(X,Y) provide the same operational effect as the image pyramid PP consisting of the n hierarchical images P1 to Pn shown in FIG. 6.

 In short, in the additional embodiment described here, the calculation function providing unit 223 provides a plurality of n calculation functions Z1(X,Y) to Zn(X,Y) for calculating a plurality of n feature quantities x1 to xn with different ranges of consideration, from the feature quantity x1, which considers a narrow range in the vicinity of the evaluation point E(X,Y), to the feature quantity xn, which considers a wide range extending far from the evaluation point E(X,Y). The feature quantity calculation unit 222 then uses these n calculation functions to calculate the n feature quantities x1 to xn for each evaluation point.

 The basic configuration and basic operation of the feature quantity extraction unit 220 shown in FIG. 32 have been described above. The n feature quantities x1 to xn calculated by the feature quantity extraction unit 220 are supplied to the bias estimation unit 130, whose configuration and operation are as already described in §3.

 That is, as shown in FIG. 32, the bias estimation unit 130 has a feature quantity input unit 131, which inputs the feature quantities x1 to xn extracted by the feature quantity extraction unit 220 for a specific evaluation point E, and an estimation calculation unit 132, which, on the basis of predetermined learning information L, outputs an estimated value corresponding to the feature quantities x1 to xn as the estimated process bias value y for that specific evaluation point E.

 In the example shown here, the estimation calculation unit 132 has a neural network whose input layer is the feature quantities x1 to xn input by the feature quantity input unit 131 and whose output layer is the estimated process bias value y. As explained in §3, this neural network performs the process bias estimation using, as the learning information L, parameters obtained in a learning stage that used both the dimension values obtained by actually measuring the dimensions of real figure patterns 20 formed on a real substrate S by a lithography process using a large number of test pattern figures, and the feature quantities obtained from each test pattern figure. The estimation calculation unit 132 also performs processing to obtain, as the estimated process bias value y for an evaluation point E located on the outline of a given figure, an estimated value of the displacement of the evaluation point E in the direction normal to that outline.
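 Purely as an illustrative sketch (the network size, activation function, and weight values below are assumptions, not the disclosed learned information L), the forward pass of such a network, mapping the feature quantities x1 to xn to the estimated process bias y, can be written as:

```python
import math

# Hypothetical one-hidden-layer network: input layer = features x1..xn,
# output = estimated process bias y. The weights W1, b1, W2, b2 stand in
# for parameters that would be obtained in the learning stage.

def forward(x, W1, b1, W2, b2):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Toy parameters for n = 3 features and 2 hidden units (made up):
W1 = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]
b1 = [0.0, 0.1]
W2 = [0.7, -0.5]
b2 = 0.02
y = forward([0.8, 0.1, 0.3], W1, b1, W2, b2)  # estimated bias for one point
```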

 When the additional embodiment described here is viewed as a method invention, the method is a figure pattern shape estimation method that estimates the shape of a real figure pattern 20 formed on a real substrate S by simulating a lithography process that uses the original figure pattern 10. The method is realized by causing a computer to execute: an original figure pattern input stage of inputting the original figure pattern 10, which includes information on outlines indicating the boundaries between the inside and outside of figures; an evaluation point setting stage of setting evaluation points E at predetermined positions on the outlines; a feature quantity extraction stage of extracting, for the original figure pattern 10, feature quantities x indicating the features around each evaluation point E; and a process bias estimation stage of estimating, on the basis of the feature quantities x, a process bias y indicating the amount of displacement between the position of the evaluation point E on the original figure pattern 10 and its position on the real figure pattern 20.

 Moreover, the feature quantity extraction stage includes a rectangular aggregate replacement stage of replacing the figures included in the original figure pattern 10 with aggregates of rectangles, and a feature quantity calculation stage of calculating, for each evaluation point E, the feature quantities x on the basis of its positional relationship to the rectangles positioned around it; and the process bias estimation stage includes an estimation calculation stage of obtaining an estimated value corresponding to the feature quantities x on the basis of the learning information L obtained in a previously performed learning stage, and outputting the obtained estimated value as the estimated process bias value y for the evaluation point E.

 <5.2 Specific Examples of the Calculation Functions>
 FIG. 35(b) showed an example of the basic form of the n calculation functions Z1(X,Y) to Zn(X,Y) provided by the calculation function providing unit 223. A more detailed configuration of these calculation functions is described here.

 In the example shown in FIG. 35(b), the k-th (1≦k≦n) calculation function Zk(X,Y) is expressed by the general formula
   Zk(X,Y) = Σi=1~q [K·fhi(σk)·fvi(σk)]
shown in the first line of FIG. 36, and the function value of this function Zk(X,Y) is output from the feature quantity calculation unit 222 as the value of the k-th feature quantity xk. Here, q is the total number of surrounding rectangles whose positional relationship to the evaluation point is to be obtained, i is a parameter indicating the rectangle number (1≦i≦q), σk is the k-th spread coefficient, and K is the feature quantity calculation coefficient. In the example shown here, K = 1/4, but since the feature quantity calculation coefficient K is a coefficient that determines the scaling factor of the feature quantity x, it may be set to any value.

 As described above, in the above formula, fhi(σk) is the horizontal-direction function for the i-th rectangle Fi, and serves as a factor indicating the positional relationship between the evaluation point E(X,Y) and the rectangle Fi in the horizontal direction (X-axis direction). Meanwhile, fvi(σk) is the vertical-direction function for the i-th rectangle Fi, and serves as a factor indicating the positional relationship between the evaluation point E(X,Y) and the rectangle Fi in the vertical direction (Y-axis direction). Specific examples of the horizontal-direction function fhi(σk) and the vertical-direction function fvi(σk) are described below.

 The second line of FIG. 36 shows a specific example of the horizontal-direction function fhi(σk):
   fhi(σk)=erf[(X-Li)/σk]-erf[(X-Ri)/σk]
and the third line of FIG. 36 shows a specific example of the vertical-direction function fvi(σk):
   fvi(σk)=erf[(Y-Bi)/σk]-erf[(Y-Ti)/σk]
As described above, i is a parameter indicating the rectangle number (1 ≦ i ≦ q), q is the total number of rectangles, k is a parameter indicating the calculation function number (1 ≦ k ≦ n), n is the total number of feature quantities (the total number of calculation functions), and σk is the k-th spread coefficient.
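These two functions map directly onto the error function in Python's standard library. The following is a minimal sketch, not part of the embodiment itself; the function names fhi and fvi are taken from the text, and the sample rectangle coordinates are hypothetical:

```python
from math import erf

def fhi(X, Li, Ri, sigma):
    # Horizontal-direction function: erf[(X-Li)/sigma] - erf[(X-Ri)/sigma]
    return erf((X - Li) / sigma) - erf((X - Ri) / sigma)

def fvi(Y, Bi, Ti, sigma):
    # Vertical-direction function: erf[(Y-Bi)/sigma] - erf[(Y-Ti)/sigma]
    return erf((Y - Bi) / sigma) - erf((Y - Ti) / sigma)

# For a rectangle spanning X in [0, 10] with sigma = 1, the value at the
# center X = 5 is erf(5) - erf(-5), essentially the upper limit 2, while
# far outside the rectangle the value decays toward 0.
```

As the text explains below, fhi is symmetric about the X coordinate of the rectangle's centroid, and fvi behaves the same way in the Y direction.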

 Meanwhile, Li, Ri, Ti, and Bi are the coordinate values of the four sides of the i-th regular rectangle Fi, as shown in the lower part of FIG. 36. Specifically, Li is the X coordinate of the left side L of the rectangle Fi, Ri is the X coordinate of its right side R, Ti is the Y coordinate of its upper side T, and Bi is the Y coordinate of its lower side B. A regular rectangle Fi arranged in the XY two-dimensional orthogonal coordinate system can be represented, for example, by the coordinates (Li,Bi) of its lower left corner point LB and the coordinates (Ri,Ti) of its upper right corner point RT. Further, X and Y, substituted into the above functions as variables, are the X and Y coordinates of the evaluation point E(X,Y) for which the feature quantity x is to be calculated, as shown in the lower part of FIG. 36.

 The rectangle aggregate replacement unit 221 replaces the figures contained in the original figure pattern 10 with a plurality of regular rectangles, and can supply the feature quantity calculation unit 222 with data representing each regular rectangle by the coordinates (Li,Bi) and (Ri,Ti). The feature quantity calculation unit 222 can therefore calculate the values of the horizontal-direction function fhi(σk) and the vertical-direction function fvi(σk) by substituting, as variables, the coordinate values Li, Bi, Ri, and Ti representing the rectangle Fi supplied from the rectangle aggregate replacement unit 221 and the coordinate values X and Y representing the evaluation point E(X,Y) supplied from the evaluation point setting unit 110.

 As for the value of the spread coefficient σk, the spread coefficient σ1 in the first (k = 1) calculation function Z1(X,Y) may be set to, for example, σ1 = 1, with σk set to increase as k increases. Specific numerical examples of σk are described later.

 The function erf appearing in the horizontal-direction function fhi(σk) and the vertical-direction function fvi(σk) is the function generally called the error function. FIG. 37 illustrates this error function erf(ξ). The error function erf(ξ) is defined by the formula shown in FIG. 37(a), and for any variable ξ takes a value in the range -1 ≦ erf(ξ) ≦ +1. As shown in FIG. 37(b), +erf(ξ) is a function whose value monotonically increases from -1 to +1 as ξ increases, and as shown in FIG. 37(c), -erf(ξ) is a function whose value monotonically decreases from +1 to -1 as ξ increases.
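The range and monotonicity properties stated here are easy to confirm numerically; a small sketch using Python's built-in math.erf (the sample points are arbitrary):

```python
from math import erf

# erf takes values in [-1, +1], is monotonically increasing, and is odd;
# -erf is correspondingly monotonically decreasing (cf. FIG. 37).
xs = (-3.0, -1.0, 0.0, 1.0, 3.0)
vals = [erf(x) for x in xs]
```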

 Next, based on the above properties of the error function erf(ξ), the meaning of the horizontal-direction function fhi(σk) is explained with reference to FIG. 38. FIG. 38(a) shows the mutual positional relationship between the i-th rectangle Fi arranged in the XY two-dimensional orthogonal coordinate system and evaluation points E arranged at arbitrary positions. Since the rectangle Fi is a regular rectangle, its upper and lower sides are parallel to the X axis and its left and right sides are parallel to the Y axis. Because the horizontal-direction function fhi(σk) is a factor indicating the positional relationship in the horizontal direction, FIG. 38(a) shows eleven evaluation points E1 to E11 arranged at equal intervals on the horizontal line indicated by the broken line. The points Li and Ri on the coordinate axis X indicate the X coordinates of the left and right sides of the rectangle Fi.

 The positional relationship in the horizontal direction between a particular evaluation point E and the rectangle Fi is quantified using a left-side position deviation, indicating the distance between the left side L of the rectangle Fi and the evaluation point E, and a right-side position deviation, indicating the distance between the right side R of the rectangle Fi and the evaluation point E. Consider first quantifying the left-side position deviation. Focusing on the left side L of the rectangle Fi (in the figure, the left side L is drawn with a thick line), the left-side position deviation of each of the evaluation points E1 to E11 corresponds to the distance between the X coordinate Li of the left side L and the X coordinate of each evaluation point, and the horizontal-direction function fhi(σk) takes a function value corresponding to this distance.

 Here, with k = 1, consider the horizontal-direction function fhi(σ1) included in the first calculation function Z1(X,Y). Although σ1 can be set to any value, if, for example, σ1 = 1, then
   fhi(σ1)=erf(X-Li)-erf(X-Ri)
First, consider what kind of function +erf(X-Li), the first term on the right side of the above formula, is.

 FIG. 38(b) shows the graph of the error function +erf(X-Li) positioned with reference to the left side L of the rectangle Fi shown in FIG. 38(a). As described above, the error function +erf(ξ) is a function whose value monotonically increases from -1 to +1 as ξ increases, as shown in FIG. 37(b). If the graph of +erf(X-Li) is positioned so that its value becomes 0 at the left side L of the rectangle Fi, the graph shown in FIG. 38(b) is obtained. The horizontal axis of this graph is "X-Li", and the value of +erf(X-Li) is 0 for an evaluation point located at X-Li = 0, that is, at the position of the left side L where the X coordinate equals Li (the evaluation point E3 in FIG. 38(a)).

 Meanwhile, for evaluation points located to the left of the left side L, such as the evaluation points E2 and E1, the value of +erf(X-Li) is negative. For example, for the evaluation points E2 and E1, the negative function values e2 and e1 are obtained by substituting their X coordinates into the error function +erf(X-Li). In contrast, for evaluation points located to the right of the left side L, such as the evaluation points E4, E5, ..., the value of +erf(X-Li) is positive. For example, for the evaluation points E4 and E5, the positive function values e4 and e5 are obtained by substituting their X coordinates into the error function +erf(X-Li) (the values e1, etc. indicate the ordinate values of the points e1, etc. on the graph).

 However, as the variable value "X-Li" increases, the function value +erf(X-Li) eventually reaches the upper limit +1 and saturates. In the illustrated example, the function values e8, e9, e10, and e11 are therefore all at the upper limit +1, and the function value is likewise +1 for evaluation points located further to the right (not shown). Conversely, as the variable value "X-Li" decreases, the function value +erf(X-Li) eventually reaches the lower limit -1 and saturates.

 Next, consider what kind of function -erf(X-Ri), the second term on the right side of the formula
   fhi(σ1)=erf(X-Li)-erf(X-Ri)
is. Like FIG. 38(a), FIG. 39(a) shows the positional relationship in the horizontal direction between the i-th rectangle Fi arranged in the XY two-dimensional orthogonal coordinate system and the eleven evaluation points E1 to E11. As described above, this horizontal positional relationship is quantified by the left-side position deviation, indicating the distance from the left side L, and the right-side position deviation, indicating the distance from the right side R. Consider here quantifying the right-side position deviation. Focusing on the right side R of the rectangle Fi (in the figure, the right side R is drawn with a thick line), the right-side position deviation of each of the evaluation points E1 to E11 corresponds to the distance between the X coordinate Ri of the right side R and the X coordinate of each evaluation point, and the horizontal-direction function fhi(σ1) takes a function value corresponding to this distance.

 FIG. 39(b) shows the graph of the error function -erf(X-Ri) positioned with reference to the right side R of the rectangle Fi shown in FIG. 39(a). As described above, the error function -erf(ξ) is a function whose value monotonically decreases from +1 to -1 as ξ increases, as shown in FIG. 37(c). If the graph of -erf(X-Ri) is positioned so that its value becomes 0 at the right side R of the rectangle Fi, the graph shown in FIG. 39(b) is obtained. The horizontal axis of this graph is "X-Ri", and the value of -erf(X-Ri) is 0 for an evaluation point located at X-Ri = 0, that is, at the position of the right side R where the X coordinate equals Ri (the evaluation point E8 in FIG. 39(a)).

 Meanwhile, for evaluation points located to the right of the right side R, such as the evaluation points E9 and E10, the value of -erf(X-Ri) is negative. For example, for the evaluation points E9 and E10, the negative function values e9 and e10 are obtained by substituting their X coordinates into the error function -erf(X-Ri). In contrast, for evaluation points located to the left of the right side R, such as the evaluation points E7, E6, E5, ..., the value of -erf(X-Ri) is positive. For example, for the evaluation points E7 and E6, the positive function values e7 and e6 are obtained by substituting their X coordinates into the error function -erf(X-Ri).

 However, as the variable value "X-Ri" increases, the function value -erf(X-Ri) eventually reaches the lower limit -1 and saturates. Conversely, as the variable value "X-Ri" decreases, the function value -erf(X-Ri) eventually reaches the upper limit +1 and saturates. In the illustrated example, the function values e1, e2, and e3 are therefore all at the upper limit +1.

 In the end, the horizontal-direction function fhi(σ1) defined by the formula
   fhi(σ1)=erf(X-Li)-erf(X-Ri)
represents the sum of the function value +erf(X-Li), which corresponds to the left-side position deviation indicating the distance from the left side L, and the function value -erf(X-Ri), which corresponds to the right-side position deviation indicating the distance from the right side R. With σ1 = 1, it is the sum of the function +erf(X-Li) shown in FIG. 38(b) and the function -erf(X-Ri) shown in FIG. 39(b). Hereinafter, on the premise that σ1 = 1, the horizontal-direction function fhi(σ1) is written simply as fhi.

 Like FIGS. 38(a) and 39(a), FIG. 40(a) shows the positional relationship in the horizontal direction between the i-th rectangle Fi arranged in the XY two-dimensional orthogonal coordinate system and the eleven evaluation points E1 to E11. Here, the left side L, which is the reference for the left-side position deviation, and the right side R, which is the reference for the right-side position deviation, are drawn with thick lines. FIG. 40(b) is a graph showing the sum of the function +erf(X-Li) shown in FIG. 38(b) and the function -erf(X-Ri) shown in FIG. 39(b), that is, the graph of the horizontal-direction function fhi (the horizontal axis is the coordinate value X, the variable of the horizontal-direction function fhi).

 As illustrated, the graph of this horizontal-direction function fhi is bilaterally symmetric about the position of the centroid G of the rectangle Fi. Also, as shown in FIG. 38(b), the left and right ends of the function +erf(X-Li) take the saturation values -1 and +1, and as shown in FIG. 39(b), the left and right ends of the function -erf(X-Ri) take the saturation values +1 and -1. The graph of the horizontal-direction function fhi shown in FIG. 40(b) therefore describes a mountain-shaped curve that generally peaks near the center (the center may be slightly depressed in some cases) and gently decreases toward the left and right, with both ends at 0. The width of this mountain-shaped curve varies according to the horizontal width dX (width in the X-axis direction) of the rectangle Fi.

 The horizontal-direction function fhi represented by a graph with such a curve is, in the end, a function that gives a larger value for the horizontal positional relationship the closer an evaluation point E is to the peak position of the graph. For example, in the example shown in FIG. 40, evaluation points whose X coordinates are closer to the X coordinate of the centroid G, such as the evaluation points E5 and E6, are given larger function values such as e5 and e6 (the upper limit is 2). Conversely, evaluation points whose X coordinates are farther from the X coordinate of the centroid G, such as the evaluation points E1 and E11, are given smaller function values such as e1 and e11 (since the graph can take negative values, the lower limit is -2).
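This mountain-shaped behavior is straightforward to check numerically. The following is a self-contained sketch with Python's math.erf, using a hypothetical rectangle spanning X in [0, 10] and σ1 = 1; the values peak near the centroid's X coordinate and decay toward 0 away from it:

```python
from math import erf

Li, Ri = 0.0, 10.0          # left/right side X coordinates of a sample rectangle Fi
centroid_x = (Li + Ri) / 2  # X coordinate of the centroid G

def fhi(X, sigma=1.0):
    # fhi(sigma1) = erf(X - Li) - erf(X - Ri), with sigma1 = 1
    return erf((X - Li) / sigma) - erf((X - Ri) / sigma)

# Sample the function at increasing distances from the centroid G:
# the values decrease monotonically, from near the upper limit 2 at the
# peak down toward 0 well outside the rectangle.
samples = [fhi(centroid_x + d) for d in (0.0, 4.0, 6.0, 12.0)]
```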

 The meaning of the horizontal-direction function fhi has been explained above, and the meaning of the vertical-direction function fvi is analogous. That is, whereas the horizontal-direction function fhi is a factor indicating the positional relationship between a particular evaluation point E of interest and the i-th rectangle Fi in the horizontal direction (X-axis direction), the vertical-direction function fvi is a factor indicating the positional relationship between that evaluation point E and the i-th rectangle Fi in the vertical direction (Y-axis direction).

 That is, for the horizontal-direction function shown in the second line of FIG. 36,
   fhi(σk)=erf[(X-Li)/σk]-erf[(X-Ri)/σk]
replacing the variable X with the variable Y, the coordinate Li of the left side L with the coordinate Bi of the lower side B, and the coordinate Ri of the right side R with the coordinate Ti of the upper side T yields the vertical-direction function shown in the third line of FIG. 36:
   fvi(σk)=erf[(Y-Bi)/σk]-erf[(Y-Ti)/σk]
If σ1 = 1, the vertical-direction function fvi(σ1) becomes
   fvi(σ1)=erf(Y-Bi)-erf(Y-Ti)
Here again, on the premise that σ1 = 1, the vertical-direction function fvi(σ1) is written simply as fvi.

 Like the graph of the horizontal-direction function fhi shown in FIG. 40(b), the graph of this vertical-direction function fvi is also a mountain-shaped curve that generally peaks near the center and gently decreases toward either side. However, the horizontal axis of the graph is the Y axis, and the width of this mountain-shaped curve varies according to the vertical width dY (width in the Y-axis direction) of the rectangle Fi.

 FIG. 41 shows the positional relationship between the rectangle Fi and the graphs of the horizontal-direction function fhi and the vertical-direction function fvi. As shown in the lower part of the figure, the graph of the horizontal-direction function fhi gives the function value fhi with the X coordinate as the variable; it is symmetric about the X coordinate of the centroid G of the rectangle Fi (the center of the graph is indicated by a dash-dotted line) and describes a mountain-shaped curve that generally peaks near the center. Meanwhile, the graph of the vertical-direction function fvi, shown on the left of the figure (mirrored, with the Y axis reversed, to align it with the coordinate system in which the rectangle Fi is arranged), gives the function value fvi with the Y coordinate as the variable; it is symmetric about the Y coordinate of the centroid G of the rectangle Fi (the center of the graph is indicated by a dash-dotted line) and likewise describes a mountain-shaped curve that generally peaks near the center.

 In the example shown in FIG. 41, the width of the graph of the horizontal-direction function fhi is greater than the width of the graph of the vertical-direction function fvi because the horizontal width of the rectangle Fi is greater than its vertical width. In other words, the width of the graph of the horizontal-direction function fhi corresponds to the horizontal width dX of the rectangle Fi (width in the X-axis direction: dX=Ri-Li), and the width of the graph of the vertical-direction function fvi corresponds to the vertical width dY of the rectangle Fi (width in the Y-axis direction: dY=Ti-Bi).

 Now, return once more to the calculation function shown in FIG. 36:
   Zk(X,Y)=Σi=1~q [K・fhi(σk)・fvi(σk)]
The right side of this formula contains the product fhi(σk)・fvi(σk). For the first calculation function Z1(X,Y), on the premise that σ1 = 1, the above formula becomes
   Z1(X,Y)=Σi=1~q [K・fhi・fvi]
and the right side is a formula that computes the product of the feature quantity calculation coefficient K, the horizontal-direction function fhi, and the vertical-direction function fvi for each of the first rectangle F1 through the q-th rectangle Fq, and takes the total sum.
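The summation above can be sketched in a few lines of Python. This is an illustrative implementation under stated assumptions, not the embodiment's actual code: the function name calc_feature is invented here, rectangles are given as (Li, Bi, Ri, Ti) tuples as in the text, and K = 1/4 follows the example value given earlier:

```python
from math import erf

def calc_feature(X, Y, rects, sigma=1.0, K=0.25):
    """Zk(X,Y) = sum over all q rectangles of K * fhi(sigma) * fvi(sigma).

    rects: list of (Li, Bi, Ri, Ti) tuples for the regular rectangles,
    as supplied by the rectangle aggregate replacement unit.
    """
    total = 0.0
    for Li, Bi, Ri, Ti in rects:
        fh = erf((X - Li) / sigma) - erf((X - Ri) / sigma)  # horizontal factor
        fv = erf((Y - Bi) / sigma) - erf((Y - Ti) / sigma)  # vertical factor
        total += K * fh * fv
    return total

# A rectangle far from the evaluation point contributes essentially
# nothing at sigma = 1, so these two feature values nearly coincide:
near_only = calc_feature(5, 2, [(0, 0, 10, 4)])
with_far  = calc_feature(5, 2, [(0, 0, 10, 4), (100, 100, 110, 104)])
```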

 As described above, the horizontal-direction function fhi is a factor indicating the positional relationship in the horizontal direction between a particular evaluation point E of interest and the i-th rectangle Fi, and the vertical-direction function fvi is a factor indicating the positional relationship in the vertical direction between that evaluation point E and the i-th rectangle Fi. Their product fhi・fvi (or K・fhi・fvi, further multiplied by the feature quantity calculation coefficient K) is therefore a factor indicating the positional relationship between the evaluation point E of interest and the i-th rectangle Fi in both the horizontal and vertical directions. Accordingly, the value of the calculation function Z1(X,Y), obtained as the total sum, is a quantity indicating the positional relationship, in both the horizontal and vertical directions, between the evaluation point E of interest and all q rectangles F1 to Fq located around it. In the additional embodiment described here, this quantity is adopted as the feature quantity x for that evaluation point E.

 For example, in the example shown in FIG. 41, the function value fhi, the factor indicating the positional relationship between the evaluation point E1 and the rectangle Fi in the horizontal direction, is given as the value of the horizontal-direction function fhi when the X coordinate of the evaluation point E1 is substituted into it, that is, the value e11. Meanwhile, the function value fvi, the factor indicating the positional relationship in the vertical direction, is given as the value of the vertical-direction function fvi when the Y coordinate of the evaluation point E1 is substituted into it, that is, the value e12. In the illustrated example, e11 > e12, because the evaluation point E1 is close to the rectangle Fi in the horizontal direction but somewhat distant from it in the vertical direction.

 In this case, the two-dimensional positional relationship between the evaluation point E1 and the rectangle Fi can be quantified by the product fhi・fvi (in the above example, the product e11・e12), which in practice is further multiplied by the feature quantity calculation coefficient K as a scaling factor. The first calculation function
   Z1(X,Y)=Σi=1~q [K・fhi・fvi]
obtains such a two-dimensional positional relationship for all q rectangles F1 to Fq and takes the total sum, so its value quantifies the overall positional relationship between the evaluation point E1 and the q surrounding rectangles. That function value is then output as the first feature quantity x1 for the evaluation point E1.

 The same applies to the second evaluation point E2 shown in FIG. 41. That is, the function value fhi, the factor indicating the positional relationship between the evaluation point E2 and the rectangle Fi in the horizontal direction, is given as the value of the horizontal-direction function fhi when the X coordinate of the evaluation point E2 is substituted into it, that is, the value e21, and the function value fvi, the factor indicating the positional relationship in the vertical direction, is given as the value of the vertical-direction function fvi when the Y coordinate of the evaluation point E2 is substituted into it, that is, the value e22.

 The same of course applies to the third evaluation point E3 shown in the upper right of FIG. 41. However, since the illustrated evaluation point E3 is quite far from the rectangle Fi, in practice fhi = 0 and fvi = 0, and their product is also 0. In other words, the presence of the distant rectangle Fi does not affect the feature quantity for the evaluation point E3.

 So far, the first calculation function Z1(X,Y), which includes the spread coefficient σ1, has been described taking the case of σ1 = 1 as an example. Next, the role of the spread coefficient σk (1 ≦ k ≦ n) in the calculation functions is explained with reference to FIG. 42.

 The upper part of FIG. 42 shows the i-th rectangle Fi with horizontal width dX, the middle part shows the graph of the horizontal-direction function fhi(σ1), which includes the spread coefficient σ1, and the lower part shows the graph of the horizontal-direction function fhi(σ2), which includes the spread coefficient σ2. Here, the horizontal-direction function fhi(σ1) is
   fhi(σ1)=erf[(X-Li)/σ1]-erf[(X-Ri)/σ1]
and the horizontal-direction function fhi(σ2) is
   fhi(σ2)=erf[(X-Li)/σ2]-erf[(X-Ri)/σ2]
The only difference between the two is whether the denominator on the right side is σ1 or σ2.

 The graphs in FIG. 42 show an example in which σ1 = 1 and σ2 = 2. As illustrated, the graph of fhi(σ2) in the lower part is wider than the graph of fhi(σ1) in the middle part. As already described, the width of this mountain-shaped graph basically corresponds to the width dX of the rectangle Fi, but the width can be increased or decreased by means of the spread coefficient σ. That is, doubling the spread coefficient σ doubles the width of the mountain-shaped graph.
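The widening effect of the spread coefficient can be checked numerically. The following is a minimal sketch (not part of the embodiment itself) of the horizontal function fhi(σ) using Python's standard math.erf; the rectangle bounds Li = 0 and Ri = 4 and the sample points are arbitrary illustrative values.

```python
from math import erf

def fhi(X, Li, Ri, sigma):
    # Horizontal function fhi(σ) = erf[(X-Li)/σ] - erf[(X-Ri)/σ]:
    # rises near the left side Li and falls near the right side Ri.
    return erf((X - Li) / sigma) - erf((X - Ri) / sigma)

Li, Ri = 0.0, 4.0               # rectangle Fi with width dX = 4

inside = fhi(2.0, Li, Ri, 1.0)  # deep inside: both erf terms saturate
far_s1 = fhi(6.0, Li, Ri, 1.0)  # 2 units right of Fi, with sigma1 = 1
far_s2 = fhi(6.0, Li, Ri, 2.0)  # same point, sigma2 = 2: wider "mountain"
print(inside, far_s1, far_s2)
```

With σ1 = 1 the value at X = 6 is almost 0, while with σ2 = 2 it is clearly nonzero, matching the doubled width of the mountain-shaped graph in FIG. 42.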

 In the example shown in FIG. 41, it was explained that when the first calculation function Z1(X,Y) with the spread coefficient σ1 = 1 is used, the distant rectangle Fi has no effect on the feature amount of the evaluation point E3. However, when the second calculation function Z2(X,Y) with the spread coefficient σ2 = 2 is used, the width of the mountain-shaped graph doubles, so the presence of the distant rectangle Fi does affect the feature amount of the evaluation point E3. Furthermore, when, for example, the third calculation function Z3(X,Y) with the spread coefficient σ3 = 4 is used, the influence of the rectangle Fi on the feature amount of the evaluation point E3 becomes even larger.

 With reference to FIG. 35(b), it was explained that the calculation functions Z1(X,Y) to Zn(X,Y) are used to calculate the n feature amounts x1 to xn for an evaluation point E. The only difference among the calculation functions Z1(X,Y) to Zn(X,Y) is that they contain the spread coefficients σ1 to σn, respectively. Any values may be set for the spread coefficients σ1 to σn as long as they differ from one another, but in practice it is preferable to set the k-th (1 ≤ k ≤ n) spread coefficient to σk = 2^(k-1). That is, the coefficients double successively: σ1 = 1, σ2 = 2, σ3 = 4, σ4 = 8, and so on.
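As a small illustration (a restatement of the doubling rule above, not code from the embodiment), the preferred series of spread coefficients can be generated as follows:

```python
def spread_coefficients(n):
    # sigma_k = 2**(k-1) for k = 1..n: each successive calculation
    # function considers a window roughly twice as wide as the previous.
    return [2.0 ** (k - 1) for k in range(1, n + 1)]

print(spread_coefficients(5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```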

 The feature amount x1 calculated using the first calculation function Z1(X,Y), whose spread coefficient σ is small, reflects only the positional relationship with rectangles located near the evaluation point E, whereas the feature amount xn calculated using the n-th calculation function Zn(X,Y), whose spread coefficient σ is large, also takes into account the positional relationship with rectangles located far from the evaluation point E.

 In this way, the n calculation functions Z1(X,Y) to Zn(X,Y) provided by the calculation function providing unit 223 are well suited for calculating a plurality of n feature amounts x1 to xn with different consideration ranges, from the feature amount x1, which considers a narrow range near the evaluation point E(X,Y), to the feature amount xn, which considers a wide range extending far from the evaluation point E(X,Y).

 In short, whereas the basic embodiment adopts a method of extracting the feature amounts x1 to xn using the image pyramid PP composed of the n hierarchical images P1 to Pn, the additional embodiment described here adopts a method of extracting the feature amounts x1 to xn using the n calculation functions Z1(X,Y) to Zn(X,Y) with different values of the spread coefficient σ. Whichever method is adopted, individual feature amounts representing various features from the vicinity of each evaluation point out to distant regions can be obtained, making it possible to perform an accurate simulation that takes into account phenomena of different scales, such as the proximity effect and the etching loading phenomenon.

 The calculation function used in the additional embodiment of the present invention has been described above with a specific example; the gist of this example can be summarized as follows. First, in an XY two-dimensional orthogonal coordinate system in which the positive X-axis points rightward and the positive Y-axis points upward, the rectangular aggregate replacement unit 221 shown in FIG. 32 replaces the figures included in the original figure pattern 10 with a rectangular aggregate 50 of regular rectangles Fi, each having an upper side T and a lower side B parallel to the X axis and a left side L and a right side R parallel to the Y axis. This is because, when the feature amount calculation unit 222 calculates the feature amount x for a given evaluation point E based on its positional relationship with the four sides of each surrounding rectangle Fi, the computational load is reduced if the rectangles Fi are regular rectangles.

 Then, for each regular rectangle Fi defined on the XY two-dimensional orthogonal coordinate system, the calculation function providing unit 223 may provide a calculation function that calculates the feature amount x based on the left-side position deviation "X-Li" indicating the distance of the evaluation point E from the left side L and the right-side position deviation "X-Ri" indicating its distance from the right side R in the X-axis direction, as well as the upper-side position deviation "Y-Ti" indicating the distance of the evaluation point E from the upper side T and the lower-side position deviation "Y-Bi" indicating its distance from the lower side B in the Y-axis direction.

 More specifically, for a given rectangle of interest Fi, a horizontal function fhi(σk) is defined as the sum of an X-axis monotonically increasing function (for example, +erf[(X-Li)/σk]) whose value increases monotonically with the variable and becomes 0 when the X coordinate value Li of the left side L of the rectangle Fi is given as the variable, and an X-axis monotonically decreasing function (for example, -erf[(X-Ri)/σk]) whose value decreases monotonically with the variable and becomes 0 when the X coordinate value Ri of the right side R of the rectangle Fi is given as the variable. Similarly, a vertical function fvi(σk) is defined as the sum of a Y-axis monotonically increasing function (for example, +erf[(Y-Bi)/σk]) whose value increases monotonically with the variable and becomes 0 when the Y coordinate value Bi of the lower side B of the rectangle Fi is given as the variable, and a Y-axis monotonically decreasing function (for example, -erf[(Y-Ti)/σk]) whose value decreases monotonically with the variable and becomes 0 when the Y coordinate value Ti of the upper side T of the rectangle Fi is given as the variable.

 Then, the quantity indicating the positional relationship of a given evaluation point of interest E with respect to a rectangle of interest Fi is calculated based on the product of the value of the horizontal function fhi(σk) taking the X coordinate value X of the evaluation point E as its variable and the value of the vertical function fvi(σk) taking the Y coordinate value Y of the evaluation point E as its variable (if necessary, this product may be multiplied by a feature amount calculation coefficient K for scaling), and the sum of these quantities over the rectangles located around the evaluation point E may be calculated as the feature amount x for that evaluation point E.
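The summation just described can be sketched as follows. The rectangle list, the spread coefficient, and the scaling constant K = 0.25 (chosen here only so that a point deep inside a large rectangle yields a value close to 1) are illustrative assumptions, not values prescribed by the embodiment.

```python
from math import erf

def feature(X, Y, rects, sigma, K=0.25):
    # Zk(X, Y) = sum of K * fhi(sigma) * fvi(sigma) over all rectangles,
    # where each rectangle is given as (Li, Ri, Bi, Ti): left, right,
    # bottom, top side coordinates.
    total = 0.0
    for Li, Ri, Bi, Ti in rects:
        fh = erf((X - Li) / sigma) - erf((X - Ri) / sigma)  # horizontal factor
        fv = erf((Y - Bi) / sigma) - erf((Y - Ti) / sigma)  # vertical factor
        total += K * fh * fv
    return total

# Single 8x8 rectangle; evaluation point at its center vs. far outside.
rects = [(0.0, 8.0, 0.0, 8.0)]
print(feature(4.0, 4.0, rects, 1.0))      # close to 1 (inside)
print(feature(100.0, 100.0, rects, 1.0))  # essentially 0 (far away)
```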

 In order to perform an accurate simulation that takes into account phenomena of different scales, it is preferable to calculate n feature amounts x1 to xn representing various features from the vicinity of each evaluation point E out to distant regions. To this end, the calculation function providing unit 223 may provide a plurality of n calculation functions Z1(X,Y) to Zn(X,Y), using functions with different degrees of monotonic increase or decrease, as calculation functions for calculating the n feature amounts x1 to xn with different consideration ranges.

 Specifically, the calculation function providing unit 223 prepares a calculation function including a monotonically increasing or monotonically decreasing function whose variable is the left-side position deviation "X-Li", the right-side position deviation "X-Ri", the upper-side position deviation "Y-Ti", or the lower-side position deviation "Y-Bi" divided by a spread coefficient σ, and provides the n calculation functions Z1(X,Y) to Zn(X,Y) by changing the value of this spread coefficient σ in n ways (σ1 to σn). As for the values of the n spread coefficients σ1 to σn, it is preferable to set σk = 2^(k-1), where σk denotes the k-th spread coefficient and the parameter k ranges over 1 ≤ k ≤ n.

 <5.3 Modifications of the Calculation Function>
 While §5.2 above presented a basic specific example of the calculation function, the calculation function used in the additional embodiment of the present invention is not limited to that example. For instance, the error function erf(ξ) need not necessarily be used as the monotonically increasing or monotonically decreasing function; other functions may be used instead. Several modifications of the calculation function are described here.

 (1) Calculation Function Taking the Dose Amount into Account
 In §2.1, the method of extracting feature amounts when an original figure pattern 10 with dose amounts is given was described for the basic embodiment with reference to FIGS. 12 and 13. Here, the method of extracting feature amounts when an original figure pattern 10 with dose amounts is given in the additional embodiment will be described with reference to FIG. 43.

 FIG. 43(a) is a plan view of a rectangular aggregate 60 created based on the figures included in the original figure pattern 10 with dose amounts. As illustrated, five regular rectangles F1d to F5d are defined on the XY two-dimensional orthogonal coordinate system. The shapes of the regular rectangles F1d to F5d constituting this rectangular aggregate 60 are exactly the same as those of the regular rectangles F1 to F5 constituting the rectangular aggregate 50 shown in FIG. 35(a), but a predetermined dose amount is defined for each of the regular rectangles F1d to F5d. These dose amounts are information that was originally attached to the figures included in the original figure pattern 10.

 The original figure pattern 10 with dose amounts includes, in addition to the contour information indicating the boundary between the inside and the outside of each figure, dose amount information for each figure in the lithography process. Accordingly, the rectangular aggregate replacement unit 221 recognizes the internal and external regions of each figure based on the original figure pattern 10, further recognizes the dose amount for each figure, and sets the respective dose amounts on the rectangles F1d to F5d corresponding to the figures. The rectangular aggregate 60 shown in FIG. 43(a) was created by such processing, and the dose amount information of the original figures is attached as-is to the individual rectangles F1d to F5d.

 When the rectangular aggregate replacement unit 221 has thus created the rectangular aggregate 60 having the dose-bearing rectangles F1d to F5d, the calculation function providing unit 223 provides a calculation function that includes the dose amounts set on the rectangles F1d to F5d as variables, and the feature amount calculation unit 222 calculates the feature amount x of a desired evaluation point E(X,Y) based on this calculation function.

 An example of a calculation function including such dose amounts as variables is shown in FIG. 43(b). The calculation function Zk(X,Y) shown there is the k-th (1 ≤ k ≤ n) calculation function, and the function value it yields is output as the k-th feature amount xk. The calculation function shown in FIG. 43(b),
   Zk(X,Y) = Σi=1~q [Di・K・fhi(σk)・fvi(σk)],
takes the form of the calculation function shown in the first line of FIG. 36,
   Zk(X,Y) = Σi=1~q [K・fhi(σk)・fvi(σk)],
with the feature amount calculation coefficient K further multiplied by the dose amount Di of the i-th rectangle.

 Whereas the feature amount calculation coefficient K is a common constant for scaling, the dose amount Di is an individual value set for each rectangle. For example, in the case shown in FIG. 43(a), the dose amounts D1, D2, and D3 used in the calculations for the rectangles F1d, F2d, and F3d (the calculations for i = 1, 2, 3), for which a dose of 100% is set, are all 1, while the dose amount D4 used in the calculation for the rectangle F4d (the calculation for i = 4), for which a dose of 50% is set, is 0.5, and the dose amount D5 used in the calculation for the rectangle F5d (the calculation for i = 5), for which a dose of 10% is set, is 0.1.

 As a result, in the case of the example shown in FIG. 43(a), the specific arithmetic expression of the calculation function Zk(X,Y) is as shown in FIG. 43(c). The expression for the i-th rectangle includes multiplication by the dose amount Di, so a feature amount xk that takes the dose amount of each rectangle into account is obtained. In the basic embodiment described in §2.1, the feature amount xk taking the dose amounts into account was extracted by creating the image pyramid PP using the dose density map M3 shown in FIG. 13. In the additional embodiment described here, by contrast, the feature amount xk taking the dose amounts into account is calculated, as shown in FIG. 43(b), by an operation based on a calculation function that includes the dose amounts as variables.
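A sketch of the dose-weighted version, under the same illustrative assumptions as before (K = 0.25 for scaling; coordinates chosen arbitrarily), multiplies each rectangle's term by its dose Di:

```python
from math import erf

def feature_with_dose(X, Y, rects, sigma, K=0.25):
    # Zk(X, Y) = sum of Di * K * fhi(sigma) * fvi(sigma), where each
    # rectangle is (Li, Ri, Bi, Ti, Di) and Di is its dose
    # (1.0 = 100%, 0.5 = 50%, ...).
    total = 0.0
    for Li, Ri, Bi, Ti, Di in rects:
        fh = erf((X - Li) / sigma) - erf((X - Ri) / sigma)
        fv = erf((Y - Bi) / sigma) - erf((Y - Ti) / sigma)
        total += Di * K * fh * fv
    return total

full = feature_with_dose(4.0, 4.0, [(0.0, 8.0, 0.0, 8.0, 1.0)], 1.0)
half = feature_with_dose(4.0, 4.0, [(0.0, 8.0, 0.0, 8.0, 0.5)], 1.0)
print(full, half)  # the 50% rectangle contributes exactly half
```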

 (2) Calculation Function Premised on a Rectangular Aggregate Corresponding to Figure Edges
 In §2.1, as a modification of the basic embodiment, the process of extracting the feature amount x by creating the image pyramid PP using an edge length density map M2 as shown in FIG. 11 was described. This edge length density map M2 is information that focuses on the arrangement of the contour lines (edges) of the figures included in the original figure pattern 10, rather than on the regions inside and outside the figures.

 The additional embodiment described here can likewise handle figures as contour information instead of region information. In the examples described so far, the rectangular aggregate replacement unit 221 divides the regions of the figures included in the original figure pattern 10 and replaces them with the rectangular aggregate 50. For example, if the original figure pattern 10 includes a figure as shown in FIG. 44(a), that figure is divided vertically and replaced with an aggregate of two rectangles F1 and F2, shown hatched in FIG. 44(b). Such a replacement method is based on the idea of handling a figure as region information.

 In the modification described here, the rectangular aggregate replacement unit 221 instead recognizes the unit line segments constituting the contour line of each figure based on the original figure pattern 10 and sets a minute width for each unit line segment, thereby replacing the figures included in the original figure pattern 10 with an aggregate of rectangles having that minute width. Such a replacement method is based on the idea of handling a figure as contour information.

 FIG. 45 is a plan view showing the process by which the rectangular aggregate replacement unit 221 replaces a figure with a rectangular aggregate by setting a minute width on each unit line segment constituting the figure's contour line. The figure shown by broken lines in FIGS. 45(a) and (b) is the same as the figure shown by a solid line in FIG. 44(a). This figure is composed of six sides, and in the modification described here each of these six sides is replaced with a rectangle having a minute width. As a result, the figure is replaced with an aggregate of six elongated rectangles.

 Of the six sides constituting the contour of this figure, three are sides oriented horizontally (parallel to the X axis when placed in the XY two-dimensional coordinate system), referred to here as horizontal unit line segments. The remaining three are sides oriented vertically (parallel to the Y axis when placed in the XY two-dimensional coordinate system), referred to here as vertical unit line segments. Original figure patterns 10 used for semiconductor integrated circuits and the like frequently employ figures whose contour lines consist solely of horizontal and vertical unit line segments in this way.

 Accordingly, the rectangular aggregate replacement unit 221 replaces each horizontal unit line segment with a horizontal rectangle by setting a minute width in the vertical direction, and replaces each vertical unit line segment with a vertical rectangle by setting a minute width in the horizontal direction. FIG. 45(a) is a plan view showing the state in which the three horizontal unit line segments have been replaced with horizontal rectangles Fh1, Fh2, and Fh3 (the rectangles forming the hatched regions), and FIG. 45(b) is a plan view showing the state in which the three vertical unit line segments have been replaced with vertical rectangles Fv1, Fv2, and Fv3 (the rectangles forming the hatched regions).

 Since the data constituting the original figure pattern 10 includes information indicating the geometric position of the contour line of each figure (for example, the coordinate values of both end points of each unit line segment), the data representing the horizontal rectangles Fh1, Fh2, and Fh3 and the vertical rectangles Fv1, Fv2, and Fv3 can be created based on the data constituting the original figure pattern 10.

 For example, in the case of the horizontal rectangle Fh1 shown in FIG. 45(a), the X coordinate value of its left side can be defined as the X coordinate value of the left end of the horizontal unit line segment constituting the upper side of the original figure shown by the broken line, and the X coordinate value of its right side can be defined as the X coordinate value of the right end of that horizontal unit line segment. The Y coordinate value of its upper side can be defined as T1 + w, where T1 is the Y coordinate value of the horizontal unit line segment, and the Y coordinate value of its lower side can be defined as B1 - w, where B1 is the Y coordinate value of the horizontal unit line segment. Here, w is a value corresponding to half the minute width and may be set to an arbitrary value.

 Similarly, in the case of the vertical rectangle Fv1 shown in FIG. 45(b), the Y coordinate value of its upper side can be defined as the Y coordinate value of the upper end of the vertical unit line segment constituting the left side of the original figure shown by the broken line, and the Y coordinate value of its lower side can be defined as the Y coordinate value of the lower end of that vertical unit line segment. The X coordinate value of its right side can be defined as R1 + w, where R1 is the X coordinate value of the vertical unit line segment, and the X coordinate value of its left side can be defined as L1 - w, where L1 is the X coordinate value of the vertical unit line segment. Here again, w is a value corresponding to half the minute width and may be set to an arbitrary value.

 In the end, in the modification shown here, the six-sided figure shown in FIG. 44(a) is replaced with the six regular rectangles Fh1, Fh2, Fh3, Fv1, Fv2, and Fv3 shown hatched in FIGS. 45(a) and (b). These regular rectangles are elongated rectangles with the minute width 2w, arranged along the unit line segments constituting the contour line of the original figure. The feature amount calculation unit 222 may then calculate the feature amount x for a given evaluation point E based on its positional relationships with these six rectangles.
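The replacement of an axis-parallel contour with thin edge rectangles can be sketched as follows; the vertex list for the six-sided figure and the half-width w = 0.1 are illustrative stand-ins for the data of FIG. 44(a), not values taken from the embodiment.

```python
def edge_rects(edges, w):
    # Replace each axis-parallel unit line segment of a contour with a
    # thin rectangle of total width 2w, returned as (Li, Ri, Bi, Ti).
    horizontal, vertical = [], []
    for (x1, y1), (x2, y2) in edges:
        if y1 == y2:    # horizontal segment: widen by +/-w vertically
            horizontal.append((min(x1, x2), max(x1, x2), y1 - w, y1 + w))
        elif x1 == x2:  # vertical segment: widen by +/-w horizontally
            vertical.append((x1 - w, x1 + w, min(y1, y2), max(y1, y2)))
    return horizontal, vertical

# Six-sided (L-shaped) contour traversed counterclockwise.
pts = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
edges = list(zip(pts, pts[1:] + pts[:1]))
h, v = edge_rects(edges, 0.1)
print(len(h), len(v))  # 3 3  (qh = 3 horizontal, qv = 3 vertical rectangles)
```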

 Next, a specific example of the calculation function that the calculation function providing unit 223 provides to the feature amount calculation unit 222 in order to perform such a calculation will be described. FIG. 46 shows an example of the calculation function Zk(X,Y) applied to the rectangular aggregate shown in FIG. 45 (the aggregate of six hatched rectangles). This calculation function Zk(X,Y) is a function for extracting the k-th (1 ≤ k ≤ n) feature amount xk and, as shown in the upper part of FIG. 46, takes the form
   Zk(X,Y)
   = Σi=1~qh [K・fhi(σk)・fvi′(σk)]
   + Σi=1~qv [K・fhi′(σk)・fvi(σk)].
Here, the first term on the right side is the term for the horizontal rectangles Fh1, Fh2, and Fh3 shown in FIG. 45(a), and the second term on the right side is the term for the vertical rectangles Fv1, Fv2, and Fv3 shown in FIG. 45(b).

 And, as shown in the middle part of FIG. 46,
   fhi(σk) = erf[(X-Li)/σk] - erf[(X-Ri)/σk]
   fvi(σk) = erf[(Y-Bi)/σk] - erf[(Y-Ti)/σk]
   fhi′(σk) = erf[(X-(Li-w))/σk] - erf[(X-(Ri+w))/σk]
   fvi′(σk) = erf[(Y-(Bi-w))/σk] - erf[(Y-(Ti+w))/σk].

 Here, Li, Ri, Bi, and Ti are coordinate values indicating the end point positions or segment positions of the i-th horizontal unit line segment or the i-th vertical unit line segment, as shown in FIG. 45. For example, in the case of the horizontal rectangle Fh1 shown in FIG. 45(a) (the case i = 1), the X coordinate value of the left end of the horizontal unit line segment shown by the broken line is L1, the X coordinate value of its right end is R1, and the Y coordinate value of the segment gives both T1 and B1. Similarly, in the case of the vertical rectangle Fv1 shown in FIG. 45(b) (the case i = 1), the Y coordinate value of the upper end of the vertical unit line segment shown by the broken line is T1, the Y coordinate value of its lower end is B1, and the X coordinate value of the segment gives both L1 and R1.

 Also, as shown in the lower part of FIG. 46, qh is the total number of horizontal rectangles and qv is the total number of vertical rectangles (qh = 3 and qv = 3 in the example shown in FIG. 45). The parameter i in the term for the horizontal rectangles Fh1, Fh2, and Fh3 (the second line of FIG. 46) ranges over i = 1 to qh, and the parameter i in the term for the vertical rectangles Fv1, Fv2, and Fv3 (the third line of FIG. 46) ranges over i = 1 to qv. The parameter k indicates the number of the calculation function, and the k-th spread coefficient σk is used in the k-th calculation function Zk(X,Y). The coefficient K is, as described above, a feature amount calculation coefficient for scaling. That such a calculation function Zk(X,Y) can calculate the feature amount xk for a specific evaluation point E(X,Y) will be readily understood in light of the contents of §5.2.
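The two-term sum of FIG. 46 can be sketched as below. Horizontal segments are represented as (Li, Ri, Yi) with endpoints (Li, Yi)-(Ri, Yi), so that Ti = Bi = Yi, and vertical segments as (Bi, Ti, Xi) with Li = Ri = Xi; the half-width w, the spread coefficient, and K are illustrative parameters.

```python
from math import erf

def edge_feature(X, Y, h_segs, v_segs, w, sigma, K=1.0):
    # Zk(X,Y) = sum of K * fhi(sigma) * fvi'(sigma) over horizontal
    # segments plus K * fhi'(sigma) * fvi(sigma) over vertical segments,
    # where the primed functions push both bounds outward by w.
    total = 0.0
    for Li, Ri, Yi in h_segs:
        fh = erf((X - Li) / sigma) - erf((X - Ri) / sigma)
        fvp = erf((Y - (Yi - w)) / sigma) - erf((Y - (Yi + w)) / sigma)
        total += K * fh * fvp
    for Bi, Ti, Xi in v_segs:
        fhp = erf((X - (Xi - w)) / sigma) - erf((X - (Xi + w)) / sigma)
        fv = erf((Y - Bi) / sigma) - erf((Y - Ti) / sigma)
        total += K * fhp * fv
    return total

# A single horizontal segment from (0, 0) to (4, 0) with half-width w = 0.5.
near = edge_feature(2.0, 0.0, [(0.0, 4.0, 0.0)], [], 0.5, 1.0)
far = edge_feature(2.0, 5.0, [(0.0, 4.0, 0.0)], [], 0.5, 1.0)
print(near, far)  # large near the edge, essentially 0 far from it
```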

 (3) Calculation Method That Reduces the Computational Load
 Here, a method for reducing the computational load when the feature amount calculation unit 222 calculates the feature amount xk by computation using the calculation function Zk(X,Y) will be described. In §5.1, with reference to the example shown in FIG. 35(a), the procedure of calculating the feature amount x for a given evaluation point E(X,Y) was described as computing values indicating the positional relationships between the evaluation point E(X,Y) and the five rectangles F1 to F5 and taking their sum as the feature amount x. In practice, however, it is not always necessary to compute the values indicating the positional relationships with all of the rectangles F1 to F5.

 For example, in the case of the example shown in FIG. 41, when calculating the feature amount x for the evaluation point E1, it is meaningful to take the rectangle Fi into account and compute a value indicating the positional relationship with the rectangle Fi. This is because the evaluation point E1 is located in the vicinity of the rectangle Fi, so the values of the horizontal direction function fhi and the vertical direction function fvi are not zero. On the other hand, when calculating the feature amount x for the evaluation point E3, there is no need to take the rectangle Fi into account. As illustrated, the evaluation point E3 is located far from the rectangle Fi, so the values of the horizontal direction function fhi and the vertical direction function fvi are both zero; the value indicating the positional relationship with the rectangle Fi therefore makes no contribution to the value of the calculation function Zk(X, Y).

 Thus, considering that the graphs of the horizontal direction function fhi and the vertical direction function fvi draw mountain-shaped curves whose values become zero at the left and right ends, it can be seen that rectangles located sufficiently far away need not be considered when calculating the feature amount x for a given evaluation point E. In practice, therefore, when the feature amount calculation unit 222 calculates the feature amount x for an evaluation point E, it suffices to define a reference circle C having a predetermined radius r centered on the evaluation point E and to perform an operation that considers only the positional relationships with the rectangles belonging to a predetermined neighborhood range determined by this reference circle C.

 FIG. 47 is a plan view showing an example in which a reference circle C having a predetermined radius r is defined on a rectangle aggregate in order to make the calculation more efficient in this way. In the illustrated example, the rectangle aggregate contains a total of twelve rectangles F1 to F12. Consider calculating the k-th feature amount xk for the evaluation point E set on the right side of the rectangle F6 using the k-th calculation function Zk(X, Y). In the embodiments described so far, all twelve rectangles F1 to F12 were subject to the operation; in the modification described here, however, a reference circle C having a predetermined radius r centered on the evaluation point E is defined as illustrated, and the operation considers only the rectangles belonging to a predetermined neighborhood range determined by this reference circle C.

 In the illustrated example, the rectangles subject to the operation are selected on the basis of the selection criterion that "a rectangle at least part of which is included inside the reference circle C" is to be processed. As a result, only the seven hatched rectangles F2 and F5 to F10 in the figure are selected, and the operation of the calculation function Zk(X, Y) is performed only for these seven rectangles. Of course, another selection criterion may be used. For example, if the selection criterion that "a rectangle entirely included inside the reference circle C" is to be processed is adopted, only the four rectangles F5, F6, F9, and F10 are selected. Alternatively, if the selection criterion that "a rectangle whose center of gravity G is located inside or on the circumference of the reference circle C" is to be processed is adopted, only the six rectangles F5 to F10 are selected.
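The three selection criteria described above can be expressed as simple predicates. This is a sketch assuming each rectangle is given by its coordinate values (Li, Ri, Bi, Ti) as in FIG. 45 and the reference circle C by its center (cx, cy) and radius r; the function names are illustrative, not from the specification.

```python
def partly_inside(rect, cx, cy, r):
    """Criterion 1: at least part of the rectangle lies inside circle C."""
    L, R, B, T = rect
    # Distance from the circle center to the rectangle (zero if the
    # center lies inside the rectangle).
    dx = max(L - cx, 0.0, cx - R)
    dy = max(B - cy, 0.0, cy - T)
    return dx * dx + dy * dy < r * r

def fully_inside(rect, cx, cy, r):
    """Criterion 2: the whole rectangle lies inside circle C
    (all four corners within radius r)."""
    L, R, B, T = rect
    return all((x - cx) ** 2 + (y - cy) ** 2 <= r * r
               for x in (L, R) for y in (B, T))

def centroid_inside(rect, cx, cy, r):
    """Criterion 3: the center of gravity G lies inside or on circle C."""
    L, R, B, T = rect
    gx, gy = (L + R) / 2.0, (B + T) / 2.0
    return (gx - cx) ** 2 + (gy - cy) ** 2 <= r * r
```

Whichever predicate is chosen, the feature amount operation then simply skips every rectangle for which it returns False.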

 Of course, when the rectangles subject to the operation are selected unambiguously by such a selection criterion, some rectangles that do contribute to the value of the calculation function Zk(X, Y) may be excluded. Since the contribution of such excluded rectangles is not large, however, no significant problem arises even if they are removed from the operation. By excluding rectangles whose contribution to the value of the calculation function Zk(X, Y) is small in this way, the feature amount calculation by the feature amount calculation unit 222 can be made more efficient and the computational load can be reduced.

 The radius r of the reference circle C used to determine whether each rectangle is subject to the operation is preferably set according to the width (the extent of the skirt) of the mountain-shaped graphs of the horizontal direction function fhi and the vertical direction function fvi included in the calculation function Zk(X, Y). As described above, the width of the mountain-shaped graph is determined by the spread coefficient σk, so the radius r of the reference circle C is preferably set larger as the spread coefficient σk becomes larger. In practice, it is preferable to set the radius r in the range 5σk < r < 10σk, that is, to a value of about 5 to 10 times the spread coefficient σk.

 <5.4 Comparison of Processing Time Required for Feature Amount Extraction>
 Finally, verification results comparing the processing time required for feature amount extraction for several original figure patterns 10 are presented for the basic embodiment described in §1 to §4 and the additional embodiment described in §5. For the basic embodiment, the verification results obtained using the area density map M1 shown in FIG. 10 are given; for the additional embodiment, the verification results obtained using the load-reducing calculation method described in §5.3 (3) are given.

 First, FIG. 48 is a diagram showing verification results obtained by performing feature amount extraction processing for a predetermined number of evaluation points E using an original figure pattern 10 consisting of a Line & Space pattern. FIG. 48(a) is a plan view showing the figure configuration of the original figure pattern 10 actually used for verification. This original figure pattern 10 is a pattern of many parallel lines generally called a Line & Space pattern. More specifically, vertically elongated linear rectangles, each 100 nm wide and 65 μm long, are arranged in the horizontal direction at intervals of 100 nm. The pattern as a whole is formed in a 65 μm square region.

 FIG. 48(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when feature amount extraction processing for a predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 48(a). The bar labeled "density map" shows the processing time of the basic embodiment (the processing time of the feature amount extraction unit 120 shown in FIG. 1), and the bar labeled "function calculation" shows the processing time of the additional embodiment (the processing time of the feature amount extraction unit 220 shown in FIG. 32).

 Each bar is divided into a plurality of sections, each marked with a circled number. As described in the right-hand column, these circled numbers correspond to various individual processes, and each section of a bar indicates the processing time required for the corresponding individual process. For example, in the "density map" bar, the section marked with the circled number 5 indicates the time required for the image pyramid creation process, and the section marked with the circled number 6 indicates the time required for the creation of the area density map M1. Similarly, in the "function calculation" bar, the section marked with the circled number 3 indicates the time required for the computation of the calculation function Zk(X, Y) described above.

 Comparing the two bars, the overall processing time is shorter for "function calculation" than for "density map". This is because, in the basic embodiment, the creation of the area density map M1 (the process of the section marked with the circled number 6) takes a long time. Therefore, for a Line & Space pattern such as that shown in FIG. 48(a), as far as processing time is concerned, using the additional embodiment is more efficient than using the basic embodiment.

 Next, FIG. 49 is a diagram showing verification results obtained by performing feature amount extraction processing for a predetermined number of evaluation points E using an original figure pattern 10 consisting of an Array Hole pattern. FIG. 49(a) is a plan view showing the figure configuration of the original figure pattern 10 actually used for verification. This original figure pattern 10 is a pattern, generally called an Array Hole pattern, in which many squares are arranged in a matrix. More specifically, 100 nm squares are arranged vertically and horizontally at intervals of 100 nm. The pattern as a whole is formed in a 65 μm square region.

 FIG. 49(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when feature amount extraction processing for a predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 49(a). Here again, the bar labeled "density map" shows the processing time of the basic embodiment, and the bar labeled "function calculation" shows the processing time of the additional embodiment. As in the bar graph of FIG. 48(b), each section of these bars indicates an individual process.

 Comparing the two bars, the overall processing time is shorter for "density map" than for "function calculation". This is because, in the additional embodiment, the computation of the calculation function Zk(X, Y) takes a long time. In the case of the Array Hole pattern shown in FIG. 49(a), the number of rectangles created by the rectangle aggregate substitution unit 221 becomes enormous, so the computational load of the calculation function Zk(X, Y) inevitably increases. Therefore, for a pattern such as this Array Hole pattern, in which the number of rectangles created by the rectangle aggregate substitution unit 221 is large, as far as processing time is concerned, using the basic embodiment is more efficient than using the additional embodiment.

 Finally, FIG. 50 is a diagram showing verification results obtained by performing feature amount extraction processing for a predetermined number of evaluation points E using an original figure pattern 10 consisting of an ISO-Space pattern. FIG. 50(a) is a plan view showing the figure configuration of the original figure pattern 10 actually used for verification. This original figure pattern 10 is a single elongated linear pattern generally called an ISO-Space pattern. More specifically, it is an extremely simple pattern consisting of a single vertically elongated linear rectangle 100 nm wide and 65 μm long. The pattern as a whole is formed in a 65 μm square region.

 FIG. 50(b) is a graph comparing, between the basic embodiment and the additional embodiment, the processing time required when feature amount extraction processing for a predetermined number of evaluation points E is performed on the original figure pattern 10 shown in FIG. 50(a). Here again, the bar labeled "density map" shows the processing time of the basic embodiment, and the bar labeled "function calculation" shows the processing time of the additional embodiment. As in the bar graph of FIG. 48(b), each section of these bars indicates an individual process.

 Comparing the two bars, the overall processing time is much shorter for "function calculation" than for "density map". This is because, whereas the basic embodiment requires a long time for the density map creation process and the image pyramid creation process, the computation of the calculation function Zk(X, Y) in the additional embodiment is completed in a very short time. In the case of the ISO-Space pattern shown in FIG. 50(a), the number of rectangles created by the rectangle aggregate substitution unit 221 is relatively small, so the computational load of the calculation function Zk(X, Y) is correspondingly reduced. Therefore, for a pattern such as this ISO-Space pattern, in which the number of rectangles created by the rectangle aggregate substitution unit 221 is small, as far as processing time is concerned, using the additional embodiment is more efficient than using the basic embodiment.

 As described above, between the basic embodiment described in §1 to §4 and the additional embodiment described in §5, the processing time required for feature amount extraction differs depending on the characteristics of the original figure pattern 10 being handled. In practice, therefore, it is preferable to use the basic embodiment and the additional embodiment selectively according to the type of original figure pattern 10 to be handled, so that the feature amount extraction processing is performed more efficiently.
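One possible way to automate this selective use is to dispatch on the number of rectangles produced by the rectangle aggregate substitution unit 221, since the verification results above suggest that rectangle-dense patterns (such as Array Hole) favor the density-map method while sparse patterns (such as ISO-Space) favor the function-calculation method. The threshold below is a hypothetical tuning parameter, not a value taken from the verification results.

```python
def choose_method(num_rectangles, threshold=10000):
    """Pick a feature-extraction implementation for an original pattern.

    num_rectangles: count of rectangles after decomposition by the
    rectangle aggregate substitution unit (a proxy for pattern type).
    threshold: hypothetical crossover point between the two methods.
    """
    if num_rectangles > threshold:
        return "density_map"          # basic embodiment (§1-§4)
    return "function_calculation"     # additional embodiment (§5)
```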

 The figure pattern shape estimation device according to the present invention can be widely used, in fields where fine patterning must be applied to a specific material layer, such as semiconductor device manufacturing processes, as a technique for estimating the shape of a real figure pattern formed on a real substrate by simulating a lithography process using an original figure pattern.

10: Original figure pattern
15: Corrected figure pattern
20: Real figure pattern
50: Rectangle aggregate
60: Rectangle aggregate (with dose amounts)
100: Figure pattern shape correction device
100′: Figure pattern shape estimation device
110: Evaluation point setting unit
120: Feature amount extraction unit
121: Original image creation section
122: Image pyramid creation section
123: Feature amount calculation section
130: Bias estimation unit
131: Feature amount input section
132: Estimation operation section
140: Pattern correction unit
200: Figure pattern shape correction device
200′: Figure pattern shape estimation device
220: Feature amount extraction unit
221: Rectangle aggregate substitution section
222: Feature amount calculation section
223: Calculation function providing section
A~D: Individual pixels / pixel values of individual pixels
B: Lower side of a rectangle
Bi: Lower side of the i-th rectangle / its Y coordinate value
a~d: Horizontal or vertical distances between the evaluation point E and the center points of the pixels
b, b(1,1)~b(i+1,M(i+1)), b(N+1): Neural network parameters
C: Reference circle
C1, C2: Auxiliary circles
D1~Dn: Difference images
Di: Dose amount for the i-th rectangle
dX: Width of a rectangle in the X-axis direction
dY: Width of a rectangle in the Y-axis direction
E, E1~E23: Evaluation points
E(X,Y): Evaluation point on the XY two-dimensional orthogonal coordinate system
e1~e22: Points on graphs
F1~F12: Figures (rectangles) constituting the original figure pattern 10
F1d~F5d: Rectangles with dose amounts
Fh1~Fh3: Horizontal rectangles
Fv1~Fv3: Vertical rectangles
Fi: i-th rectangle
f(ξ): Function used in neural network operations
fhi: Horizontal direction function
fhi(σk): Horizontal direction function using the k-th spread coefficient σk for the i-th rectangle
fvi: Vertical direction function
fvi(σk): Vertical direction function using the k-th spread coefficient σk for the i-th rectangle
G: Apportioned value / center of gravity
GF33, GF55: Gaussian filters
H: Apportioned value
h(1,1)~h(N,M(N)): Neurons of the hidden layers of the neural network / their operation values
i: Parameter indicating the stage number of a hidden layer of the neural network / parameter indicating a rectangle number
K: Feature amount calculation coefficient
k: Parameter indicating an image number / parameter indicating a calculation function number
L: Learning information / left side of a rectangle
Li: Left side of the i-th rectangle / its X coordinate value
LB: Lower left corner point of a rectangle
LF33, LF55: Laplacian filters
M1: Area density map
M2: Edge length density map
M3: Dose density map
M(1)~M(N): Dimensions of the hidden layers of the neural network
N: Number of stages of hidden layers of the neural network
n: Number of layers of the image pyramid / total number of feature amounts (calculation functions)
O: Origin of the XY two-dimensional orthogonal coordinate system
P1~Pn: Hierarchical images
Pk: k-th hierarchical image
PC: Corrected image
PD: Image pyramid (sub-image pyramid)
PP: Image pyramid (main image pyramid)
Q1: First preparation image (original image)
Qk: k-th preparation image
q, qh, qv: Total numbers of rectangles
R: Right side of a rectangle
Ri: Right side of the i-th rectangle / its X coordinate value
RT: Upper right corner point of a rectangle
r: Radius of the reference circle C
S: Real substrate
S1~S848: Steps of the flowcharts
T: Upper side of a rectangle
Ti: Upper side of the i-th rectangle / its Y coordinate value
U: Pixel
u: Pixel dimension
W, W(1,11)~W(N+1,1M(N)): Neural network parameters
w: Half of a minute width
X: Horizontal coordinate axis of the XY two-dimensional orthogonal coordinate system
x, x1~xn: Feature amounts
Y: Vertical coordinate axis of the XY two-dimensional orthogonal coordinate system
y, y11~y13: Process biases
Zk(X,Y): k-th calculation function
ξ: Argument of the function f
σ, σ1, σ2: Spread coefficients
σk: k-th spread coefficient

Claims (40)

 A figure pattern shape estimation device (100′) for estimating the shape of a real figure pattern (20) formed on a real substrate (S) by simulating a lithography process using an original figure pattern (10), comprising:
 an evaluation point setting unit (110) that sets an evaluation point (E) on the original figure pattern (10);
 a feature amount extraction unit (120) that extracts, for the original figure pattern (10), feature amounts (x1~xn) indicating features around the evaluation point (E); and
 a bias estimation unit (130) that estimates, based on the feature amounts (x1~xn), a process bias (y) indicating the amount of deviation between the position of the evaluation point (E) on the original figure pattern (10) and its position on the real figure pattern (20),
 wherein the evaluation point setting unit (110) sets the evaluation point (E) at a predetermined position on a contour line based on the original figure pattern (10), which includes information on contour lines indicating boundaries between the inside and the outside of figures,
 wherein the feature amount extraction unit (120) has:
 an original image creation section (121) that creates, based on the original figure pattern (10), an original image consisting of an aggregate of pixels (U) each having a predetermined pixel value;
 an image pyramid creation section (122) that performs image pyramid creation processing, including reduction processing for creating a reduced image by reducing the original image, to create an image pyramid (PP) consisting of a plurality of hierarchical images (P1~Pn) of different sizes; and
 a feature amount calculation section (123) that calculates, for each of the hierarchical images (P1~Pn) constituting the image pyramid (PP), a feature amount (x1~xn) based on the pixel value of the pixel corresponding to the position of the evaluation point (E),
 and wherein the bias estimation unit (130) has:
 a feature amount input section (131) that inputs the feature amounts (x1~xn) calculated for the evaluation point (E); and
 an estimation operation section (132) that obtains, based on learning information (L) obtained in a learning stage carried out in advance, an estimated value (y) corresponding to the feature amounts (x1~xn), and outputs the obtained estimated value as an estimated value of the process bias for the evaluation point (E).
 The figure pattern shape estimation device (100′) according to claim 1, wherein
 the original image creation section (121) superimposes the original figure pattern (10) on a mesh consisting of a two-dimensional array of pixels (U), and determines the pixel value of each individual pixel (U) based on the relationship between the position of the individual pixel and the positions of the contour lines of the figures (F1~F5) constituting the original figure pattern.
 The figure pattern shape estimation device (100′) according to claim 2, wherein
 the original image creation section (121) recognizes the internal region and the external region of each figure (F1~F5) based on the original figure pattern (10), and creates, as the original image, an area density map (M1) in which the occupancy of the internal region within each pixel is taken as the pixel value of that pixel.
 The figure pattern shape estimation device (100′) according to claim 2, wherein
 the original image creation section (121) recognizes the contour lines of the figures (F1~F5) based on the original figure pattern (10), and creates, as the original image, an edge length density map (M2) in which the length of the contour lines present within each pixel is taken as the pixel value of that pixel.
 The figure pattern shape estimation device (100′) according to claim 2, wherein
 the original image creation section (121) recognizes the internal region and the external region of each figure, and further recognizes the dose amount for each figure, based on the original figure pattern (10), which includes information on contour lines indicating boundaries between the inside and the outside of the figures (F1~F5) and information on the dose amount for each figure in the lithography process, obtains, for each figure present within each pixel, "the product of the occupancy of the internal region and the dose amount of that figure", and creates, as the original image, a dose density map (M3) in which the sum of such products is taken as the pixel value of that pixel.
 The figure pattern shape estimation device (100′) according to any one of claims 1 to 5, wherein
 the image pyramid creation section (122) has a function of performing filter processing using a predetermined image processing filter (GF33) on the original image or a reduced image, and creates an image pyramid (PP) consisting of a plurality of hierarchical images (P1~Pn) by alternately executing this filter processing and the reduction processing.
 The figure pattern shape estimation device (100′) according to claim 6, wherein
 the image pyramid creation section (122) takes the original image created by the original image creation section (121) as a first preparation image Q1, takes the image obtained by filter processing of a k-th preparation image Qk (where k is a natural number) as a k-th hierarchical image Pk, takes the image obtained by reduction processing of the k-th hierarchical image Pk as a (k+1)-th preparation image Q(k+1), and alternately executes the filter processing and the reduction processing until an n-th hierarchical image Pn is obtained, thereby creating an image pyramid (PP) consisting of n hierarchical images (P1~Pn) including the first hierarchical image P1 to the n-th hierarchical image Pn.
The figure pattern shape estimation apparatus (100′) according to claim 6, wherein the image pyramid creation unit (122) takes the original image created by the original image creation unit (121) as a first preparation image Q1, obtains a difference image Dk between the filtered image Pk obtained by applying the filter process to the k-th preparation image Qk (where k is a natural number) and the k-th preparation image Qk, takes this difference image Dk as the k-th hierarchical image Dk, and takes the image obtained by applying the reduction process to the k-th filtered image Pk as the (k+1)-th preparation image Q(k+1), and, by executing the filter process and the reduction process alternately until the n-th hierarchical image Dn is obtained, creates an image pyramid (PD) composed of n hierarchical images (D1 to Dn) from the first hierarchical image D1 to the n-th hierarchical image Dn.
The figure pattern shape estimation apparatus (100′) according to claim 6, wherein
the image pyramid creation unit (122) takes the original image created by the original image creation unit (121) as a first preparation image Q1, takes the image obtained by applying the filter process to the k-th preparation image Qk (where k is a natural number) as the k-th main hierarchical image Pk, and takes the image obtained by applying the reduction process to the k-th main hierarchical image Pk as the (k+1)-th preparation image Q(k+1), and, by executing the filter process and the reduction process alternately until the n-th main hierarchical image Pn is obtained, creates a main image pyramid (PP) composed of n hierarchical images (P1 to Pn) from the first main hierarchical image P1 to the n-th main hierarchical image Pn,
the image pyramid creation unit (122) further obtains a difference image Dk between the k-th main hierarchical image Pk and the k-th preparation image Qk and takes this difference image Dk as the k-th sub hierarchical image Dk, thereby creating a sub image pyramid (PD) composed of n hierarchical images (D1 to Dn) from the first sub hierarchical image D1 to the n-th sub hierarchical image Dn, and
the feature amount calculation unit (123) calculates a feature amount (y) for each hierarchical image constituting the main image pyramid (PP) and the sub image pyramid (PD), based on the pixel value of the pixel corresponding to the position of the evaluation point (E).
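The main/sub construction above (main layers Pk plus difference layers Dk = Pk − Qk, in the style of a Laplacian pyramid) can be sketched as follows; the 3×3 blur and 2×2 reduction are illustrative stand-ins, not the specification's own filter:

```python
import numpy as np

def blur(img):
    # 3x3 average with edge replication (illustrative stand-in filter)
    p = np.pad(img, 1, mode="edge")
    return sum(p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def reduce_2x(img):
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_main_and_sub_pyramids(original, n):
    """Main layers Pk = blur(Qk); sub layers Dk = Pk - Qk;
    next preparation image Q(k+1) = reduce(Pk)."""
    main, sub = [], []
    q = original.astype(float)
    for _ in range(n):
        p = blur(q)
        main.append(p)
        sub.append(p - q)   # difference image Dk
        q = reduce_2x(p)
    return main, sub
```

The sub pyramid isolates the detail removed by each filtering step, so the two pyramids together give the feature calculation both smoothed context and local detail at every scale.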
The figure pattern shape estimation apparatus (100′) according to any one of claims 6 to 9, wherein the image pyramid creation unit (122) creates the image pyramid (PP) by executing the filter process as a convolution operation using a Gaussian filter or a Laplacian filter as the image processing filter.
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 10, wherein the image pyramid creation unit (122) creates a reduced image by executing, as the reduction process, an average pooling process that replaces m adjacent pixels with a single pixel whose pixel value is the average of the pixel values of those m adjacent pixels.
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 10, wherein the image pyramid creation unit (122) creates a reduced image by executing, as the reduction process, a max pooling process that replaces m adjacent pixels with a single pixel whose pixel value is the maximum of the pixel values of those m adjacent pixels.
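Both pooling variants (average pooling and max pooling) reduce to the same block-reshape pattern. A minimal sketch, under the assumption that the group of m adjacent pixels is taken as a square block of side s (so s × s = m pixels per group):

```python
import numpy as np

def pool(img, s, mode="average"):
    """Replace each s x s block of adjacent pixels with a single pixel
    holding the block average (average pooling) or the block maximum
    (max pooling). Trailing rows/columns that do not fill a block are
    dropped for simplicity."""
    h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
    blocks = img[:h, :w].reshape(h // s, s, w // s, s)
    if mode == "average":
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))
```

Average pooling preserves the overall dose/intensity level of a region, while max pooling preserves the strongest response within it; which better suits the estimation is a design choice left open by the claims.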
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 12, wherein
the original image creation unit (121) performs original image creation processes based on a plurality of mutually different algorithms to create a plurality of original images,
the image pyramid creation unit (122) performs an image pyramid creation process on each of the plurality of original images to create a plurality of image pyramids (PP), and
the feature amount calculation unit (123) calculates a feature amount (y) for each hierarchical image constituting each of the plurality of image pyramids, based on the pixel value of the pixel corresponding to the position of the evaluation point (E).
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 13, wherein
the image pyramid creation unit (122) performs, on one original image, image pyramid creation processes based on a plurality of mutually different algorithms to create a plurality of image pyramids (PP, PD), and
the feature amount calculation unit (123) calculates a feature amount for each hierarchical image constituting each of the plurality of image pyramids (PP, PD), based on the pixel value of the pixel corresponding to the position of the evaluation point (E).
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 14, wherein, when calculating the feature amount (y) for a specific evaluation point (E) on a specific hierarchical image, the feature amount calculation unit (123) extracts, from the pixels constituting the specific hierarchical image, a total of j pixels closest to the specific evaluation point as pixels of interest (A to D), and computes a weighted average of the pixel values of the extracted j pixels of interest, with weights according to the distance between the specific evaluation point and each pixel of interest.
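The distance-weighted average over the j nearest pixels of interest might look like the sketch below. Inverse-distance weighting is only one possible choice, since the claim requires no particular weight formula (bilinear interpolation over the four surrounding pixels is another common realization):

```python
import numpy as np

def interpolate_at(img, x, y, j=4, eps=1e-9):
    """Weighted average of the j pixels nearest to evaluation point
    (x, y), weights inversely proportional to distance (an assumed
    weighting, not fixed by the claim). Pixel centers are taken at
    integer coordinates."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    d = np.hypot(xs - x, ys - y).ravel()
    idx = np.argsort(d)[:j]              # j nearest pixels of interest
    w = 1.0 / (d[idx] + eps)             # closer pixels weigh more
    return float(np.sum(w * img.ravel()[idx]) / np.sum(w))
```

This lets an evaluation point that falls between pixel centers of a coarse hierarchical image still receive a smoothly varying feature value.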
The figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 15, wherein the estimation calculation unit (132) has a neural network whose input layer receives the feature amounts (x1 to xn) input by the feature amount input unit (131) and whose output layer outputs the estimated value (y) of the process bias.
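The claim only fixes the input layer (the feature amounts x1 to xn) and the output layer (the scalar bias estimate y). A minimal feed-forward sketch, in which the single hidden layer, the tanh activation, and the weight shapes are illustrative assumptions rather than anything the specification prescribes:

```python
import numpy as np

def estimate_bias(features, W1, b1, W2, b2):
    """Two-layer feed-forward network: input layer receives the
    feature amounts x1..xn, output layer yields one process-bias
    estimate y. Weights would come from the learning stage; here
    they are arbitrary placeholders."""
    h = np.tanh(W1 @ features + b1)   # hidden layer
    return float(W2 @ h + b2)         # scalar estimate y
```

In practice the parameters W1, b1, W2, b2 correspond to the learning information L obtained from the measured test-pattern dimensions.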
The figure pattern shape estimation apparatus (100′) according to claim 16, wherein the neural network included in the estimation calculation unit (132) performs the process bias estimation using, as learning information (L), parameters obtained in a learning stage that uses dimension values obtained by measuring the actual dimensions of actual figure patterns formed on actual substrates by a lithography process using a large number of test pattern figures, together with the feature amounts obtained from each test pattern figure.
The figure pattern shape estimation apparatus (100′) according to claim 16 or 17, wherein the estimation calculation unit (132) obtains, as the estimated value (y) of the process bias for an evaluation point (E) located on the contour line of a given figure, an estimated value of the displacement of the evaluation point in the direction of the normal to the contour line.
A figure pattern shape correction apparatus (100) that corrects the shape of an original figure pattern (10) using the figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 18, the apparatus comprising, in addition to the evaluation point setting unit (110), the feature amount extraction unit (120), and the bias estimation unit (130) constituting the figure pattern shape estimation apparatus (100′),
a pattern correction unit (140) that corrects the original figure pattern (10) based on the estimated value (y) of the process bias output from the bias estimation unit (130),
wherein the apparatus has a function of repeatedly correcting the figure pattern by giving the corrected figure pattern (15) obtained through the correction by the pattern correction unit (140) to the figure pattern shape estimation apparatus (100′) as a new original figure pattern.
A program that causes a computer to function as the figure pattern shape estimation apparatus (100′) according to any one of claims 1 to 18 or the figure pattern shape correction apparatus (100) according to claim 19.
A figure pattern shape estimation method for estimating the shape of an actual figure pattern (20) formed on an actual substrate (S) by simulating a lithography process using an original figure pattern (10), the method comprising:
an original figure pattern input stage (S1) in which a computer inputs an original figure pattern (10) including contour line information indicating the boundary between the inside and the outside of a figure;
an evaluation point setting stage (S2) in which the computer sets an evaluation point (E) at a predetermined position on the contour line;
a feature amount extraction stage (S3) in which the computer extracts, for the original figure pattern (10), feature amounts (x1 to xn) indicating features around the evaluation point (E); and
a process bias estimation stage (S4) in which the computer estimates, based on the feature amounts (x1 to xn), a process bias (y) indicating the amount of deviation between the position of the evaluation point (E) on the original figure pattern (10) and its position on the actual figure pattern (20),
wherein the feature amount extraction stage (S3) includes:
an original image creation stage of creating, based on the original figure pattern (10), an original image composed of a collection of pixels each having a predetermined pixel value;
an image pyramid creation stage of performing an image pyramid creation process, including a reduction process that reduces the original image to create a reduced image, to create an image pyramid (PP) composed of a plurality of hierarchical images (P1 to Pn) of mutually different sizes; and
a feature amount calculation stage of calculating the feature amounts (x1 to xn) for each hierarchical image (P1 to Pn) constituting the image pyramid (PP), based on the pixel value of the pixel corresponding to the position of the evaluation point (E),
and wherein the process bias estimation stage (S4) includes an estimation calculation stage of obtaining an estimated value (y) corresponding to the feature amounts (x1 to xn) based on learning information (L) obtained in a learning stage performed in advance, and outputting the obtained estimated value as the estimated value of the process bias for the evaluation point.
The figure pattern shape estimation method according to claim 21, wherein, in the image pyramid creation stage, an image pyramid (PP) composed of a plurality of hierarchical images (P1 to Pn) is created by alternately executing a filter process stage that applies a filter process using a predetermined image processing filter to the original image or to a reduced image, and a reduction process stage that applies a reduction process to the filtered image.
The figure pattern shape estimation method according to claim 22, wherein, in the image pyramid creation stage, an image pyramid (PD) is created whose hierarchical images are the filtered images, or the difference images (D1 to Dn) between the filtered images and the images before filtering.
A figure pattern shape estimation apparatus (200′) that estimates the shape of an actual figure pattern (20) formed on an actual substrate (S) by simulating a lithography process using an original figure pattern (10), the apparatus comprising:
an evaluation point setting unit (110) that sets an evaluation point (E) on the original figure pattern (10);
a feature amount extraction unit (220) that extracts, for the original figure pattern (10), feature amounts (x1 to xn) indicating features around the evaluation point (E); and
a bias estimation unit (130) that estimates, based on the feature amounts (x1 to xn), a process bias (y) indicating the amount of deviation between the position of the evaluation point (E) on the original figure pattern (10) and its position on the actual figure pattern (20),
wherein the evaluation point setting unit (110) sets the evaluation point (E) at a predetermined position on the contour line based on an original figure pattern (10) that includes contour line information indicating the boundary between the inside and the outside of a figure,
the feature amount extraction unit (220) includes:
a rectangular aggregate replacement unit (221) that replaces the figures included in the original figure pattern (10) with an aggregate of rectangles (50);
a calculation function providing unit (223) that provides calculation functions (Zk(X,Y)) for calculating the feature amounts (x1 to xn) of one evaluation point (E) based on its positional relationship to the rectangles located around it; and
a feature amount calculation unit (222) that calculates the feature amounts of each evaluation point (E) set by the evaluation point setting unit (110), using the calculation functions (Zk(X,Y)) provided by the calculation function providing unit (223),
and the bias estimation unit (130) includes:
a feature amount input unit (131) that inputs the feature amounts (x1 to xn) calculated for the evaluation point (E); and
an estimation calculation unit (132) that obtains an estimated value (y) corresponding to the feature amounts (x1 to xn) based on learning information (L) obtained in a learning stage performed in advance, and outputs the obtained estimated value as the estimated value of the process bias for the evaluation point.
The figure pattern shape estimation apparatus (200′) according to claim 24, wherein
the calculation function providing unit (223) provides n calculation functions for calculating n feature amounts (x1 to xn) with mutually different ranges of consideration, from a feature amount (x1) that considers a narrow range near the evaluation point (E) to a feature amount (xn) that considers a wide range extending far from the evaluation point, and
the feature amount calculation unit (222) calculates the n feature amounts (x1 to xn) for each evaluation point (E) using these n calculation functions.
The figure pattern shape estimation apparatus (200′) according to claim 25, wherein the calculation function providing unit (223) provides calculation functions that calculate the feature amounts (x1 to xn) of a given evaluation point (E) based on its positional relationship to the four sides of each rectangle located around it.
The figure pattern shape estimation apparatus (200′) according to claim 26, wherein
the rectangular aggregate replacement unit (221) replaces the figures included in the original figure pattern (10) with an aggregate (50) of rectangles each having an upper side and a lower side parallel to the X-axis and a left side and a right side parallel to the Y-axis, in an XY two-dimensional orthogonal coordinate system whose positive X-axis points rightward and whose positive Y-axis points upward, and
the calculation function providing unit (223) provides calculation functions that calculate the feature amounts (x1 to xn) based on, in the XY two-dimensional orthogonal coordinate system, a left-side position deviation and a right-side position deviation indicating the distance of the evaluation point (E) from the left side and from the right side in the X-axis direction, and an upper-side position deviation and a lower-side position deviation indicating the distance of the evaluation point from the upper side and from the lower side in the Y-axis direction.
The figure pattern shape estimation apparatus (200′) according to claim 27, wherein the calculation function providing unit (223)
defines, for one rectangle of interest:
a horizontal function that is the sum of an X-axis monotonically increasing function whose value increases monotonically with its variable and becomes 0 when the X coordinate of the left side of the rectangle of interest is given as the variable, and an X-axis monotonically decreasing function whose value decreases monotonically with its variable and becomes 0 when the X coordinate of the right side of the rectangle of interest is given as the variable; and
a vertical function that is the sum of a Y-axis monotonically increasing function whose value increases monotonically with its variable and becomes 0 when the Y coordinate of the lower side of the rectangle of interest is given as the variable, and a Y-axis monotonically decreasing function whose value decreases monotonically with its variable and becomes 0 when the Y coordinate of the upper side of the rectangle of interest is given as the variable,
calculates a quantity indicating the positional relationship of an evaluation point of interest to the rectangle of interest based on the product of the value of the horizontal function with the X coordinate of the evaluation point of interest as the variable and the value of the vertical function with the Y coordinate of the evaluation point of interest as the variable, and
provides a calculation function that takes, as the feature amount of the evaluation point of interest, the sum of the quantities indicating its positional relationship to each of the rectangles located around it.
The figure pattern shape estimation apparatus (200′) according to claim 28, wherein the calculation function providing unit provides, as the calculation functions for calculating the n feature amounts with different ranges of consideration, n calculation functions using functions whose degrees of monotonic increase or monotonic decrease differ from one another.
The figure pattern shape estimation apparatus (200′) according to claim 29, wherein the calculation function providing unit (223) prepares a calculation function including a monotonically increasing function or a monotonically decreasing function whose variable is the left-side position deviation, the right-side position deviation, the upper-side position deviation, or the lower-side position deviation divided by a spread coefficient σ, and provides n calculation functions by changing the value of the spread coefficient σ in n ways.
The figure pattern shape estimation apparatus (200′) according to claim 30, wherein the calculation function providing unit (223) sets σk = 2^(k−1), where σk denotes the k-th spread coefficient for a parameter k in the range 1 ≤ k ≤ n.
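The monotone horizontal/vertical functions and spread coefficients described in the preceding claims can be sketched as follows. Here tanh is used as one convenient monotone function (the claims do not fix the choice), and each rectangle is encoded as an assumed tuple (x_left, x_right, y_bottom, y_top):

```python
import numpy as np

def calc_feature(eval_pt, rects, sigma):
    """Feature of an evaluation point (X, Y) with respect to a set of
    axis-aligned rectangles. Horizontal function: a monotone increasing
    term that is 0 at the left side plus a monotone decreasing term
    that is 0 at the right side; vertical function likewise for the
    bottom and top sides. Position deviations are divided by the
    spread coefficient sigma, so larger sigma widens the range of
    pattern context that influences the feature."""
    X, Y = eval_pt
    total = 0.0
    for xl, xr, yb, yt in rects:
        horiz = np.tanh((X - xl) / sigma) - np.tanh((X - xr) / sigma)
        vert = np.tanh((Y - yb) / sigma) - np.tanh((Y - yt) / sigma)
        total += horiz * vert   # product, summed over nearby rectangles
    return total

# The k-th spread coefficient sigma_k = 2**(k - 1) gives consideration
# ranges 1, 2, 4, 8, ... for k = 1, 2, 3, 4, ...
sigmas = [2 ** (k - 1) for k in range(1, 6)]
```

Evaluating calc_feature once per σk yields the n feature amounts x1 to xn with progressively wider ranges of consideration.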
The figure pattern shape estimation apparatus (200′) according to any one of claims 24 to 31, wherein
the rectangular aggregate replacement unit (221) recognizes the internal and external regions of each figure and the dose amount for each figure, based on an original figure pattern (10) that includes contour line information indicating the boundary between the inside and the outside of each figure and dose amount information for each figure in the lithography process, and sets the dose amount on the rectangle corresponding to each figure, and
the calculation function providing unit (223) provides calculation functions that include the dose amount set for each rectangle as a variable.
The figure pattern shape estimation apparatus (200′) according to any one of claims 24 to 31, wherein the rectangular aggregate replacement unit (221) recognizes the unit line segments constituting the contour line of each figure based on the original figure pattern (10), and sets a minute width for each unit line segment, thereby replacing the figures included in the original figure pattern with an aggregate of rectangles having the minute width.
The figure pattern shape estimation apparatus (200′) according to any one of claims 24 to 33, wherein, when calculating the feature amounts (x1 to xn) for an evaluation point (E), the feature amount calculation unit (222) defines a reference circle (C) of predetermined radius centered on the evaluation point (E), and performs a calculation that considers only the positional relationships to the rectangles belonging to the predetermined neighborhood range defined by the reference circle.
The figure pattern shape estimation apparatus (200′) according to any one of claims 24 to 34, wherein the estimation calculation unit (132) has a neural network whose input layer receives the feature amounts (x1 to xn) input by the feature amount input unit (131) and whose output layer outputs the estimated value (y) of the process bias.
The figure pattern shape estimation apparatus (200′) according to claim 35, wherein the neural network included in the estimation calculation unit (132) performs the process bias estimation using, as learning information (L), parameters obtained in a learning stage that uses dimension values obtained by measuring the actual dimensions of actual figure patterns formed on actual substrates by a lithography process using a large number of test pattern figures, together with the feature amounts (x1 to xn) obtained from each test pattern figure.
The figure pattern shape estimation apparatus (200′) according to claim 35 or 36, wherein the estimation calculation unit (132) obtains, as the estimated value (y) of the process bias for an evaluation point (E) located on the contour line of a given figure, an estimated value of the displacement of the evaluation point in the direction of the normal to the contour line.
A figure pattern shape correction apparatus (200) that corrects the shape of an original figure pattern (10) using the figure pattern shape estimation apparatus (200′) according to any one of claims 24 to 37, the apparatus comprising, in addition to the evaluation point setting unit (110), the feature amount extraction unit (220), and the bias estimation unit (130) constituting the figure pattern shape estimation apparatus (200′),
a pattern correction unit (140) that corrects the original figure pattern (10) based on the estimated value (y) of the process bias output from the bias estimation unit (130),
wherein the apparatus has a function of repeatedly correcting the figure pattern by giving the corrected figure pattern obtained through the correction by the pattern correction unit (140) to the figure pattern shape estimation apparatus (200′) as a new original figure pattern.
A program that causes a computer to function as the figure pattern shape estimation device (200′) according to any one of claims 24 to 37 or as the figure pattern shape correction device (200) according to claim 38.

A figure pattern shape estimation method for estimating the shape of an actual figure pattern (20) formed on an actual substrate (S) by simulating a lithography process that uses an original figure pattern (10), the method comprising:
an original figure pattern input step in which a computer reads an original figure pattern (10) containing contour-line information that marks the boundary between the inside and the outside of each figure;
an evaluation point setting step in which the computer sets an evaluation point (E) at a predetermined position on the contour line;
a feature quantity extraction step in which the computer extracts, from the original figure pattern (10), feature quantities (x1 to xn) describing the surroundings of the evaluation point (E); and
a process bias estimation step in which the computer estimates, from the feature quantities (x1 to xn), a process bias (y) indicating the displacement between the position of the evaluation point (E) on the original figure pattern (10) and its position on the actual figure pattern (20),
wherein the feature quantity extraction step includes a rectangle aggregate replacement step of replacing each figure contained in the original figure pattern (10) with an aggregate of rectangles (50), and a feature quantity calculation step of calculating, for each evaluation point (E), the feature quantities (x1 to xn) from its positional relationship to the surrounding rectangles,
and the process bias estimation step includes an estimation calculation step of deriving, on the basis of learning information (L) obtained in a learning step performed in advance, an estimated value (y) corresponding to the feature quantities (x1 to xn), and outputting it as the estimated process bias for the evaluation point.
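One concrete way to realize the rectangle aggregate and feature calculation steps of the method claim is to compute, for each evaluation point, the pattern-covered area fraction inside windows of growing size, using only rectangle overlaps. The window radii and the coverage-fraction definition below are illustrative assumptions, not the patent's feature definition:

```python
def overlap_area(rect, window):
    """Overlap area of two axis-aligned rectangles given as (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = rect
    bx0, by0, bx1, by1 = window
    w = min(ax1, bx1) - max(ax0, bx0)
    h = min(ay1, by1) - max(ay0, by0)
    return max(w, 0.0) * max(h, 0.0)

def features(rects, e, radii=(1.0, 2.0, 4.0)):
    """Feature quantities x1..xn for evaluation point e: the covered area
    fraction in square windows of half-width r centered on e."""
    ex, ey = e
    feats = []
    for r in radii:
        window = (ex - r, ey - r, ex + r, ey + r)
        covered = sum(overlap_area(rect, window) for rect in rects)
        feats.append(covered / ((2 * r) ** 2))   # fraction in [0, 1]
    return feats

# Evaluation point on the left edge of a single tall rectangle: every window
# is exactly half inside the figure, so each coverage fraction is 0.5.
x = features([(0.0, -4.0, 4.0, 4.0)], (0.0, 0.0))
```

Because the figures have already been replaced by axis-aligned rectangles, each feature reduces to cheap min/max arithmetic, which is the practical benefit of the rectangle aggregate replacement step.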
PCT/JP2018/022100 2017-06-16 2018-06-08 Device for estimating shape of figure pattern Ceased WO2018230476A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017-118921 2017-06-16
JP2017118921 2017-06-16
JP2018061898A JP6508496B2 (en) 2017-06-16 2018-03-28 Shape estimation device for figure pattern
JP2018-061898 2018-03-28

Publications (1)

Publication Number Publication Date
WO2018230476A1 true WO2018230476A1 (en) 2018-12-20

Family

ID=64659118

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/022100 Ceased WO2018230476A1 (en) 2017-06-16 2018-06-08 Device for estimating shape of figure pattern

Country Status (1)

Country Link
WO (1) WO2018230476A1 (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06104163A (en) * 1992-09-18 1994-04-15 Hitachi Ltd Focus correction method for electron beam writer
JPH08195339A (en) * 1995-01-18 1996-07-30 Hitachi Ltd Electron beam drawing method
JPH10144684A (en) * 1996-11-11 1998-05-29 Nec Corp Charged particle beam writing method and apparatus therefor
JPH118187A (en) * 1997-06-18 1999-01-12 Sony Corp Verification method of electron beam exposure data, electron beam exposure data creation device, and mask creation device
JP2003151885A (en) * 2001-11-15 2003-05-23 Hitachi Ltd Pattern forming method and semiconductor device manufacturing method
JP2004294977A (en) * 2003-03-28 2004-10-21 Nikon Corp Pattern production method and pattern production system, mask production method and mask production system, mask, exposure method and exposure apparatus, and device production method
JP2006032480A (en) * 2004-07-13 2006-02-02 Fujitsu Ltd Charged particle beam exposure method
JP2006171113A (en) * 2004-12-13 2006-06-29 Toshiba Corp Mask data creation apparatus, mask data creation method, exposure mask, semiconductor device manufacturing method, and mask data creation program
JP2010066460A (en) * 2008-09-10 2010-03-25 Toshiba Corp Method for correcting pattern and program for correcting pattern
JP2010199159A (en) * 2009-02-23 2010-09-09 Toshiba Corp Method of manufacturing semiconductor device, and program for forming exposure parameter
JP2011065111A (en) * 2009-09-21 2011-03-31 Toshiba Corp Method for designing photomask
JP2013057848A (en) * 2011-09-09 2013-03-28 Fujitsu Semiconductor Ltd Mask pattern correcting device, mask pattern correcting method, and mask pattern correcting program
JP2013182962A (en) * 2012-02-29 2013-09-12 Toshiba Corp Method of manufacturing template
JP2013251484A (en) * 2012-06-04 2013-12-12 Jeol Ltd Electric charge particle beam drawing device and electric charge particle beam drawing method


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634053A (en) * 2018-12-21 2019-04-16 苏州苏纳光电有限公司 Reticle and preparation method thereof based on graph compensation
CN112651896A (en) * 2020-12-30 2021-04-13 成都星时代宇航科技有限公司 Valid vector range determining method and device, electronic equipment and readable storage medium
CN117111399A (en) * 2023-10-25 2023-11-24 合肥晶合集成电路股份有限公司 Optical proximity correction method, system, computer equipment and medium
CN117111399B (en) * 2023-10-25 2024-02-20 合肥晶合集成电路股份有限公司 Optical proximity correction method, system, computer equipment and medium
CN117745955A (en) * 2024-02-20 2024-03-22 北京飞渡科技股份有限公司 Method and device for generating urban building scene based on building base vector data
CN117745955B (en) * 2024-02-20 2024-05-07 北京飞渡科技股份有限公司 Method and device for generating urban building scenes based on building base vector data

Similar Documents

Publication Publication Date Title
JP7120127B2 (en) Figure pattern shape estimation device
JP6497494B1 (en) Shape correction apparatus and shape correction method for figure pattern
Hosseiny A deep learning model for predicting river flood depth and extent
TWI732472B (en) Method for fabricating semiconductor device
WO2018230476A1 (en) Device for estimating shape of figure pattern
Jia et al. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis
CN110426914B (en) Correction method of sub-resolution auxiliary graph and electronic equipment
US12412017B2 (en) Methods for modeling of a design in reticle enhancement technology
EP2113109B1 (en) Simulation site placement for lithographic process models
JP6508504B1 (en) Shape correction apparatus and shape correction method for figure pattern
JP2019003170A5 (en)
CN103376644A (en) Mask pattern correction method
CN116107155A (en) Apparatus and method for generating a photomask
JP6337511B2 (en) Patterning method using multi-beam electron beam lithography system
US20240045321A1 (en) Optical proximity correction method using neural jacobian matrix and method of manufacturing mask by using the optical proximity correction method
JP2004077837A (en) How to correct design patterns
US7328424B2 (en) Method for determining a matrix of transmission cross coefficients in an optical proximity correction of mask layouts
CN115933328A (en) Photoetching model calibration method and system based on convex optimization
CN111538213B (en) A Neural Network-based Correction Method for Electron Beam Proximity Effect
US20200064732A1 (en) Hessian-free calculation of product of hessian matrix and vector for lithography optimization
Zhu et al. Machine learning-enhanced model-based optical proximity correction by using convolutional neural network-based variable threshold method
US20190219933A1 (en) System and method for analyzing printed masks for lithography based on representative contours
CN116266407A (en) Image-based method for semiconductor device patterning using deep neural networks
CN119376180B (en) A mask layout method for manufacturing semiconductor devices, and a mask.
CN120370633A (en) Optical proximity correction method based on U-Net neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18816589

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18816589

Country of ref document: EP

Kind code of ref document: A1