WO2023112302A1 - Teacher data creation support device and teacher data creation support method - Google Patents
Teacher data creation support device and teacher data creation support method
- Publication number
- WO2023112302A1 (PCT/JP2021/046717)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- data creation
- feature amount
- result
- creation support
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30121—CRT, LCD or plasma display
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Definitions
- The present invention relates to a teacher data creation support device and method for assisting the creation of teacher data in machine learning, and is a technology particularly applicable and effective for inspection and measurement devices that perform automatic inspection or measurement using an image recognition model constructed by machine learning.
- Patent Literature 1 discloses a “teaching data creation support device for assisting creation of teaching data used for learning of a classifier for classifying data”.
- In Patent Literature 1, teacher data in which one of multiple categories is taught is reduced in dimension by principal component analysis and mapped to a low-dimensional area.
- The principal component axes and area ranges are set appropriately, and the resulting discretized distribution image is used to suitably support understanding of the distribution state of the teacher data.
- the accuracy of the image recognition model depends on the training images used for learning during the development stage.
- With the technique of Patent Document 1, humans can visually recognize the distribution of the teacher data because the multidimensional feature values of the teacher data are expressed in a low-dimensional space. Therefore, by applying this technique to inspection images and mapping them into a low-dimensional space, it becomes possible to efficiently collect and select images from the distribution.
- However, since the input of Patent Document 1 is the entire image, the technique can be applied only when one defect appears in the image; it cannot be applied when multiple defects appear. In that case, it is necessary to specify the feature amount corresponding to each defect and map it into a low-dimensional space, but because the technique maps the feature amount of the entire image, the feature amount of each defect cannot be accurately reflected. Moreover, when specifying the feature amount corresponding to each defect, it is important to include the feature amount of the peripheral area in addition to the area where the defect exists. This is particularly important when creating training data for an image recognition model that detects a plurality of defects in an image individually (details will be described in the Examples).
- An object of the present invention is to provide a teacher data creation support device, and a teacher data creation support method using the same, that specify the feature amount corresponding to each defect in an image in which a plurality of defects appear, taking the surrounding area into consideration, and map it to a low-dimensional space, thereby enabling efficient collection and selection of learning images.
- To achieve this, the present invention provides: an image recognition unit that extracts a feature amount from an input image based on a learning result, performs image processing on the feature amount, and outputs a recognition result; a feature quantity specifying unit that receives one or more prediction results or specified regions from the image recognition unit and specifies the corresponding feature quantity for each prediction result or specified region; an inspection result feature amount database that stores the specified feature amounts; and a dimension reduction unit that performs dimensionality reduction on the feature amounts stored in the inspection result feature amount database and projects them into a low-dimensional space.
- The feature quantity specifying unit comprises: an important area calculation unit that obtains, for each prediction result or specified area, an important area holding area information around the detection area of that prediction result or specified area; and a feature quantity extraction unit that extracts the feature quantity corresponding to each prediction result or specified region by weighting the feature quantity with the important area.
- The present invention also provides a teaching data creation support method for assisting creation of teaching data in machine learning, comprising: (a) a step of individually detecting the types and positions of a plurality of defects appearing in an inspection image; (b) a step of specifying a feature quantity for each defect from the result detected in step (a) and storing it in a database; and (c) a step of performing dimensionality reduction on the stored feature quantities, projecting them into a low-dimensional space, and displaying the result on a display.
- According to the present invention, the feature quantity corresponding to each defect is specified in consideration of the peripheral area and mapped onto a low-dimensional space, thereby realizing a teacher data creation support device, and a teacher data creation support method using the same, that enable efficient collection and selection of learning images.
- FIG. 1 is a diagram showing a schematic configuration of a teaching data creation support device according to the present invention.
- FIG. 2 is a block diagram showing the configuration of a teacher data creation support device according to Example 1 of the present invention.
- FIG. 3 is a flow chart showing processing by the training data creation support device of FIG. 2.
- FIG. 4 is a flow chart showing processing by the feature quantity specifying unit 6 of FIG. 2.
- FIG. 5 is a diagram showing an example of the storage form of the inspection result feature amount DB 17 of FIG. 2.
- FIG. 6 is a block diagram showing a system configuration related to storage processing of the learning result feature amount DB 18 of FIG. 2.
- FIG. 7 is a flowchart showing processing by the system configuration of FIG. 6.
- FIG. 8 is a diagram showing an example of a dimension reduction result by the dimension reduction unit 8 of FIG. 2.
- FIG. 9 is a diagram showing a display example of the display unit 20 of FIG. 2.
- FIG. 10 is a diagram showing the effect of the important region calculation unit 15 of FIG. 2.
- FIG. 11 is a diagram showing the effect of the important region calculation unit 15 of FIG. 2.
- FIG. 12 is a block diagram showing the configuration of a teacher data creation support device according to Example 2 of the present invention.
- FIG. 13 is a flow chart showing processing by the training data creation support device of FIG. 12.
- FIG. 14 is a diagram showing an example of a storage form of the imaging result DB 29 of FIG. 12.
- FIG. 15 is a diagram showing a display example of a display unit 34 of FIG. 12.
- FIG. 1 is a diagram showing a schematic configuration of a teaching data creation support device according to the present invention.
- The feature amount specifying unit 6 specifies the feature amount for each defect from the detection result 3 of the image recognition unit 2 for the inspection image 1, and stores it in the feature amount DB 7. The dimension reduction unit 8 then performs dimension reduction on the feature amounts stored in the feature amount DB 7 and projects them onto a low-dimensional space, and the display unit 9 displays the result.
- the image recognition unit 2 individually detects the types and positions of defects appearing in the inspection image 1 as detection results 4 and 5, respectively.
- a feature quantity specifying unit 6 receives the results detected by the image recognition unit 2 and specifies a feature quantity for each detection result. Since the dimension reduction unit 8 dimensionally reduces the feature amount for each detection result and the display unit 9 displays the results, the display unit 9 displays data points after the dimension reduction corresponding to each defect. For example, if there are two defects in the image and the image recognition unit 2 detects them separately, the display unit 9 displays two data points corresponding to these defects separately.
- FIG. 2 is a block diagram showing the configuration of the teaching data creation support device according to the first embodiment of the present invention.
- the inspection device 11 captures an inspection image 12 for the sample 10 .
- the sample 10 is, for example, a semiconductor wafer, and the inspection device 11 corresponds to, for example, a defect inspection device using a mirror electron microscope that forms an image of mirror electrons, an optical defect inspection device, or the like.
- the image recognition unit 2 performs defect inspection on the acquired inspection image 12 .
- The image recognition unit 2 extracts feature amounts from the inspection image 12 and detects defects appearing in the inspection image 12 from the extracted feature amounts. When a plurality of defects appear in the inspection image 12, the image recognition unit 2 detects them individually. The image recognition unit 2 is therefore a model that can predict the type and position of each defect.
- As such a model, for example, an SSD (Single Shot MultiBox Detector), which is a CNN (Convolutional Neural Network) based detector, or RetinaNet can be used.
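For illustration, the per-defect detection output that the downstream units consume might look like the following minimal sketch. Field names are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One per-defect prediction from the image recognition unit.

    Field names are illustrative assumptions, not from the patent.
    """
    defect_class: str  # predicted defect type (class c)
    box: tuple         # predicted position box_pre as (x0, y0, x1, y1)
    score: float       # certainty of the prediction

# An inspection image with two defects yields two separate detections,
# and therefore two separate data points after dimensionality reduction.
detections = [
    Detection("defect A", (10, 12, 30, 40), 0.92),
    Detection("defect B", (55, 60, 70, 75), 0.71),
]
print(len(detections))  # 2: one entry per defect
```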
- the feature quantity specifying unit 6 is composed of an important region calculation unit 15 and a feature quantity extraction unit 16 . The details of the processing contents of each component will be described later.
- the important area calculation unit 15 receives the detection result 13 and obtains an important area for the detection result. This important area holds peripheral area information including the detection area, and indicates an area considered important when the image recognition unit 2 detects a defect.
- The feature amount extraction unit 16 weights the feature amount extracted by the image recognition unit 2 using the important area calculated by the important area calculation unit 15, and outputs the extracted feature amount, which is the factor behind the detection result, as the feature value corresponding to that detection result.
- the feature amount corresponding to the detection result specified by the feature amount specifying unit 6 is stored in the inspection result feature amount DB 17 .
- the learning result feature amount DB 18 stores the corresponding feature amount for each detection result obtained by the feature amount specifying unit 6 for the detection result of the image recognition unit 2 for the learning image used for the learning of the image recognition unit 2 .
- the details of the saving process to the learning result feature value DB 18 will be described later.
- the dimension reduction unit 8 performs dimension reduction on the results stored in the inspection result feature amount DB 17 and the learning result feature amount DB 18, and maps them into a two-dimensional or three-dimensional low-dimensional space.
- For dimensionality reduction, for example, t-SNE (t-Distributed Stochastic Neighbor Embedding) can be used.
- other dimensionality reduction algorithms such as principal component analysis and independent component analysis may be used.
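As a self-contained illustration of the dimensionality reduction step, here is a minimal PCA sketch using only numpy (t-SNE would typically come from an external library such as scikit-learn). The 512-channel feature size matches the example given elsewhere in the text; everything else is an assumption.

```python
import numpy as np

def pca_project(features: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project row-vector features to n_components dimensions via PCA (SVD)."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 512))  # e.g. 100 detections, 512 channels each
low = pca_project(feats)
print(low.shape)  # (100, 2): one 2-D point per detection
```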
- the display unit 20 displays the results of dimension reduction by the dimension reduction unit 8 and the results stored in the detection result DB 14 .
- The teacher data creation unit 21 has a function that allows the user to perform operations such as selecting data from the results displayed by the display unit 20 and labeling the selected data; the created teacher data is saved in the teacher data DB 22.
- FIG. 3 is a flowchart showing processing by the teacher data creation support device of FIG. 2.
- In step S101, the inspection device 11 captures an inspection image 12 of the sample 10.
- In step S102, the image recognition unit 2 predicts the types and positions of defects appearing in the inspection image 12 and outputs them as detection results 13. The detection results 13 are also stored in the detection result DB 14.
- In step S103, the important area calculation unit 15 obtains, for each detection result 13, an important area holding area information around and including the detection area.
- In step S104, the feature amount extraction unit 16 weights the feature amount extracted by the image recognition unit 2 using the important area obtained by the important area calculation unit 15, thereby extracting the feature amount that is the factor behind the detection result, and stores it in the inspection result feature amount DB 17 for each detection result class and each detection result.
- In step S105, the dimension reduction unit 8 performs dimension reduction on the feature values stored in the inspection result feature value DB 17 and the learning result feature value DB 18.
- In step S106, the display unit 20 displays the result of the dimension reduction unit 8.
- In step S107, the teacher data creation unit 21 stores the created teacher data in the teacher data DB 22.
- The details of the processing contents of the important region calculation unit 15 and the feature amount extraction unit 16 will be explained using FIGS. 4 and 5.
- FIG. 4 is a flow chart showing the processing of the feature quantity specifying unit 6 (the important region calculating unit 15 and the feature quantity extracting unit 16).
- In step S108, the important region calculation unit 15 calculates, by error backpropagation, the derivative of the detection result with respect to the feature map of the image recognition unit 2, and obtains S_{k,c,box_pre} representing the important region for that detection result.
- the feature quantity map holds the feature quantity extracted from the inspection image 12 by the image recognition unit 2 . This processing is shown in equation (1).
- In Equation (1), y_{c,box_pre} is the score for class c (defect type) predicted by the image recognition unit 2, and box_pre represents the predicted position.
- A_k represents a feature quantity map possessed by the image recognition unit 2, where k is the channel number.
- S_{k,c,box_pre}, obtained by Equation (1), represents the spatial importance of the prediction result (class c, position box_pre) at each pixel of the feature map with channel number k.
- Information around the detection area is also taken into consideration in the important area determined by Equation (1). Therefore, when the image recognition model attaches importance to the area around the detection area in detecting a defect, that area is also output as a large value, as will be described later.
- Alternatively, a mask that assigns 1 to the inside of the detection area and a preset area around it and 0 to the rest of the area, or a preset template area, can also be used as the important area. If a plurality of defects appear in the inspection image 12 and there are a plurality of detection results, the processing of Equation (1) is performed for each detection result, so an important area corresponding to each detection result is obtained.
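A minimal sketch of the mask-based alternative described above, assuming a rectangular detection box and a fixed pixel margin. The function name, coordinate convention, and margin value are illustrative assumptions.

```python
import numpy as np

def box_mask(shape, box, margin):
    """Binary important-area mask: 1 inside the detection box plus a preset
    margin around it, 0 elsewhere. Coordinates are feature-map pixels."""
    h, w = shape
    x0, y0, x1, y1 = box
    mask = np.zeros((h, w), dtype=np.float32)
    mask[max(0, y0 - margin):min(h, y1 + margin),
         max(0, x0 - margin):min(w, x1 + margin)] = 1.0
    return mask

# A 4x4 detection box on a 16x16 feature map, grown by a 2-pixel margin,
# yields an 8x8 region of ones.
m = box_mask((16, 16), (4, 4, 8, 8), margin=2)
print(m.sum())  # 64.0
```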
- In step S109, the feature amount extraction unit 16 weights the feature amount map held by the image recognition unit 2 by the important area obtained by the important area calculation unit 15. This processing is shown in Equation (2).
- In step S110, the feature amount extraction unit 16 averages and normalizes the weighted feature amounts G_{k,c,box_pre} for each channel, and outputs the result as the feature amount corresponding to the detection result. Since G_{k,c,box_pre} is a two-dimensional tensor, this process yields a scalar feature value for each channel; for example, if the number of channels is 512, 512 feature values are obtained.
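As a concrete illustration of steps S109 and S110, a minimal numpy sketch follows. The shapes, the random placeholder data, and the normalization choice (zero mean, unit variance across channels) are assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
K, H, W = 512, 16, 16            # channels x feature-map height x width
A = rng.normal(size=(K, H, W))   # feature maps A_k of the recognition model
S = rng.random(size=(K, H, W))   # per-pixel importance S_{k,c,box_pre}

G = S * A                        # step S109: element-wise weighting
# Step S110: average spatially per channel, then normalize, giving one
# scalar per channel -> a 512-dimensional vector for this detection result.
v = G.mean(axis=(1, 2))
v = (v - v.mean()) / (v.std() + 1e-12)
print(v.shape)  # (512,)
```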
- When weighting the feature quantity map held by the image recognition unit 2 in Equation (2), information representing the degree of importance of each channel with respect to the detection result may also be incorporated.
- In this case, the processing represented by the following Equation (3) is performed.
- In Equation (3), α_{k,c,box_pre} represents the importance, with respect to the detection result, of the feature quantity held by the feature quantity map of channel number k, and is obtained by Equation (4) below.
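Equations (1) through (4) appear as images in the original publication and did not survive extraction. Based on the surrounding description (a Grad-CAM-style computation), a plausible reconstruction is the following; Equation (3) in particular is a guess at how the channel weight enters.

```latex
% Plausible reconstruction, inferred from the surrounding text:
S_{k,c,\mathrm{box\_pre}} = \frac{\partial y_{c,\mathrm{box\_pre}}}{\partial A_k} \tag{1}

G_{k,c,\mathrm{box\_pre}} = S_{k,c,\mathrm{box\_pre}} \odot A_k \tag{2}

G_{k,c,\mathrm{box\_pre}} = \alpha_{k,c,\mathrm{box\_pre}}
    \left( S_{k,c,\mathrm{box\_pre}} \odot A_k \right) \tag{3}

\alpha_{k,c,\mathrm{box\_pre}} = \frac{1}{Z} \sum_{i} \sum_{j}
    S_{k,c,\mathrm{box\_pre}}^{(i,j)} \tag{4}
```

Here ⊙ denotes element-wise multiplication over the feature-map pixels, (i, j) indexes those pixels, and Z is the number of pixels.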
- FIG. 5 shows an example of the storage form, in the inspection result feature amount DB 17, of the feature amounts corresponding to the detection results obtained by the important region calculation unit 15 and the feature amount extraction unit 16.
- the feature values corresponding to the detection results are stored for each detection target class and for each detection result.
- In the example of FIG. 5, the classes of defects to be detected range from defect A to defect N, and the feature quantities corresponding to each detection result are stored for each class.
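A hypothetical sketch of this per-class, per-detection storage form; the keys and field names are illustrative, not from the patent.

```python
# Hypothetical storage form for the inspection result feature amount DB 17:
# feature vectors grouped by detected defect class, one entry per detection.
feature_db = {
    "defect A": [
        {"image_id": "insp_0001", "box": (10, 12, 30, 40), "feature": [0.1] * 512},
        {"image_id": "insp_0002", "box": (5, 5, 20, 22), "feature": [0.3] * 512},
    ],
    "defect B": [
        {"image_id": "insp_0001", "box": (55, 60, 70, 75), "feature": [0.2] * 512},
    ],
}

# Two defects detected in image insp_0001 are stored as two separate records.
records_for_image = [r for rows in feature_db.values() for r in rows
                     if r["image_id"] == "insp_0001"]
print(len(records_for_image))  # 2
```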
- Fig. 6 shows the system configuration required for the saving process to the learning result feature value DB 18.
- the learning image 23 is learning data used for learning by the image recognition unit 2 .
- the image recognition unit 2 detects defects in the learning image 23 and outputs detection results 24 .
- the feature quantity specifying unit 6 specifies the feature quantity corresponding to the detection result 24 and stores it in the learning result feature quantity DB 18 .
- a part of the image used for learning may be used as the learning image 23.
- FIG. 7 is a flowchart showing processing by the system configuration of FIG.
- In step S111, the image recognition unit 2 detects defects in the learning image 23 and outputs them as detection results 24.
- In step S112, the feature amount specifying unit 6 specifies the feature amounts corresponding to the detection results, and stores them in the learning result feature amount DB 18 for each detection result class and each detection result.
- The processing content of the feature quantity specifying unit 6 at this time is the same as the processing shown in the flowchart of FIG. 4. Further, the feature amount specified by the feature amount specifying unit 6 is stored in the learning result feature amount DB 18 in the same form as the storage form in the inspection result feature amount DB 17 shown in FIG. 5.
- In step S113, it is determined whether all the learning images 23 have been processed. If so (YES), the processing ends; if not (NO), the process returns to step S111 and the steps from S111 onward are executed again.
- FIG. 8 shows an example of mapping the results stored in the inspection result feature amount DB 17 and the learning result feature amount DB 18 by the dimension reduction unit 8 to a low-dimensional space.
- In FIG. 8, black circle points are data corresponding to feature amounts stored in the learning result feature amount DB 18, and black triangle points are data corresponding to feature amounts stored in the inspection result feature amount DB 17.
- Since the inspection data in the lower left of FIG. 8 lie in almost the same region as the learning data, these inspection data have features similar to the learning data.
- In contrast, the inspection data in the upper right of FIG. 8 lie in a region separate from the learning data and have characteristics different from the learning data.
- In general, image recognition models suffer performance degradation, such as false detections and missed detections, on images not represented in the training data. Therefore, by preferentially labeling inspection images such as those in the upper right of FIG. 8 to create teacher data, the user can efficiently create teacher data capable of improving the performance of the image recognition model.
- The dimension reduction unit 8 performs dimension reduction on the data corresponding to each defect class. For example, when performing dimension reduction for defect A, it operates on the data corresponding to defect A stored in the inspection result feature value DB 17 and the data corresponding to defect A stored in the learning result feature value DB 18. Alternatively, the dimension reduction unit 8 may reduce the dimensions of all data collectively instead of per defect class.
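A minimal sketch of gathering one class's feature vectors from both databases before dimension reduction, tagging each vector's origin so the display can later distinguish learning points from inspection points. All names and the toy data are illustrative.

```python
# Illustrative per-class gathering before dimension reduction.
inspection_db = {"defect A": [[0.1, 0.2], [0.4, 0.1]]}
learning_db = {"defect A": [[0.1, 0.2], [0.0, 0.3], [0.2, 0.2]]}

def gather(defect_class):
    """Collect all vectors for one class, tagged with their source DB."""
    data, sources = [], []
    for db, tag in ((learning_db, "learning"), (inspection_db, "inspection")):
        for vec in db.get(defect_class, []):
            data.append(vec)
            sources.append(tag)
    return data, sources

data, sources = gather("defect A")
print(len(data), sources.count("inspection"))  # 5 2
```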
- FIG. 9 is a diagram showing a display example of the display unit 20.
- The display unit 20 displays (1) an inspection data selection unit, (2) a defect class selection unit, (3) a dimension reduction result display unit, (4) a detection result display unit, (5) a teacher data creation unit, and the like.
- the inspection data selection unit selects inspection data
- the defect class selection unit selects a defect class to be subjected to dimension reduction.
- the dimension reduction result of the dimension reduction unit 8 is displayed in the dimension reduction result display unit.
- the data points corresponding to the feature amounts stored in the inspection result feature amount DB 17 and the data points corresponding to the feature amounts stored in the learning result feature amount DB 18 are displayed in different colors or shapes.
- The detection result of the image recognition unit 2 is displayed in the detection result display unit: for example, the predicted class, the predicted region (coordinates), and a score representing the certainty of prediction. These are displayed in association with the data points shown in (3) the dimension reduction result display unit; for example, when a data point in the dimension reduction result display unit is selected, the corresponding detection result is displayed.
- the number of data points after dimensionality reduction is the same as the number of detection results. So, for example, if multiple defects exist in the image and are detected separately, the data points corresponding to those defects are displayed separately.
- The teacher data creation unit has a function that allows the user to create teacher data, using a pen tablet or the like, from the results displayed in (3) the dimension reduction result display unit and (4) the detection result display unit. For example, the user selects a region in which a defect exists, selects the class of the defect, and so on.
- the training data creation unit may have a function of using the detection results of the image recognition unit 2 as label candidates. For example, the classes and regions predicted by the image recognition unit 2 are used as class candidates and region candidates.
- the important region calculator 15 determines the important area by considering the peripheral areas including the detection area.
- the left diagram of FIG. 10 shows an example in which a defect in a circuit is detected with a score value of 0.9, which represents the certainty of prediction.
- A circuit pattern exists around the defect. If such a circuit pattern is present in most of the training data used for learning the image recognition model, the surrounding circuit pattern is also learned, so the defect is detected with emphasis on the circuit pattern around it. In this case, an image recognition model that has memorized the circuit pattern may detect the defect with a low score on an image in which the circuit pattern has changed due to a change in the manufacturing process and no longer exists around the defect. This is shown in the right diagram of FIG. 10.
- the final output result of the image recognition model is determined by setting the score threshold. For example, if the score threshold is set to 0.6, only results with a score of 0.6 or higher are finally output. This makes it possible to exclude low-scoring detection results that are likely to be false positives.
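A minimal sketch of this score-threshold selection; the threshold value matches the 0.6 example above, while the field names are illustrative.

```python
# Final output selection by score threshold: only detections whose score is
# at or above the threshold are output, excluding likely false positives.
detections = [
    {"defect_class": "defect A", "score": 0.9},
    {"defect_class": "defect A", "score": 0.4},   # excluded by threshold
    {"defect_class": "defect B", "score": 0.65},
]

THRESHOLD = 0.6
final = [d for d in detections if d["score"] >= THRESHOLD]
print(len(final))  # 2
```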
- Defects detected with low scores, as shown in the right diagram of FIG. 10, may therefore be excluded by the score threshold. In order to detect such defects with a high score, it is necessary to preferentially collect images without circuit patterns, as in the right figure of FIG. 10, and use them for learning.
- the present invention facilitates the discovery of images that are not included in the learning data by mapping the feature amount corresponding to the detection result to the low-dimensional space.
- To map the image after the pattern change (right diagram of FIG. 10) to a region different from the image before the pattern change (left diagram of FIG. 10), it is necessary to perform dimensionality reduction on a feature amount that includes the surrounding area.
- the left diagram of FIG. 11 shows an example of the result of dimensionality reduction of the feature amount of only the defect to be detected.
- When the feature amount subjected to dimensionality reduction is only that of the defect to be detected, the images before and after the pattern change are mapped to the same region of the low-dimensional space.
- the right figure of FIG. 11 shows an example of the result of dimensionality reduction of the feature amount including the surrounding area.
- In the present invention, the important area calculation unit 15 obtains, as the important area, the defect to be detected and the area around it, as shown by the black frame in the right diagram of FIG. 10. Therefore, as shown in the right diagram of FIG. 11, the images before and after the pattern change can be mapped to different regions of the low-dimensional space.
- FIG. 12 is a block diagram showing the configuration of the teaching data creation support device of this embodiment.
- the imaging device 26 captures an inspection image of the sample 25 and saves it in the imaging result DB 29 .
- The sample 25 is a semiconductor wafer, an electronic device, or the like, and the imaging device 26 is, for example, a scanning electron microscope (SEM) that generates an image by irradiating an electron beam, or a critical dimension scanning electron microscope (CD-SEM), which is a type of length-measuring device.
- the imaging device 26 images the sample 25 according to the recipe created by the recipe creation unit 28 .
- a recipe is a program for controlling the imaging device 26, and imaging conditions such as the imaging position and the number of times of imaging are controlled by the recipe.
- the recipe creation unit 28 creates recipes according to the specified area list 27 .
- the designated area list 27 is a list of imaging positions obtained from design data describing the design information of the sample 25 and/or from data describing the imaging conditions of the imaging device 26.
- the design data is expressed, for example, in a GDS (Graphic Data System) format.
- the designated area list 27 may be imaging position information preset by the user.
- the captured image captured by the imaging device 26 is linked to the area information described in the designated area list 27 and stored in the imaging result DB 29 .
- the image recognition unit 30 outputs an output result 31 by performing image recognition processing on the captured image stored in the imaging result DB 29 .
- the image recognition unit 30 uses, for example, an image recognition model that predicts the class appearing in a specified area, an image recognition model that predicts a class for each pixel of the image, or an image recognition model that compresses the input image once and restores it to its original dimensions. All of these image recognition models are built with a CNN.
- the feature amount specifying unit 6 specifies, for the output result 31, the feature amount corresponding to each specified area described in the specified area list 27 and stores it in the feature amount DB 32. The feature amount corresponding to each specified area is stored separately, so if one inspection image contains a plurality of specified areas, their feature amounts are stored as separate entries.
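the per-area bookkeeping described above can be sketched as a simple keyed store (a hypothetical structure; the actual schema of the feature amount DB 32 is not given in the text):

```python
import numpy as np

# Feature store keyed by (image_id, area_id): one entry per specified area,
# so an image with several specified areas yields several separate records.
feature_db = {}

def store_features(image_id, area_id, feature_vec):
    feature_db[(image_id, area_id)] = np.asarray(feature_vec)

store_features("img_001", 0, [0.1, 0.9])
store_features("img_001", 1, [0.7, 0.2])  # same image, second specified area
store_features("img_002", 0, [0.4, 0.4])
```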
- the dimension reduction unit 8 performs dimension reduction on the results stored in the feature amount DB 32.
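the dimension reduction step maps each stored high-dimensional feature vector to a point in a low-dimensional space. The sketch below uses PCA via SVD purely as a stand-in (the claims mention t-SNE as one option); the interface is the same either way: (N, D) features in, (N, 2) points out.

```python
import numpy as np

def reduce_to_2d(features):
    """Project (N, D) feature vectors to (N, 2) via PCA (SVD).

    PCA stands in here for whichever embedding the dimension reduction
    unit actually uses, e.g. t-SNE.
    """
    X = features - features.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

rng = np.random.default_rng(0)
pts2d = reduce_to_2d(rng.normal(size=(50, 16)))
```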
- the clustering unit 33 clusters the result of the dimension reduction unit 8 into a plurality of groups based on the degree of similarity between the data after the dimension reduction.
- the clustering unit 33 uses, for example, the k-means algorithm.
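a minimal k-means sketch over the dimension-reduced points is shown below (illustrative only; a production system would use a library implementation with k-means++ initialization, and the deterministic initialization here is an assumption made for reproducibility):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal k-means on low-dimensional points."""
    # Deterministic initialization: k evenly spaced sample points.
    centers = points[np.linspace(0, len(points) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs of 2-D points come out as two clusters.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0.0, 0.0), scale=0.1, size=(30, 2))
blob_b = rng.normal(loc=(5.0, 5.0), scale=0.1, size=(30, 2))
labels, centers = kmeans(np.vstack([blob_a, blob_b]), k=2)
```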
- the display unit 34 displays the results of the dimension reduction unit 8 and the clustering unit 33 in association with information stored in the imaging result DB 29 .
- the display unit 34 has a function that allows the user to manually cluster the result of the dimension reduction unit 8 and a function that allows the user to manually select data.
- the small amount data identification unit 35 identifies small amounts of data from the results of the clustering unit 33 and/or the display unit 34. This is realized by processing such as counting the number of data points contained in each region obtained by the clustering process.
- the small amount of data corresponds to images of a pattern that has fewer images than the other patterns when the inspection images captured by the imaging device 26 are divided into a plurality of patterns.
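the counting step can be sketched as follows; the threshold (a fraction of the largest cluster's count) is an assumption, since the text only states that the unit counts the data contained in each clustered region:

```python
from collections import Counter

def find_minority_clusters(labels, ratio=0.5):
    """Flag clusters whose data count is well below the largest cluster's."""
    counts = Counter(labels)          # data points per cluster label
    largest = max(counts.values())
    return [c for c, n in counts.items() if n < ratio * largest]

labels = [0] * 40 + [1] * 35 + [2] * 5  # cluster 2 is under-represented
minority = find_minority_clusters(labels)
```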
- the recipe creation unit 28 has a function of updating the imaging position and the number of times of imaging based on the results identified by the small amount data identification unit 35 and reflecting them in the recipe. Specifically, the number of times of imaging is preferentially increased for images identified as small amounts of data. As a result, the number of images per pattern can be made uniform across the entire set of captured images.
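one simple way to realize this equalization is sketched below (a hypothetical helper; the patent only states that under-represented patterns have their imaging count increased preferentially):

```python
def update_shot_counts(pattern_counts, target=None):
    """Compute extra imaging shots per pattern so counts even out.

    pattern_counts: current number of images per pattern.
    target: desired count per pattern (defaults to the current maximum).
    Returns the additional shots to schedule for each pattern.
    """
    if target is None:
        target = max(pattern_counts.values())
    return {p: max(0, target - n) for p, n in pattern_counts.items()}

extra = update_shot_counts({"dense_line": 120, "no_pattern": 15, "contact_hole": 90})
```

the minority pattern ("no_pattern" here) receives the largest number of additional shots.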
- FIG. 13 is a flow chart showing processing by the training data creation support device of FIG. 12.
- in step S114, the recipe creation unit 28 creates a recipe according to the specified area list 27.
- in step S115, the imaging device 26 images the sample 25 according to the recipe, associates the captured image with the specified area described in the specified area list 27, and saves it in the imaging result DB 29.
- in step S116, the image recognition unit 30 performs image recognition processing on the captured images stored in the imaging result DB 29 and outputs the output result 31.
- in step S117, the feature amount specifying unit 6 specifies a feature amount for each specified area and stores it in the feature amount DB 32 for each specified area.
- in step S118, the dimension reduction unit 8 performs dimension reduction on the feature amounts stored in the feature amount DB 32.
- in step S119, the clustering unit 33 clusters the dimension-reduced results.
- in step S120, the display unit 34 displays the dimension reduction result and the clustering result together with the captured images stored in the imaging result DB 29.
- in step S121, the small amount data identification unit 35 identifies the small amounts of data.
- in step S122, the recipe creation unit 28 updates the recipe based on the result of the small amount data identification unit 35.
- in step S123, it is determined whether or not imaging is finished. If imaging is finished (YES), the process ends; if not (NO), the process returns to step S115 and the processing from S115 onward is executed again.
- FIG. 14 shows an example of storage in the imaging result DB 29. As shown in FIG. 14, the captured image and the specified area described in the specified area list 27 are linked and saved.
- FIG. 15 is a display example of the display unit 34.
- the display unit 34 displays (1) an imaging data selection section, (2) a dimension reduction/clustering result display section, (3) a captured image display section, (4) a manual clustering section, and (5) a small amount data specification section.
- the imaging data to be viewed is selected in the (1) imaging data selection section.
- the captured images stored in the imaging result DB 29 are displayed in the (3) captured image display section, in association with the (2) dimension reduction/clustering result display section. For example, when a data point displayed in the (2) dimension reduction/clustering result display section is selected, the image and specified region corresponding to that data point are displayed.
- the (4) manual clustering section has a function that enables the user to manually cluster the data displayed in the (2) dimension reduction/clustering result display section. For example, clustering is performed by selecting an area using a pen tablet or the like.
- the (5) small amount data specification section has a function that allows the user to manually specify small amounts of data. For example, data is designated by selecting it in the (2) dimension reduction/clustering result display section using a pen tablet or the like.
- the present invention is not limited to the above-described embodiments, and includes various modifications.
- the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the described configurations.
- it is possible to replace part of the configuration of one embodiment with the configuration of another embodiment, and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Quality & Reliability (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
…is characterized by comprising the above.
Claims (17)
- 1. A teaching data creation support device comprising: an image recognition unit that, based on a learning result, extracts a feature amount from an input image, performs image processing using the feature amount, and outputs a recognition result; a feature amount specifying unit that receives one or more prediction results of the image recognition unit or specified areas as input and specifies the corresponding feature amount for each prediction result or each specified area; an inspection result feature amount database in which the feature amount for each prediction result or each specified area is stored; and a dimension reduction unit that performs dimension reduction on the feature amounts stored in the inspection result feature amount database and projects them onto a low-dimensional space, wherein the feature amount specifying unit includes: an important area calculation unit that obtains, for each prediction result or each specified area, an important area holding area information of the detection area of the prediction result or of the specified area and its surroundings; and a feature amount extraction unit that extracts the corresponding feature amount for each prediction result or each specified area by weighting the feature amount extracted by the image recognition unit with the important area.
- 2. The teaching data creation support device according to claim 1, wherein the important area calculation unit obtains the important area from error backpropagation and the feature amount for each prediction result or each specified area.
- 3. The teaching data creation support device according to claim 1, further comprising a learning result feature amount database in which the feature amounts obtained by the feature amount specifying unit, for one or more prediction results of the image recognition unit or specified areas of the images used for learning of the image recognition unit, are stored for each prediction result or each specified area, wherein the dimension reduction unit performs dimension reduction on the feature amounts stored in the inspection result feature amount database and the learning result feature amount database and projects them onto a low-dimensional space.
- 4. The teaching data creation support device according to claim 3, further comprising a display unit that displays the processing result of the dimension reduction unit, wherein the display unit displays the post-dimension-reduction data points corresponding to the feature amounts stored in the inspection result feature amount database and the post-dimension-reduction data points corresponding to the feature amounts stored in the learning result feature amount database in different colors or shapes.
- 5. The teaching data creation support device according to claim 4, wherein the display unit has a function of displaying the processing result of the dimension reduction unit in association with the prediction results or specified areas of the image recognition unit, and the device further comprises a teaching data creation unit capable of creating new teaching data from the display content of the display unit.
- 6. The teaching data creation support device according to claim 1, further comprising: a clustering unit that divides the data points projected into the low-dimensional space by the dimension reduction unit into a plurality of area sets; a display unit that displays the processing results of the dimension reduction unit and the clustering unit; and a small amount data identification unit that identifies small amounts of data by calculating the number of data points in each area from the processing result of the clustering unit.
- 7. The teaching data creation support device according to claim 6, further comprising: a recipe creation unit that creates a recipe describing imaging conditions including an imaging position or a number of times of imaging; and an imaging device that images a sample based on the recipe, wherein the input image is an image captured by the imaging device, and the recipe creation unit updates the content of the recipe based on the small amounts of data identified by the small amount data identification unit.
- 8. The teaching data creation support device according to claim 6, wherein the display unit has a function that allows the user to manually divide the data points into areas or designate small amounts of data with respect to the processing results of the dimension reduction unit and the clustering unit.
- 9. The teaching data creation support device according to claim 5, wherein the teaching data creation unit uses the prediction results or specified areas of the image recognition unit as candidates for labels to be given to an image.
- 10. The teaching data creation support device according to claim 1, wherein the image recognition unit performs image processing by machine learning using a CNN (Convolution Neural Network).
- 11. The teaching data creation support device according to claim 1, wherein the dimension reduction unit performs dimension reduction using t-SNE (t-Distributed Stochastic Neighbor Embedding).
- 12. The teaching data creation support device according to claim 1, wherein the important area calculation unit obtains, in addition to the important area, an importance of the feature amount from error backpropagation and the feature amount for each prediction result or each specified area, and the feature amount extraction unit extracts the corresponding feature amount for each prediction result or each specified area by weighting the feature amount with the important area and the importance.
- 13. The teaching data creation support device according to claim 1, wherein the important area calculation unit obtains, as the important area, a preset surrounding range of the detection area of the prediction result or of the specified area.
- 14. The teaching data creation support device according to claim 1, wherein the prediction result is the type and position of an object appearing in the input image as predicted by the image recognition unit.
- 15. The teaching data creation support device according to claim 1, wherein the specified area is an area calculated based on pattern data used to manufacture the sample to be inspected and/or data describing the imaging conditions of the sample.
- 16. The teaching data creation support device according to claim 1, wherein the specified area is an area preset by the user.
- 17. A teaching data creation support method for supporting the creation of teaching data in machine learning, the method comprising the steps of: (a) individually detecting the types and positions of a plurality of defects appearing in an inspection image; (b) specifying, for the results detected in step (a), a feature amount for each defect and storing it in a database; and (c) performing dimension reduction on the feature amounts stored in the database, projecting them onto a low-dimensional space, and displaying the result on a display unit.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020247015299A KR20240089449A (ko) | 2021-12-17 | 2021-12-17 | 교사 데이터 작성 지원 장치, 교사 데이터 작성 지원 방법 |
| CN202180104385.4A CN118302790A (zh) | 2021-12-17 | 2021-12-17 | 示教数据作成辅助装置、示教数据作成辅助方法 |
| PCT/JP2021/046717 WO2023112302A1 (ja) | 2021-12-17 | 2021-12-17 | 教師データ作成支援装置、教師データ作成支援方法 |
| US18/718,670 US20250054270A1 (en) | 2021-12-17 | 2021-12-17 | Labeled training data creation assistance device and labeled training data creation assistance method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2021/046717 WO2023112302A1 (ja) | 2021-12-17 | 2021-12-17 | 教師データ作成支援装置、教師データ作成支援方法 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023112302A1 true WO2023112302A1 (ja) | 2023-06-22 |
Family
ID=86773889
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/046717 Ceased WO2023112302A1 (ja) | 2021-12-17 | 2021-12-17 | 教師データ作成支援装置、教師データ作成支援方法 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20250054270A1 (ja) |
| KR (1) | KR20240089449A (ja) |
| CN (1) | CN118302790A (ja) |
| WO (1) | WO2023112302A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025238808A1 (ja) * | 2024-05-16 | 2025-11-20 | 株式会社日立ハイテク | 情報処理装置、および情報処理方法 |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA3222059A1 (en) * | 2023-01-13 | 2025-07-02 | Maya Heat Transfer Technologies Ltd. | SYSTEM FOR GENERATING AN IMAGE DATASET FOR TRAINING AN ARTIFICIAL INTELLIGENCE MODEL FOR OBJECT RECOGNITION, AND ITS METHOD OF USE |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH11344450A (ja) * | 1998-06-03 | 1999-12-14 | Hitachi Ltd | 教示用データ作成方法並びに欠陥分類方法およびその装置 |
| WO2010023791A1 (ja) * | 2008-08-28 | 2010-03-04 | 株式会社日立ハイテクノロジーズ | 欠陥検査方法及び装置 |
| JP2011089976A (ja) * | 2009-09-28 | 2011-05-06 | Hitachi High-Technologies Corp | 欠陥検査装置および欠陥検査方法 |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6971112B2 (ja) | 2017-09-29 | 2021-11-24 | 株式会社Screenホールディングス | 教師データ作成支援装置、分類装置および教師データ作成支援方法 |
-
2021
- 2021-12-17 KR KR1020247015299A patent/KR20240089449A/ko active Pending
- 2021-12-17 US US18/718,670 patent/US20250054270A1/en active Pending
- 2021-12-17 WO PCT/JP2021/046717 patent/WO2023112302A1/ja not_active Ceased
- 2021-12-17 CN CN202180104385.4A patent/CN118302790A/zh active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH11344450A (ja) * | 1998-06-03 | 1999-12-14 | Hitachi Ltd | 教示用データ作成方法並びに欠陥分類方法およびその装置 |
| WO2010023791A1 (ja) * | 2008-08-28 | 2010-03-04 | 株式会社日立ハイテクノロジーズ | 欠陥検査方法及び装置 |
| JP2011089976A (ja) * | 2009-09-28 | 2011-05-06 | Hitachi High-Technologies Corp | 欠陥検査装置および欠陥検査方法 |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025238808A1 (ja) * | 2024-05-16 | 2025-11-20 | 株式会社日立ハイテク | 情報処理装置、および情報処理方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118302790A (zh) | 2024-07-05 |
| KR20240089449A (ko) | 2024-06-20 |
| US20250054270A1 (en) | 2025-02-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7399235B2 (ja) | パターン検査システム | |
| TWI722599B (zh) | 圖像評價裝置及方法 | |
| JP7004826B2 (ja) | 寸法計測装置、寸法計測方法及び半導体製造システム | |
| TWI755613B (zh) | 基於機器學習之圖案分組方法 | |
| TWI733425B (zh) | 尺寸測量裝置、尺寸測量程式及半導體製造系統 | |
| JP5546317B2 (ja) | 外観検査装置、外観検査用識別器の生成装置及び外観検査用識別器生成方法ならびに外観検査用識別器生成用コンピュータプログラム | |
| KR102740973B1 (ko) | 이미지 데이터 세트 처리 | |
| EP3904866A1 (en) | Defect inspecting device, defect inspecting method, and program for same | |
| US11663713B2 (en) | Image generation system | |
| CN113221956B (zh) | 基于改进的多尺度深度模型的目标识别方法及装置 | |
| JP2014511530A (ja) | ウェブベース材料内の不均一性の検出システム | |
| CN115439458A (zh) | 基于深度图注意力的工业图像缺陷目标检测算法 | |
| KR20220012217A (ko) | 반도체 시편에서의 결함들의 기계 학습 기반 분류 | |
| CN112308854B (zh) | 一种芯片表面瑕疵的自动检测方法、系统及电子设备 | |
| KR102772124B1 (ko) | 결함 검사 시스템 및 결함 검사 방법 | |
| WO2023112302A1 (ja) | 教師データ作成支援装置、教師データ作成支援方法 | |
| CN116342474B (zh) | 晶圆表面缺陷检测方法 | |
| CN115471494B (zh) | 基于图像处理的沃柑质检方法、装置、装备及存储介质 | |
| CN113358042B (zh) | 一种测量膜厚的方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21968217 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 20247015299 Country of ref document: KR Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202180104385.4 Country of ref document: CN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18718670 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 21968217 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: JP |