
TWI908645B - Defect detection device and defect detection method - Google Patents

Defect detection device and defect detection method

Info

Publication number
TWI908645B
TWI908645B TW114112656A TW114112656A TWI908645B TW I908645 B TWI908645 B TW I908645B TW 114112656 A TW114112656 A TW 114112656A TW 114112656 A TW114112656 A TW 114112656A TW I908645 B TWI908645 B TW I908645B
Authority
TW
Taiwan
Prior art keywords
image
loss
aforementioned
defect
defect detection
Prior art date
Application number
TW114112656A
Other languages
Chinese (zh)
Other versions
TW202540961A (en)
Inventor
原田実
前田健宏
川野源
後藤研吾
Original Assignee
日商日立全球先端科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from PCT/JP2024/014012 external-priority patent/WO2025210863A1/en
Application filed by 日商日立全球先端科技股份有限公司
Publication of TW202540961A publication Critical patent/TW202540961A/en
Application granted granted Critical
Publication of TWI908645B publication Critical patent/TWI908645B/en


Abstract

Provided are a defect detection device and defect detection method that can both improve defect detection sensitivity and suppress false alarms while reducing the user's workload, configured as follows. The defect detection device and method comprise: an image acquisition unit that acquires an image of a predetermined region of the sample under inspection; a segmentation processing unit that divides the captured image acquired by the image acquisition unit into multiple segments; and a defect candidate point extraction unit that extracts defect candidates for each segment produced by the segmentation processing unit. The segmentation processing unit performs segmentation so that at least two of the following losses become smaller: 1) a mutual information loss, 2) a shape loss, and 3) a reconstruction loss.

Description

Defect detection device and defect detection method

The present invention relates to a defect detection device and a defect detection method for detecting defects in semiconductors and the like.

In semiconductor wafer manufacturing, to secure profitability it is important to start up the manufacturing process quickly and transition rapidly to a high-yield mass-production system. To this end, various inspection, observation, and measurement devices are introduced into the production line. One of the main inspection devices is the defect detection device, which detects defects generated during the semiconductor manufacturing process.

One of the key performance indicators of a defect detection device is the ability to detect defects without missing them while suppressing false alarms (results in which normal locations are erroneously detected as defects). In semiconductor manufacturing, as the dimensions of the circuit patterns formed on wafers continue to shrink, the size of defects that fatally affect device operation also tends to become smaller. Techniques for suppressing false alarms in defect detection are therefore becoming increasingly important.

Patent Document 1 discloses a defect detection method and defect detection device in which the inspection device has an optical system that acquires an image of the entire chip formed on the sample under inspection, and a function that displays the image of the entire chip and divides it into multiple regions according to criticality. The defect judgment threshold can be changed for each divided region, making erroneous defect detection less likely.

Furthermore, Patent Document 2 discloses a defect detection method in which the inspection target is divided into multiple regions using the following feature quantities as criteria: 1) the grayscale values of a pixel of interest and its surrounding pixels, and 2) the brightness gradient computed using variance, entropy, and a Sobel filter. In addition, as methods for grouping the divided regions, classification by a decision tree based on the aforementioned feature quantities, classification by a support vector machine, classification by the nearest-neighbor rule, and commonly used pattern recognition and design data can be used to group patterns of the same shape into the same category. [Prior Art Documents] [Patent Documents]

[Patent Document 1] Japanese Unexamined Patent Application Publication No. 2002-100660 [Patent Document 2] Japanese Unexamined Patent Application Publication No. 2013-160629

[Problem to Be Solved by the Invention] In Patent Document 1, the division into regions is performed by the device operator. Patent Document 2 lists previously known feature quantities and region division methods, but does not teach which division method should be applied in each process step of semiconductor manufacturing. Since the appearance of the circuit pattern differs for each semiconductor manufacturing process step, it is important to optimize the segmentation processing for each step.

The purpose of the present invention is to provide a defect detection device and defect detection method that can perform appropriate segmentation processing for each process step without requiring teaching from the user, thereby making it easy to adjust the defect detection sensitivity in each segment, achieving both high defect detection sensitivity and suppression of false alarms while reducing the user's workload. [Means for Solving the Problem]

To achieve the above purpose, the present invention is configured as follows. A defect detection device and defect detection method comprising: an image acquisition unit that acquires an image of a predetermined region of the sample under inspection; a segmentation processing unit that divides the captured image acquired by the image acquisition unit into multiple segments; and a defect candidate point extraction unit that extracts defect candidates for each segment produced by the segmentation processing unit. The segmentation processing unit performs segmentation so that at least two of the following losses 1) to 3) become smaller: 1) a mutual information loss, in which mutual information is computed from the segment identification results of a pair of images and the loss is set so that the mutual information is maximized; 2) a shape loss, based on the correction error of superpixel segments, where a superpixel is a small region obtained by grouping pixels of the input image that have similar color or texture; and 3) a reconstruction loss, based on the reconstruction error of the grayscale image. [Effects of the Invention]

According to the present invention, a defect detection device and defect detection method can be provided that perform appropriate segmentation processing for each process step without requiring teaching from the user, thereby making it easy to adjust the defect detection sensitivity in each segment, achieving both high defect detection sensitivity and suppression of false alarms while reducing the user's workload.

Problems, configurations, and effects other than those described above will become clear from the following description of the embodiments.

On a wafer sample, regions can be divided according to the type of circuit pattern formed. For example, a memory device contains regions in which the elements that store information are formed (cell regions) and peripheral circuit regions that control the memory elements. The dimensions of the formed circuit patterns and the magnitude of manufacturing variation often differ between the cell regions and the peripheral circuit regions, and so does the sensitivity required for defect detection. For example, if the sensitivity is tuned to the cell regions, where the pattern dimensions are fine, cases of erroneously detecting manufacturing variation in the peripheral circuit regions increase. Conversely, if the sensitivity is tuned to the peripheral circuit regions, there is a risk of missing fatal defects in the cell regions.

To address this problem, an embodiment of the present invention is described below with reference to the accompanying drawings; this embodiment can perform appropriate segmentation processing for each process step without requiring teaching from the user. Note that the following description is only one embodiment, and the present invention is not limited to the following content. [Embodiment]

The defect detection device of the present invention is described below. In this embodiment, the imaging device is described as an observation device equipped with a scanning electron microscope (SEM) that uses an electron beam. However, the imaging device of the present invention may be a device other than an SEM; the effects of the present invention can also be achieved with an imaging device that uses a charged particle beam such as an ion beam, or an imaging device equipped with an optical microscope.

Figure 1 shows the configuration of the imaging device of the present invention. The device comprises: an SEM 101 that captures images; a control unit 102 that performs overall control; a memory unit 103 that stores information on a disk, semiconductor memory, or the like; a computing unit 104 that performs computations according to programs; an external storage medium input/output unit 105 that inputs and outputs information to and from an external storage medium connected to the device; a user interface unit 106 that controls the input and output of information to and from the user; and a network interface unit 107 that communicates with a defect image classification device and the like via a network. An input/output terminal 113 consisting of a keyboard, mouse, display, and the like is connected to the user interface unit 106.

The SEM 101 includes: a movable stage 109 on which a sample wafer (semiconductor wafer) 108 is mounted; an electron source 110 that irradiates the sample wafer 108 with an electron beam 114; and a detector 111 that detects secondary electrons, backscattered electrons, and the like generated from the sample wafer 108. The SEM 101 also comprises an electron lens (not shown) that focuses the electron beam 114 onto the sample, and a deflector 112 that scans the electron beam 114 over the sample wafer 108.

The main flow of defect detection in this embodiment is explained with reference to Figure 2. It is assumed that the semiconductor wafer 108 to be observed is placed (loaded) on the movable stage 109 and that the imaging conditions for the target wafer have been set in advance.

First, the control unit 102 controls the SEM 101 to capture an image of the inspection region specified by the user (S201). Next, locations considered defective are detected from the captured image as defect candidate points (S202). As a method of extracting defect candidate points, the captured image may be compared with a previously captured image of a region designed to have the same circuit pattern, and locations where differences appear are treated as defect candidates. Alternatively, an image equivalent to a good-product image may be estimated from the captured image and compared against it for detection.

The extraction of defect candidate points is a so-called primary screening, and the resulting defect candidate points may include "false alarms" in which normal locations are erroneously detected as defects. Therefore, in the subsequent processing serving as a secondary screening, it is important to extract the points judged to be truly defective from among the defect candidate points. In this embodiment, independently of (in parallel with) the detection of defect candidate points, the captured image is divided into segments (S203) using segmentation parameters set in advance (M201). The result of this processing is the segment to which each pixel of the captured image belongs. Hereinafter, an image whose grayscale value is the segment index is referred to as a segment image.

Next, for each obtained defect candidate point, the segment to which it belongs is determined by referring to the segment image (defect candidate point classification, S204). Since each defect candidate point has coordinate information, the segment index can be obtained by referencing the segment image at those coordinates.
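The coordinate lookup used in the defect candidate point classification (S204) can be sketched as follows. This is a minimal illustration with invented names and toy data; the patent does not specify an implementation.

```python
import numpy as np

def classify_candidates(segment_image, candidates):
    """Return the segment index for each (row, col) defect candidate point.

    The segment image stores, as its grayscale value, the segment index of
    each pixel, so classification is a direct per-coordinate lookup.
    """
    return [int(segment_image[r, c]) for (r, c) in candidates]

# Toy 4x4 segment image: left half belongs to segment 1, right half to segment 2.
seg = np.array([[1, 1, 2, 2]] * 4)
print(classify_candidates(seg, [(0, 1), (3, 3)]))  # [1, 2]
```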

Next, based on the sensitivity adjustment coefficient set in advance for each segment, the anomaly score of each defect candidate point belonging to that segment is corrected (S205). The anomaly score here is an index value representing the likelihood that each defect candidate point is defective, computed from the grayscale difference at comparison detection, the area, and other feature quantities. Finally, a preset detection threshold is applied to the corrected anomaly scores to obtain the finally output defect points (S206). The above is the detection processing applied to the image captured in the image acquisition processing S201; this processing can be repeated until the inspection region specified by the user is covered.

The system may be configured so that the image acquisition processing using the imaging hardware (S201) and the defect detection processing performed in software (S202 to S206) run in parallel: the sequentially captured images are stored, and the completed images are read out and applied to the defect detection processing.

The aforementioned defect detection processing (S202 to S206) is illustrated using images. Figure 3 schematically shows an example of a captured image, containing a cell region in which circular patterns are arrayed and the adjacent peripheral circuit regions. The example shows a vertical pattern region, a horizontal pattern region, and a rectangular pattern region as peripheral circuit regions.

Figure 4 shows the processing result of the segmentation processing (S203). Here, the image is divided into eight segments: the substrate regions (floor areas) on which the various circuit patterns are formed, and the circuit-pattern regions within each floor area. Specifically, the floor areas are divided into the cell region (segment #1), the vertical pattern region (segment #3), the rectangular pattern region (segment #5), and the horizontal pattern region (segment #7). The circuit-pattern regions are divided into the circular patterns (segment #2), the vertical patterns (segment #4), the rectangular patterns (segment #6), and the horizontal patterns (segment #8). Note that this is only one example of a segmentation result; the result does not have to take this form. For example, division into the floor areas alone may be sufficient.

Figure 5 shows an example of the result of detecting defect candidate points in the captured image by the defect candidate point detection processing (S202); the detected defect candidate points are indicated by cross marks.

Figure 6 shows the obtained defect candidate points arranged by segment according to their anomaly scores. In the cell region (segment #1) of Figure 5, there are two defect candidate points, 601 and 604, indicated by cross marks. Suppose, for example, that 601 is a real defect. Without segmentation, a single detection threshold 602 cannot separate 604 from the real defect 601, and the false alarm 604 is also judged to be a defect. On the other hand, by using a detection threshold 603 set for each segment, the false alarm 604 can be separated from the real defect 601. Although Figure 6 illustrates setting a detection threshold for each segment, the same effect can be obtained by multiplying the anomaly scores of the defect candidate points belonging to each segment by a correction coefficient. For example, multiplying the anomaly scores of the defect candidate points belonging to segment #1 by 1/2 is equivalent to doubling the detection threshold of segment #1.
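The equivalence between scaling a segment's anomaly scores and scaling its detection threshold can be demonstrated numerically. The scores and threshold below are invented toy values, not values from the patent.

```python
def detected(score, threshold):
    """A candidate is output as a defect when its anomaly score exceeds the threshold."""
    return score > threshold

base_threshold = 100.0
scores_segment1 = [250.0, 150.0]  # e.g. a real defect and a nuisance point in segment #1

# Option A: multiply the anomaly scores of segment #1 by 1/2.
a = [detected(0.5 * s, base_threshold) for s in scores_segment1]
# Option B: double the detection threshold of segment #1 instead.
b = [detected(s, 2.0 * base_threshold) for s in scores_segment1]

assert a == b      # both corrections select exactly the same defects
print(a)           # [True, False]
```

Because the comparison `0.5 * s > t` is equivalent to `s > 2 * t`, either formulation can be used to implement the per-segment sensitivity adjustment.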

As described above, by dividing the captured image into segments based on the appearance of circuit patterns, elements, and the like, and setting a different detection sensitivity for each segment, the defect detection sensitivity of each segment can be adjusted easily. This makes it possible to simultaneously improve defect detection sensitivity, suppress false alarms, and reduce the user's workload.

The image segmentation method of this embodiment is described below. There are many semiconductor manufacturing process steps subject to inspection, and the appearance of the circuit patterns formed differs in each step. Furthermore, depending on the SEM imaging conditions, the appearance of the captured image may change significantly even for patterns of the same structure. Therefore, to perform segmentation processing suited to each process step, the processing parameters related to segmentation are automatically adjusted before inspection.

The automatically adjusted parameters are stored in the memory unit M201 and used in the segmentation processing (S203) during detection. The automatic parameter adjustment is carried out at the stage of creating the inspection recipe for the target process step, and is initiated by the operator.

The automatic adjustment of the parameters related to segmentation processing is explained with reference to Figure 7. First, an image set for automatic adjustment is acquired (S701). This can be done by capturing images of the inspection region in the target process step. Alternatively, images appropriately sampled from captured images may be used.

Next, the parameters to be adjusted are initialized (S702). Standard values set in advance may be used, or parameters automatically adjusted for a similar process step may be used as initial values. Alternatively, random numbers may be used to set the initial values. Then, the image set is segmented using the set parameters (S703), the loss is computed from the segmentation results using the method described later (S704), and the parameters are updated (S705).

The above steps are repeated, and a termination judgment is made during the repetition (S706). If the desired condition is satisfied, the repetition ends and the obtained parameters are stored in the memory unit M201 (S707). The termination condition may be reaching a preset number of iterations, or the loss having sufficiently converged.

In the segmentation processing (S703), various methods can be used to identify the segment of each pixel. Here, a method using multiple convolution operations and activation functions, that is, a deep learning method, is described as an example. As an example of the configuration of a convolutional neural network, a network with a three-layer structure as shown in Figure 8 can be used. Here, Y is the captured image given as input, F1(Y) and F2(Y) are intermediate data, and F(Y) is the output of the network. The intermediate data and the network output are computed using Equations 1 to 3, where "*" denotes the convolution operation. The final segment image L(Y) is computed by Equation 4. Here, W1 consists of n1 filters of size c0×f1×f1, where c0 is the number of channels of the input image and f1 is the size of the spatial filter.

By convolving filters of size c0×f1×f1 with the input image n1 times, an n1-dimensional feature map is obtained. B1 is an n1-dimensional vector, the bias component corresponding to the n1 filters. Similarly, W2 consists of filters of size n1×f2×f2 with bias B2, an n2-dimensional vector, and W3 consists of filters of size n2×f3×f3 with bias B3, a c3-dimensional vector. Here, c0 is the number of channels of the captured image, and c3 is a value determined by the number of segments to divide into. In addition, f1, f2, n1, and n2 are hyperparameters decided by the user before training; for example, they can be set to f1=9, f2=5, n1=128, and n2=64. The parameters adjusted by the parameter update processing (S705) are W1, W2, W3, B1, B2, and B3.
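The three-layer forward pass of Equations 1 to 4 can be sketched in plain numpy as follows. This is an illustrative toy: ReLU is assumed as the activation function, "same" padding is assumed, and the channel counts and filter sizes are shrunk for speed (the text's f1=9, f2=5, n1=128, n2=64 would work identically).

```python
import numpy as np

def conv2d(x, w, b):
    """Naive same-padded convolution. x: (c_in, H, W); w: (c_out, c_in, f, f); b: (c_out,)."""
    c_out, c_in, f, _ = w.shape
    p = f // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(w[o] * xp[:, i:i + f, j:j + f]) + b[o]
    return out

def segment(Y, params):
    (W1, B1), (W2, B2), (W3, B3) = params
    F1 = np.maximum(conv2d(Y, W1, B1), 0)   # Equation 1 (ReLU assumed)
    F2 = np.maximum(conv2d(F1, W2, B2), 0)  # Equation 2
    F = conv2d(F2, W3, B3)                  # Equation 3
    return F.argmax(axis=0)                 # Equation 4: per-pixel segment index

rng = np.random.default_rng(0)
c0, n1, n2, c3 = 1, 4, 3, 2                 # toy channel counts; c3 = number of segments
params = [(rng.standard_normal((n1, c0, 3, 3)), np.zeros(n1)),
          (rng.standard_normal((n2, n1, 3, 3)), np.zeros(n2)),
          (rng.standard_normal((c3, n2, 3, 3)), np.zeros(c3))]
L = segment(rng.standard_normal((c0, 8, 8)), params)
print(L.shape)  # (8, 8); each entry is a segment index in {0, 1}
```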

In this adjustment processing, the backpropagation commonly used in neural network training can be used. When computing the estimation error, all acquired training images may be used, or a mini-batch method may be employed; that is, several images may be randomly sampled from the training images and the parameter update repeated. Furthermore, patch images may be randomly cropped from a single image and used as the input image Y of the neural network. As a result, training can proceed efficiently.
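The random patch cropping described above can be sketched as follows. Function name, patch size, and batch size are illustrative; the patent does not prescribe them.

```python
import numpy as np

def sample_patches(image, patch_size, batch_size, rng):
    """Randomly crop `batch_size` square patches from one captured image."""
    H, W = image.shape
    patches = []
    for _ in range(batch_size):
        r = rng.integers(0, H - patch_size + 1)
        c = rng.integers(0, W - patch_size + 1)
        patches.append(image[r:r + patch_size, c:c + patch_size])
    return np.stack(patches)

rng = np.random.default_rng(0)
image = np.arange(64 * 64, dtype=float).reshape(64, 64)  # toy stand-in for a captured image
batch = sample_patches(image, patch_size=16, batch_size=8, rng=rng)
print(batch.shape)  # (8, 16, 16): one mini-batch of input patches Y
```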

Other configurations may also be used for the convolutional neural network shown above. For example, the number of layers may be changed and a network with four or more layers may be used, or a configuration with skip connections (such as U-Net or ResNet (Residual Neural Network)) may be used. A Transformer model may also be used.

The loss computation processing (S704) in the automatic adjustment of the parameters related to segmentation processing in this embodiment is explained with reference to Figure 9. This processing computes three types of losses: a "mutual information loss", a "shape loss", and a "reconstruction loss". Each loss is multiplied by a preset weight coefficient, and the results are summed to form the final loss.

The "mutual information loss" computes the mutual information from the segment identification results of a pair of images, and the loss is set so that the mutual information is maximized. First, the "mutual information loss" is explained using image clustering as an example. Image segmentation can be viewed as clustering of local image regions, so a method that can perform clustering can easily be extended to segmentation. Given a probability distribution G, the information entropy H(G) is defined by Equation 5. The closer the distribution G is to a uniform distribution, the larger the value of the information entropy H; when the distribution G converges to a single point, the information entropy H becomes 0.
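The two limiting behaviors of the entropy H(G) in Equation 5 can be checked directly; the small helper below is an illustrative sketch.

```python
import numpy as np

def entropy(g):
    """Information entropy H(G) = -sum_i g_i * log(g_i) of a probability distribution."""
    g = np.asarray(g, dtype=float)
    g = g[g > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(g * np.log(g)))

uniform = np.full(4, 0.25)          # distribution spread evenly over 4 outcomes
point = np.array([1.0, 0.0, 0.0, 0.0])  # distribution converged to a single point

print(round(entropy(uniform), 4))   # 1.3863, i.e. log(4): entropy is maximal
assert entropy(point) == 0.0        # converged distribution has zero entropy
```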

The mutual information I is a quantity that measures the mutual dependence of two probability distributions K and L; it can be seen as a value evaluating their degree of agreement in terms of information entropy. If the two probability distributions K and L are independent, the mutual information is zero. The mutual information I can be computed from the information entropy via Equation 6. Figure 10 illustrates the computation of the "mutual information loss". In the figure, CNN denotes convolutional layers, FC denotes a fully connected layer, and Softmax denotes the activation function, corresponding to the aforementioned segmentation processing (S703). Given an input image x, a perturbation that does not change the classification category is applied to x to produce an image x'. Applicable image perturbations include, for example, translation, scaling, rotation, and noise addition (in the example in the figure, the image pair is represented by text images of the letter 'A' with changed font and position).

Let the outputs obtained by inputting the images x and x' into the identifier be Φ(x) and Φ(x'), respectively (Φ(x) ∈ [0,1]^C, where C is the number of segments to divide into). Φ(x) can be interpreted as the probability distribution of a discrete random variable z ∈ {1, ..., C} for image x, with P(z=c|x) = Φc(x). Φ(x) and Φ(x') are computed for each image in a batch, and the C×C matrix P is computed as the joint probability of Φ(x) and Φ(x') (Equation 7, where n is the batch size). Next, P is symmetrized to obtain the matrix S = (P + P^T)/2.

Learning is performed by maximizing the mutual information of the marginal probabilities Sx and Sy of the matrix S (Equation 8). Summing the matrix S along the row and column directions gives the marginal probabilities Sx = S(z=x) and Sy = S(z=y).
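The pipeline of Equations 6 to 8 — joint probability from paired soft assignments, symmetrization, and mutual information of the marginals — can be sketched as follows. The assignments are toy one-hot data, not outputs of an actual identifier.

```python
import numpy as np

def joint(phi_x, phi_xp):
    """Equation 7 plus symmetrization: P = phi_x^T phi_x' / n, S = (P + P^T)/2."""
    P = phi_x.T @ phi_xp / phi_x.shape[0]  # n = batch size
    return (P + P.T) / 2

def mutual_information(S, eps=1e-12):
    """Mutual information of the marginals Sx, Sy of the joint matrix S (Equations 6, 8)."""
    Sx = S.sum(axis=1)  # marginal over rows
    Sy = S.sum(axis=0)  # marginal over columns
    return float(np.sum(S * (np.log(S + eps) - np.log(np.outer(Sx, Sy) + eps))))

# Perfectly consistent pair assignments: S becomes (1/C) * identity, so I = log C.
phi = np.eye(2)[np.array([0, 1, 0, 1])]  # one-hot assignments for a batch of 4, C = 2
S = joint(phi, phi)
print(round(mutual_information(S), 4))   # 0.6931, i.e. log(2)

# Independent assignments: the joint factorizes into the marginals, so I = 0.
S_indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(round(mutual_information(S_indep), 4))  # 0.0
```

Maximizing this quantity during training pushes the assignments of x and x' to agree while keeping the classes evenly used, which is what the next paragraph describes.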

Maximizing the mutual information means maximizing the information entropy of Sx and Sy while bringing the distributions of Sx and Sy closer together. This means that, given many images, they are classified evenly across the classes, while learning proceeds so that the images x and x' are classified into the same class.

Returning to Figure 9, the computation of the "mutual information loss" of the present invention is explained further. As described above, computing the mutual information requires image pairs that should produce the same segment identification result (segment image). Without teaching from the user, it is difficult to sample such image pairs from the input image set. Therefore, a paired image x' is generated from the input image x (S801) and used to compute the loss.

After generating the image pair, segment identification is performed on the input image x and its paired image x' (S802), the identification result L(x') of the paired image is inversely transformed (S803), and the loss is computed from the mutual information (S804). Specifically, the identification results L(x) and L(x') of the input image x and the perturbed image x' are treated as discrete random variables, and the loss is computed so as to maximize the mutual information.

The paired image x' can be generated by applying a combination of operations such as scaling, rotation, flipping, warping, and contrast change to the image x. When the pair is generated by scaling, rotation, or flipping, the inverse transformation step (S803) applies the inverse scaling, inverse rotation, or inverse flip to L(x') so that the identification result L(x) of the input image and the identification result L(x') of the paired image correspond pixel by pixel. As a result, scale invariance, rotational invariance, and flip invariance are obtained as identification properties.
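The pair-generation (S801) and inverse-transformation (S803) steps can be illustrated with a 90-degree rotation. The thresholding `identify` function below is a purely illustrative stand-in for the segment identification step (S802), not the patent's network:

```python
import numpy as np

def make_pair(x, k=1):
    """S801: generate the paired image x' by rotating x by k * 90 degrees."""
    return np.rot90(x, k)

def inverse_transform(label_map, k=1):
    """S803: undo the rotation on L(x') so it aligns pixel-wise with L(x)."""
    return np.rot90(label_map, -k)

def identify(x):
    """Toy stand-in for segment identification (S802): global threshold."""
    return (x > x.mean()).astype(int)

x = np.arange(16, dtype=float).reshape(4, 4)
aligned = inverse_transform(identify(make_pair(x, k=1)), k=1)
# After the inverse rotation the two label maps can be compared
# pixel-wise, which is what the mutual-information loss (S804) requires.
```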

The "shape loss" is a loss that focuses on the shape of the segments. Superpixels are computed for the input image (S805), the computed superpixels are used to correct the segment identification result F(x) or L(x) of the input image (S806), and the shape error between the results before and after correction is computed as the loss (S807). Figure 11 shows an example of segment correction using superpixels. 1402 shows a visualization of the segment identification result for the input image 1401. When only the aforementioned "mutual information loss" is used for the automatic adjustment of the parameters of the segmentation process, the contours of the segments may not coincide with the contours of the circuit pattern. 1403 shows the result of computing 6×6 superpixels from the input image 1401. A superpixel is a small region formed by grouping pixels with similar color or texture; consequently, superpixel boundaries tend strongly to coincide with the boundaries of the circuit pattern. 1404 shows the result of computing, for each superpixel, the representative segment of the pixels belonging to it, and visualizing the outcome. In the shape-loss computation (S807), the cross entropy and the squared error between 1402 and 1404 are computed as the shape error before and after correction.

By introducing this loss, the direction of parameter adjustment can be determined so that the segment boundaries in the segment image coincide with the boundaries of the circuit pattern.

As a result, segments that match the circuit pattern shape, such as segments #2, #4, #6, and #8 in Figure 4, can be obtained. The superpixels may be computed either with conventional image-processing techniques or with a convolutional neural network.
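A minimal sketch of the superpixel correction (S806) and shape loss (S807), assuming a superpixel map is already available (e.g. from a SLIC-style algorithm). Majority voting per superpixel plays the role of the "representative segment" of 1404, and the squared-error variant of the loss is used:

```python
import numpy as np

def correct_with_superpixels(labels, superpixels, n_classes):
    """S806: replace every pixel's label by the majority (representative)
    label of the superpixel it belongs to."""
    corrected = np.empty_like(labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        corrected[mask] = np.bincount(labels[mask], minlength=n_classes).argmax()
    return corrected

def shape_loss(labels, superpixels, n_classes):
    """S807: squared error between the one-hot segment maps before and
    after the superpixel-based correction."""
    corrected = correct_with_superpixels(labels, superpixels, n_classes)
    one_hot = np.eye(n_classes)
    return np.mean((one_hot[labels] - one_hot[corrected]) ** 2)

# Two superpixels (left/right halves); one noisy pixel disagrees with
# its superpixel, so the loss is small but nonzero.
superpixels = np.hstack([np.zeros((4, 2), int), np.ones((4, 2), int)])
noisy = superpixels.copy()
noisy[0, 0] = 1      # one pixel violates the superpixel boundary
```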

The "reconstruction loss" is a loss that focuses on the error incurred when the input image is restored from the segment image. If the segment identification result correctly reflects the structure of the input image, it should be possible to reconstruct the input image from the segment image. Therefore, the segment identification result F(x) or L(x) of the input image is taken as input, the gray levels of the input image are reconstructed (S808), and the error between the reconstruction and the input image is computed as the loss (S809).

The absolute gray-level error or the squared error can be used as this error. A convolutional neural network may be used to reconstruct the grayscale image from the segment identification result; in that case, the parameters of the grayscale reconstruction process can be adjusted simultaneously with the parameters of the segment identification process, again using the absolute or squared gray-level error as the loss of the reconstruction process. By introducing this loss, the direction of parameter adjustment can be determined so that the segment identification process outputs identification results from which the grayscale image can be reconstructed.
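As a hedged sketch of S808/S809: in place of the CNN the patent allows for the reconstruction step, each segment is simply painted with the mean gray level of its pixels, and the squared gray-level error is returned. The function names are illustrative:

```python
import numpy as np

def reconstruct(labels, image, n_classes):
    """S808: simplest grayscale reconstruction - paint each segment
    with the mean gray level of its pixels (the patent also allows a
    CNN for this step)."""
    recon = np.zeros_like(image, dtype=float)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            recon[mask] = image[mask].mean()
    return recon

def reconstruction_loss(labels, image, n_classes):
    """S809: squared gray-level error between reconstruction and input."""
    return np.mean((reconstruct(labels, image, n_classes) - image) ** 2)

# A piecewise-constant image: labels that match its structure
# reconstruct it exactly; a single merged segment does not.
image = np.hstack([np.zeros((4, 2)), np.ones((4, 2))])
good = np.hstack([np.zeros((4, 2), int), np.ones((4, 2), int)])
```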

The parameters of the segment identification process (S802) adjusted with the losses described above are stored in the memory unit M201 as the parameters of the segmentation process (S707 in Figure 7).

Although the above describes segment identification using a convolutional neural network, rule-based processing may be used instead. In that case, parameters such as decision thresholds become the adjustment targets. The simplest way to optimize rule-based processing is a grid search; alternatively, design of experiments or Bayesian optimization can be used. In either case, the adjustment can be driven by the "mutual information loss", the "shape loss", and the "reconstruction loss".
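The grid search over a rule-based segmenter can be sketched as follows. The single-threshold rule and the within-segment variance used as a stand-in loss are assumptions of this example; in practice the combined losses described above would be plugged into `rule_loss` in the same way:

```python
import numpy as np

def rule_loss(threshold, image):
    """Toy loss for a one-parameter rule-based segmenter: within-segment
    gray-level variance (a cheap proxy for the reconstruction loss)."""
    labels = (image > threshold).astype(int)
    loss = 0.0
    for c in (0, 1):
        mask = labels == c
        if mask.any():
            loss += ((image[mask] - image[mask].mean()) ** 2).sum()
    return loss / image.size

image = np.hstack([np.full((4, 2), 0.1), np.full((4, 2), 0.9)])
grid = np.linspace(0.0, 1.0, 21)                 # exhaustive grid search
best = min(grid, key=lambda t: rule_loss(t, image))
# Any threshold separating the two gray levels drives the loss to zero.
```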

In addition, a rule-based segment shaping process can be applied on top of the segment identification result. For example, as shown in Figure 12, after segments 1001 and 1002 are obtained by the segment identification process, a new segment 1003 can be generated according to a preset rule, for example by shrinking segment 1002 by (Δx, Δy).
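The shrink-by-(Δx, Δy) rule of Figure 12 corresponds to a morphological erosion. A self-contained NumPy sketch, assuming the segment does not touch the image border (since `np.roll` wraps around):

```python
import numpy as np

def shrink_segment(mask, dx, dy):
    """Shrink a binary segment mask by dx pixels horizontally and dy
    pixels vertically: a pixel survives only if every pixel within the
    (2*dx+1) x (2*dy+1) neighborhood belongs to the segment."""
    out = mask.copy()
    for sx in range(-dx, dx + 1):
        for sy in range(-dy, dy + 1):
            out &= np.roll(np.roll(mask, sy, axis=0), sx, axis=1)
    return out

# A 4x4 segment shrunk by (1, 1) leaves its 2x2 core, analogous to
# deriving segment 1003 from segment 1002 in Figure 12.
seg = np.zeros((6, 6), dtype=bool)
seg[1:5, 1:5] = True
core = shrink_segment(seg, 1, 1)
```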

Figure 13 shows a GUI of this embodiment for confirming and adjusting segmentation results. It comprises: an interface 1101 that lists the captured image set and lets the user make a selection; an interface 1102 that displays the image selected from the list; an interface 1103 that displays the segmentation result for that image; and an interface 1104 for adjusting the segment shaping rules.

Figure 14 shows the GUI for adjusting the sensitivity. It comprises: an interface 1201 that displays the distribution of the anomaly degree for each segment; an interface 1202 that displays patch images (local images cropped around the detection points) of the detected defect candidates; and an interface 1203 for adjusting the detection sensitivity of each segment. Here, defect detection points selected in the anomaly-degree distribution can be displayed in association with the patch images; for example, the patch image corresponding to a defect candidate point selected on the distribution is highlighted. This makes it easy to judge whether a candidate is a defect that should be detected or a false alarm, and reduces the workload of sensitivity adjustment.
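The per-segment sensitivity coefficients of interface 1203 can be modeled as per-segment threshold scales. The names below (`detect_defects`, `base_threshold`) are illustrative, not from the patent:

```python
import numpy as np

def detect_defects(scores, segment_map, coeffs, base_threshold=1.0):
    """Report a pixel as a defect candidate when its anomaly degree
    exceeds base_threshold * coeffs[segment id]; a larger coefficient
    makes that segment less sensitive."""
    thresholds = base_threshold * np.asarray(coeffs)[segment_map]
    return scores > thresholds

scores = np.array([[0.5, 1.5],
                   [2.5, 0.2]])        # per-pixel anomaly degrees
segment_map = np.array([[0, 0],
                        [1, 1]])       # segment id of each pixel
coeffs = [1.0, 3.0]                    # segment 1 is noisier: raise its threshold
hits = detect_defects(scores, segment_map, coeffs)
```

Changing a coefficient and re-running the detection is the programmatic counterpart of the real-time distribution update described below.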

The patch image display interface also provides a function to narrow the display down to the patch images of the defect candidate points contained in a specified segment. The sensitivity adjustment interface 1203 provides an interface for specifying a sensitivity-adjustment coefficient for each segment, and the distribution shown in the anomaly-degree display interface 1201 is updated in real time whenever a sensitivity coefficient is changed.

By providing the means described above, an appropriate segmentation process can be configured for each process step even without user teaching. As a result, the defect detection sensitivity of each segment can be adjusted easily, making it possible to simultaneously improve defect detection sensitivity, suppress false alarms, and reduce the user's workload.

101: Image capturing apparatus
S201: Image acquisition processing
S202: Defect candidate point detection processing
S203: Segmentation processing
S204: Defect candidate point classification processing
S205: Anomaly degree correction processing
S206: Defect point extraction processing
M201: Parameter memory unit
S703: Segmentation processing
S704: Loss calculation processing
S705: Parameter update processing
S706: Parameter-adjustment end judgment processing
S707: Parameter saving processing
S801: Image pair generation processing
S802: Segment identification processing
S803: Inverse transformation processing
S804: Mutual information loss calculation processing
S805: Superpixel calculation processing
S806: Segment correction processing
S807: Shape loss calculation processing
S808: Grayscale image reconstruction processing
S809: Reconstruction loss calculation processing
1101: Image list display interface
1102: Input image display interface
1103: Segmentation result display interface
1104: Segment shaping rule input interface
1201: Anomaly degree display interface
1202: Patch image interface
1203: Sensitivity adjustment interface

[Figure 1] is a configuration diagram of the image capturing apparatus.
[Figure 2] shows the processing flow of the defect detection process of the present invention.
[Figure 3] schematically shows an example of a captured image.
[Figure 4] shows an example of the result of segmenting a captured image.
[Figure 5] shows an example of the result of defect candidate point detection on a captured image.
[Figure 6] shows an example of displaying the anomaly degree of defect candidate points for each segment.
[Figure 7] shows the processing flow of the automatic parameter adjustment for the segmentation process of the present invention.
[Figure 8] shows a configuration example of the convolutional neural network that identifies the segment of each pixel in the segmentation process of the present invention.
[Figure 9] shows the flow of the loss calculation in the automatic parameter adjustment of the segmentation process of the present invention.
[Figure 10] illustrates the calculation method of the "mutual information loss".
[Figure 11] shows an example of a segment correction result using superpixels.
[Figure 12] shows an example of a result obtained by additionally applying rule-based shaping to a segmentation result.
[Figure 13] shows the GUI of the present invention for confirming and adjusting segmentation results.
[Figure 14] shows the GUI of the present invention for sensitivity adjustment.

S701: Obtain image set

S702: Initialize parameters

S703: Perform segmentation

S704: Calculate loss

S705: Update parameters

S706: Judge whether to end

S707: Save parameters

M201: Memory unit

Claims (14)

1. A defect detection apparatus, characterized by comprising: an image acquisition unit that acquires an image of a predetermined area of a sample to be inspected; a segmentation processing unit that divides the captured image acquired by the image acquisition unit into a plurality of segments; and a defect candidate point extraction unit that extracts defect candidates for each segment produced by the segmentation processing unit; wherein the segmentation processing unit performs the segmentation so that at least two of the following losses 1) to 3) become smaller: 1) a mutual information loss, in which the mutual information is computed from the segment identification results of a pair of images and the loss is set so that the mutual information is maximized; 2) a shape loss based on the correction error of superpixel segments, a superpixel being a small region obtained by grouping pixels of the input image that have similar color or texture; and 3) a reconstruction loss based on the grayscale image reconstruction error.

2. The defect detection apparatus according to claim 1, wherein, as the mutual information loss based on the mutual information, the identification results of the input image x and of a deformed image x' derived from the input image x are treated as discrete random variables, and a loss that maximizes the mutual information is computed.
3. The defect detection apparatus according to claim 1, wherein, as the loss based on the segment correction error, superpixels are computed from the input image, the identification result is shaped according to the computed superpixels, and the error between the results before and after shaping is used as the loss.

4. The defect detection apparatus according to claim 1, wherein, as the loss based on the grayscale image reconstruction error, the input image is reconstructed from the identification result, and the error between the input image and the reconstructed image is used as the loss.

5. The defect detection apparatus according to any one of claims 1 to 4, wherein the segmentation processing in the segmentation processing unit and the defect candidate extraction processing in the defect candidate point extraction unit are executed in parallel on two or more processors.

6. The defect detection apparatus according to any one of claims 1 to 4, further comprising a segment shaping processing unit that defines new segments by additionally applying rule-based shaping processing to the segmentation result obtained by the segmentation processing unit.

7. The defect detection apparatus according to any one of claims 1 to 4, further comprising a display unit that displays, for the defect candidates extracted by the defect candidate point extraction unit, the distribution of the anomaly degree for each segment.
8. A defect detection method, characterized by comprising: an image acquisition step of acquiring an image of a predetermined area of a sample to be inspected; a segmentation processing step of dividing the captured image acquired in the image acquisition step into a plurality of segments; and a defect candidate point extraction step of extracting defect candidates from each segment produced in the segmentation processing step; wherein, in the segmentation processing step, the segmentation is performed so that at least two of the following losses 1) to 3) become smaller: 1) a mutual information loss, in which the mutual information is computed from the segment identification results of a pair of images and the loss is set so that the mutual information is maximized; 2) a shape loss based on the correction error of superpixel segments, a superpixel being a small region obtained by grouping pixels of the input image that have similar color or texture; and 3) a reconstruction loss based on the grayscale image reconstruction error.

9. The defect detection method according to claim 8, wherein, as the mutual information loss based on the mutual information, the identification results of the input image x and of a deformed image x' derived from the input image x are treated as discrete random variables, and a loss that maximizes the mutual information is computed.
10. The defect detection method according to claim 8, wherein, as the loss based on the segment correction error, superpixels are computed from the input image, the identification result is shaped according to the computed superpixels, and the error between the results before and after shaping is used as the loss.

11. The defect detection method according to claim 8, wherein, as the loss based on the grayscale image reconstruction error, the input image is reconstructed from the identification result, and the error between the input image and the reconstructed image is used as the loss.

12. The defect detection method according to any one of claims 8 to 11, wherein the segmentation processing in the segmentation processing step and the defect candidate extraction processing in the defect candidate point extraction step are executed in parallel.

13. The defect detection method according to any one of claims 8 to 11, further comprising segment shaping processing that defines new segments by additionally applying rule-based shaping processing to the segmentation result obtained in the segmentation processing step.

14. The defect detection method according to any one of claims 8 to 11, further comprising a display step of displaying, for the defect candidates extracted in the defect candidate point extraction step, the distribution of the anomaly degree for each segment.
TW114112656A 2024-04-05 2025-04-01 Defect detection device and defect detection method TWI908645B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
WOPCT/JP2024/014012 2024-04-05
PCT/JP2024/014012 WO2025210863A1 (en) 2024-04-05 2024-04-05 Defect inspection device and defect inspection method

Publications (2)

Publication Number Publication Date
TW202540961A TW202540961A (en) 2025-10-16
TWI908645B true TWI908645B (en) 2025-12-11


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240053287A1 (en) 2022-08-12 2024-02-15 Saudi Arabian Oil Company Probability of detection of lifecycle phases of corrosion under insulation using artificial intelligence and temporal thermography

