TWI819698B - Method for determining defect and electronic apparatus - Google Patents
- Publication number
- TWI819698B (application TW111126380A)
- Authority
- TW
- Taiwan
- Prior art keywords
- fused
- coefficient set
- modulation transfer
- point spread
- transfer function
- Prior art date
Landscapes
- Image Processing (AREA)
- Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)
- Testing Electric Properties And Detecting Electric Faults (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
Description
The present invention relates to image recognition technology, and in particular to a method and an electronic device for determining defects based on image recognition.
Interferometers are widely used in scientific research and industrial production to measure small displacements, refractive indices, and surface flatness. In current optical inspection of lenses, an interferometer typically produces an interference image, and a human inspector then judges from the appearance of that image whether the lens is defective. Direct human judgment may be insufficiently accurate because of lens-to-lens variation in imaging conditions, time pressure, eye fatigue, or inconsistent judgment criteria. Each interferometer also requires its own inspector, which incurs labor cost. In addition, the appearance of an interference image may result from several superimposed optical factors, so it is difficult to automate the judgment with a single threshold or rule. Furthermore, if the acceptance criteria differ from customer to customer, it is also difficult to adjust the thresholds or rules precisely and dynamically.
The present invention provides a method and an electronic device for determining defects, offering a systematic way to achieve accurate automatic inspection.
The method of the present invention for determining defects based on image recognition uses a processor to perform steps including: inputting an interference image corresponding to an object under test into a plurality of trained neural network models to obtain a plurality of sets of Zernike coefficients; obtaining a fused coefficient set based on the plurality of sets of Zernike coefficients; calculating a point spread function (PSF) and a modulation transfer function (MTF) from the fused coefficient set; and determining whether the object under test is defective based on the fused coefficient set, the point spread function, and the modulation transfer function.
In an embodiment of the present invention, the neural network models include models based on a convolutional neural network (CNN), an autoencoder, and a generative adversarial network (GAN).
In an embodiment of the present invention, the step of obtaining the fused coefficient set based on the plurality of sets of Zernike coefficients includes: applying one of averaging, the median, the maximum, the minimum, and the weighted average to the plurality of sets of Zernike coefficients to obtain the fused coefficient set.
In an embodiment of the present invention, the step of obtaining the fused coefficient set based on the plurality of sets of Zernike coefficients includes: using a voting mechanism to obtain the fused coefficient set from the plurality of sets of Zernike coefficients.
In an embodiment of the present invention, the step of determining whether the object under test is defective based on the fused coefficient set, the point spread function, and the modulation transfer function includes: comparing at least one of the fused coefficient set, the point spread function, and the modulation transfer function with one or more corresponding thresholds to determine the degree of defect or the grade of the object under test.
In an embodiment of the present invention, the step of determining whether the object under test is defective based on the fused coefficient set, the point spread function, and the modulation transfer function includes: inputting at least one of the fused coefficient set, the point spread function, and the modulation transfer function into its corresponding inspection model to determine the degree of defect or the grade of the object under test.
The electronic device of the present invention includes: a storage device containing a plurality of trained neural network models; and a processor coupled to the storage device and configured to: input an interference image corresponding to an object under test into the neural network models to obtain a plurality of sets of Zernike coefficients; obtain a fused coefficient set based on the plurality of sets of Zernike coefficients; calculate a point spread function and a modulation transfer function from the fused coefficient set; and determine whether the object under test is defective based on the fused coefficient set, the point spread function, and the modulation transfer function.
Based on the above, the present disclosure takes an optical viewpoint: the Zernike coefficients are inferred back from the interference image, and the point spread function and the modulation transfer function are then derived from those coefficients to judge the quality of the object under test. Accurate automatic inspection can thereby be achieved.
FIG. 1 is a block diagram of an electronic device according to an embodiment of the present invention. Referring to FIG. 1, the electronic device 100 includes a processor 110 and a storage device 120. The processor 110 is coupled to the storage device 120. The processor 110 is, for example, a central processing unit (CPU), a physics processing unit (PPU), a programmable microprocessor, an embedded control chip, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic controller (PLC), or another similar device.
The storage device 120 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, another similar device, or a combination of these devices. The storage device 120 stores one or more code fragments which, once installed, are executed by the processor 110 to carry out the steps of the defect determination method described below.
FIG. 2 is a flowchart of a method for determining defects based on image recognition according to an embodiment of the present invention. Referring to FIG. 2, in step S205, an interference image corresponding to an object under test is input into a plurality of trained neural network models to obtain a plurality of sets of Zernike coefficients. The object under test is, for example, a lens to be inspected. The lens is placed on the stage of an interferometer, illuminated, and photographed by the camera of the interferometer to obtain the interference image.
Here, predicting the Zernike coefficients with the image recognition capability of neural network models reduces the mathematical computation usually required for an interferometer and thereby improves measurement accuracy. In an embodiment, the neural network models include models based on a convolutional neural network (CNN), an autoencoder, and a generative adversarial network (GAN).
For example, the CNN, currently the most commonly used type of neural network, is composed of several layers: convolution layers, pooling layers, and fully connected layers. The number of convolution, pooling, and fully connected layers can each be one or more, and differs across architectures. The convolution layers extract features from the input image (the interference image). The pooling layers reduce the data size of the extracted features while retaining the essential information. The fully connected layers deepen the network and predict the answer (the Zernike coefficients). For example, the last fully connected layer can be set to 8 neurons, corresponding to 8 Zernike coefficients. This is only an example and is not a limitation; the number of Zernike coefficients in the final output can be adjusted as needed.
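As a minimal illustrative sketch (not part of the patent disclosure), such a regression CNN could be organized as follows; the layer sizes, the 128×128 grayscale input, and the use of PyTorch are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class ZernikeCNN(nn.Module):
    """Regresses a fixed number of Zernike coefficients from one interference image."""
    def __init__(self, num_coeffs: int = 8):
        super().__init__()
        # Convolution + pooling layers extract features from the interferogram.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layers deepen the network and predict the coefficients.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_coeffs),  # last layer: one neuron per Zernike coefficient
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = ZernikeCNN(num_coeffs=8)
coeffs = model(torch.randn(4, 1, 128, 128))  # 4 grayscale 128x128 interferograms -> shape (4, 8)
```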
The CNN model is trained in advance; the interference image acquired by the interferometer is then fed into the trained CNN model, which outputs the corresponding Zernike coefficients. The CNN model is trained on a large number of interference images and their known Zernike coefficients, so that the model learns to extract features from an input interference image automatically, and its parameters are adjusted to produce the best output (i.e., a set of Zernike coefficients).
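A correspondingly minimal training loop, assuming the ZernikeCNN sketch above and a hypothetical set of interferograms paired with known coefficients, might look like this (mean-squared error against the known Zernike coefficients):

```python
import torch
import torch.nn as nn

# Placeholder data: (interferogram batch, known Zernike coefficients); a real dataloader
# would iterate over many labeled interference images.
dataloader = [(torch.randn(4, 1, 128, 128), torch.randn(4, 8))]

model = ZernikeCNN(num_coeffs=8)                 # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                           # regression loss against the known coefficients

for epoch in range(10):
    for images, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
```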
The autoencoder is an unsupervised learning algorithm based on a multi-layer neural network: a piece of original data is input to the autoencoder, which outputs reconstructed data matching the original. Its architecture comprises an encoder and a decoder, which perform compression and decompression respectively, so that the output data represents the same information as the input data. The encoder contains one or more hidden layers and encodes the high-dimensional input vector into a low-dimensional vector; the decoder restores the low-dimensional vector of the hidden layer back to the original high-dimensional vector. Through training, the autoencoder ends up holding, in its hidden layer, a low-dimensional vector that represents the input data. Once training is complete, the decoder is removed and only the encoder remains; feeding an interference image into the trained encoder yields the corresponding set of Zernike coefficients. By training an encoder and a decoder together, the decoder helps the encoder learn while it learns to reconstruct the original data, and the resulting encoder, with better predictive ability, is used as the neural network model for predicting the coefficients.
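A minimal sketch of such an autoencoder is given below; the fully connected layer sizes, the 8-dimensional bottleneck, and the PyTorch framing are assumptions for illustration, and in practice the bottleneck would be tied to the Zernike coefficients during training as described above.

```python
import torch
import torch.nn as nn

class InterferogramAE(nn.Module):
    """Autoencoder whose low-dimensional bottleneck serves as the coefficient predictor."""
    def __init__(self, img_size: int = 128, code_dim: int = 8):
        super().__init__()
        d = img_size * img_size
        # Encoder: high-dimensional image -> low-dimensional code (the hidden-layer vector).
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d, 512), nn.ReLU(),
            nn.Linear(512, code_dim),
        )
        # Decoder: code -> reconstructed image; used only during training.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, d),
        )

    def forward(self, x: torch.Tensor):
        code = self.encoder(x)
        recon = self.decoder(code).view_as(x)
        return recon, code

ae = InterferogramAE()
recon, code = ae(torch.randn(4, 1, 128, 128))
# After training on reconstruction loss, the decoder is dropped and ae.encoder alone
# maps an interference image to its set of coefficients.
```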
The GAN includes a generator network and a discriminator network. The generator receives an interference image and outputs a mosaic image labeled as a fake image. Each block of the mosaic image is associated with one Zernike coefficient; for example, the mosaic image can be designed to contain 32 equal-sized blocks, each corresponding to one Zernike coefficient. The discriminator receives another mosaic image labeled as ground truth together with the mosaic image output by the generator and labeled as fake, and produces its result by pitting the two against each other: the discriminator learns to classify the generator's output as real or fake.
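The following sketch, again with assumed layer sizes and PyTorch as an illustrative choice, shows the two networks in skeleton form; reading the coefficients back out of the mosaic blocks and the adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an interferogram to a mosaic image whose blocks are tied to Zernike coefficients."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # 1-channel mosaic, labeled "fake"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Discriminator(nn.Module):
    """Classifies a mosaic image as real (ground truth) or fake (generator output)."""
    def __init__(self, img_size: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_size * img_size, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

g, d = Generator(), Discriminator()
fake_mosaic = g(torch.randn(4, 1, 128, 128))
p_real = d(fake_mosaic)                          # probability that each mosaic is real
```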
In one embodiment, three sets of Zernike coefficients are obtained using three trained neural network models based on a CNN, an autoencoder, and a GAN, respectively. This is only an example; neither the number of neural network models nor the algorithms they use is limited.
Next, in step S210, a fused coefficient set is obtained based on the plurality of sets of Zernike coefficients. For example, averaging, the median, the maximum, the minimum, a weighted average, or the like can be applied to the sets of Zernike coefficients to obtain the fused coefficient set. Taking averaging as an example, the corresponding Zernike coefficients obtained from all neural network models are summed and averaged, and the resulting averages form the fused coefficient set. Suppose three neural network models are used and each produces a set of 8 Zernike coefficients (N1~N8); then the three models' values of N1 are averaged, their values of N2 are averaged, and so on through N8, yielding 8 averaged Zernike coefficients. With the median, likewise, the median of the three models' N1, the median of their N2, ..., and the median of their N8 are taken, yielding 8 Zernike coefficients as the fused coefficient set. The maximum, minimum, and weighted average work analogously.
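A compact sketch of this statistical fusion, using NumPy and made-up coefficient values for three models, could be:

```python
import numpy as np

# coeffs[j, i]: i-th Zernike coefficient predicted by the j-th model (3 models x 8 coefficients).
coeffs = np.array([
    [0.20, 0.11, -0.05, 0.32, 0.07, -0.01, 0.15, 0.02],
    [0.30, 0.09, -0.04, 0.35, 0.05,  0.00, 0.14, 0.03],
    [0.50, 0.10, -0.06, 0.30, 0.06, -0.02, 0.16, 0.01],
])

fused_mean   = coeffs.mean(axis=0)        # element-wise average over the models
fused_median = np.median(coeffs, axis=0)
fused_max    = coeffs.max(axis=0)
fused_min    = coeffs.min(axis=0)
```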
In addition, a regression model can be built for each Zernike coefficient individually. For example, suppose the first coefficient predicted by the three neural network models is [0.2, 0.3, 0.5]; this vector is used as the input feature to train the regression model, and the regression model's output serves as the first fused coefficient of the fused coefficient set. The regression model can be a deep neural network (DNN), support vector regression (SVR), a Gaussian process, or a similar method.
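A per-coefficient fusion regressor might be sketched as follows, here with scikit-learn's SVR and hypothetical training values:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data for the first coefficient: each row holds the three models'
# predictions for one sample; y holds the reference value of that coefficient.
X_train = np.array([[0.20, 0.30, 0.50], [0.18, 0.22, 0.25], [0.40, 0.45, 0.38]])
y_train = np.array([0.33, 0.21, 0.41])

fuser_first = SVR().fit(X_train, y_train)        # one regressor per coefficient index
fused_first = fuser_first.predict([[0.2, 0.3, 0.5]])[0]
```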
Alternatively, a voting mechanism can be used to obtain the fused coefficient set from the plurality of sets of Zernike coefficients, combining the Zernike coefficients obtained by the different neural network models with weights. For example, suppose M trained neural network models are used and each produces a set of 8 Zernike coefficients, so that 8×M Zernike coefficients are obtained in total ($N_{1,1}$~$N_{1,M}$, $N_{2,1}$~$N_{2,M}$, ..., $N_{8,1}$~$N_{8,M}$). The i-th fused coefficient $N_i$ of the fused coefficient set is

$$N_i = \bigl(W_{i,1} \times N_{i,1} + W_{i,2} \times N_{i,2} + \cdots + W_{i,M} \times N_{i,M}\bigr)/M,$$

where $W_{i,j}$ is the weight of the i-th Zernike coefficient predicted by the j-th neural network model, and $N_{i,1}$~$N_{i,M}$ are the i-th Zernike coefficients predicted by the 1st to the M-th neural network models, respectively.
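The weighted voting above can be sketched directly, with hypothetical per-model weights:

```python
import numpy as np

# coeffs[j, i]: i-th Zernike coefficient from the j-th of M models; weights[j, i] plays the role of W_{i,j}.
coeffs  = np.array([[0.2, 0.1], [0.3, 0.1], [0.5, 0.2]])   # M = 3 models, 2 coefficients shown
weights = np.array([[1.2, 0.9], [1.0, 1.0], [0.8, 1.1]])   # hypothetical per-model weights

M = coeffs.shape[0]
fused = (weights * coeffs).sum(axis=0) / M                  # N_i = (sum over j of W_{i,j} * N_{i,j}) / M
```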
In step S215, a point spread function (PSF) and a modulation transfer function (MTF) are calculated from the fused coefficient set.
In optics, the orthogonality of the Zernike polynomials and their balanced representation of the classical aberrations give them minimum variance over a circular pupil, which is why they are widely used in the field. In an optical system, the plane containing the chief ray and the optical axis is the meridional plane, and the plane through the chief ray perpendicular to the meridional plane is the sagittal plane. An arbitrary point P in the unit-circle plane has exit-pupil coordinates $(x_p, y_p)$ and polar coordinates $(\rho, \theta)$, where $x_p$ lies in the sagittal plane, $y_p$ lies in the meridional plane, $\rho$ is the radial component, and $\theta$ is the angular coordinate.
The wavefront function $W(\rho, \theta)$ is expressed as a Zernike polynomial expansion:

$$W(\rho, \theta) = \bar{W} + \sum_{s} C_{s}\, Z_{s}(\rho, \theta),$$

where $n$ and $m$ denote, respectively, the powers of the radial coordinate and of the image-point height coordinate in the standard aberration function, $s$ is the summation index (for example, summing $n$ terms or $n-m$ terms), $\bar{W}$ is the mean wavefront, $Z_{s}(\rho, \theta)$ are the Zernike polynomials, and $C_{s}$ are the Zernike coefficients.
Next, the aberrated pupil function $\mathcal{P}(x_p, y_p)$ is obtained from the wavefront function $W(x_p, y_p)$, and the coherent transfer function $H(f_U, f_V)$ is in turn obtained from the aberrated pupil function.
The aberrated pupil function $\mathcal{P}(x_p, y_p)$ is:

$$\mathcal{P}(x_p, y_p) = \begin{cases} \exp\bigl[\, i\, k\, W(x_p, y_p) \,\bigr], & x_p^{2} + y_p^{2} \le a^{2}, \\ 0, & \text{otherwise}, \end{cases}$$

where $a$ is the exit pupil radius, $k$ is the wave number, and $i$ is the imaginary unit.
The coherent transfer function $H(f_U, f_V)$ is:

$$H(f_U, f_V) = \mathcal{P}\bigl(\lambda z f_U,\ \lambda z f_V\bigr),$$

where $f_U = U/\lambda z$, $f_V = V/\lambda z$, $\lambda$ is the wavelength, and $z$ is the pupil distance.
The point spread function $h(u, v)$ and the modulation transfer function $MTF(f_U, f_V)$ are then calculated from the coherent transfer function $H(f_U, f_V)$.
The point spread function $h(u, v)$ describes the response of an imaging system to a point source (object) and can be obtained through the following equation:

$$h(u, v) = \mathcal{F}\bigl\{ H(f_U, f_V) \bigr\},$$

where $\mathcal{F}$ denotes the Fourier transform and $u$, $v$ are the image-plane coordinates.
The modulation transfer function $MTF(f_U, f_V)$ is:

$$MTF(f_U, f_V) = \left| \frac{\mathcal{F}\bigl\{\, |h(u, v)|^{2} \,\bigr\}}{\mathcal{F}\bigl\{\, |h(u, v)|^{2} \,\bigr\}\Big|_{f_U = f_V = 0}} \right|.$$
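The following NumPy sketch illustrates this chain numerically: a wavefront is assembled from a few low-order Zernike terms (an illustrative subset, not necessarily the patent's exact basis), the aberrated pupil function is formed over a unit pupil with the wavefront expressed in waves, and the PSF and MTF are obtained by Fourier transforms on a sampled grid; the sampling and wavelength scaling are simplified.

```python
import numpy as np

def psf_mtf_from_zernike(coeffs, n_grid=256):
    """Sketch: build a wavefront from a few low-order Zernike terms, then derive PSF and MTF."""
    # Unit-circle pupil grid.
    y, x = np.mgrid[-1:1:1j * n_grid, -1:1:1j * n_grid]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)

    # A few illustrative Zernike polynomials (defocus, astigmatism, coma, spherical).
    basis = [
        2 * rho**2 - 1,                            # defocus
        rho**2 * np.cos(2 * theta),                # astigmatism 0/90
        rho**2 * np.sin(2 * theta),                # astigmatism 45
        (3 * rho**3 - 2 * rho) * np.cos(theta),    # coma x
        (3 * rho**3 - 2 * rho) * np.sin(theta),    # coma y
        6 * rho**4 - 6 * rho**2 + 1,               # spherical
    ]
    wavefront = sum(c * z for c, z in zip(coeffs, basis))   # W(x_p, y_p) in waves

    # Aberrated pupil function and intensity PSF (|FFT of pupil function|^2).
    P = pupil * np.exp(1j * 2 * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
    psf /= psf.sum()

    # MTF: normalized magnitude of the Fourier transform of the PSF.
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    mtf /= mtf.max()
    return psf, mtf

psf, mtf = psf_mtf_from_zernike([0.05, 0.02, -0.01, 0.03, 0.0, 0.01])
```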
In step S220, whether the object under test is defective is determined based on the fused coefficient set, the point spread function, and the modulation transfer function. For example, at least one of the fused coefficient set, the point spread function, and the modulation transfer function can be compared with one or more corresponding thresholds to determine the degree of defect or the grade of the object under test.
One or more thresholds are set for the modulation transfer function (MTF) and applied to it to determine the lens grade or degree of defect. The MTF-based determination works as follows. For the modulation transfer function, one or more thresholds are defined according to the product quality requirements, and the thresholds partition the MTF value space into intervals, one per quality level. Once the MTF value of a product under inspection has been obtained, its quality level is determined from the interval it falls in. In the model training stage, products of each quality level are sampled, the MTF value of each sample is obtained as described above, the MTF value is taken as the feature X and the quality level as Y, and a regression model is trained or derived. In the operational stage, the MTF value is predicted from the interference image of the object under test with the method described above, the MTF value is used as the regression model's X, and the quality level Y is obtained from the regression model. Here, the quality levels are set as continuous values and can be regarded as quality scores; for example, the quality level can range from 0 to 100.
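The interval-based grading can be sketched as follows, with hypothetical thresholds and level names:

```python
import numpy as np

# Hypothetical thresholds partitioning the MTF value space into quality-level intervals.
thresholds = [0.3, 0.5, 0.7, 0.85]
levels = ["major defect", "minor defect", "average", "good", "excellent"]

def quality_level(mtf_value: float) -> str:
    return levels[int(np.searchsorted(thresholds, mtf_value))]

print(quality_level(0.62))   # -> "average"
```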
Similarly, one or more thresholds can be set for the point spread function (PSF) and applied to it to determine the lens grade or degree of defect, or one or more thresholds can be set for the fused coefficient set to determine the lens grade or degree of defect. The determination with the PSF or the fused coefficient set follows the MTF-based determination described above and is not repeated here.
In other embodiments, the basis for quality evaluation can also combine the three approaches by assigning them weights, for example:

$$Q(I) = w_1 \times \mathrm{Statistics}(MTF(I)) + w_2 \times \mathrm{Statistics}(PSF(I)) + w_3 \times \mathrm{Statistics}(N_i),$$

where Statistics is a statistical function such as the maximum, minimum, mean, mode, variance, standard deviation, or sum; $w_1$, $w_2$, $w_3$ are weights; $I$ is the interference image under inspection; and $N_i$ are the Zernike coefficients obtained from the interference image $I$.
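A sketch of this composite score, taking the mean as the Statistics function and hypothetical weights, could be:

```python
import numpy as np

def quality_score(mtf, psf, zernike, w=(0.5, 0.3, 0.2)):
    """Weighted combination of the three criteria; the mean plays the role of the Statistics function."""
    return (w[0] * np.mean(mtf)
            + w[1] * np.mean(psf)
            + w[2] * np.mean(np.abs(zernike)))   # absolute coefficients: an illustrative choice

score = quality_score(np.array([0.72, 0.65]), np.array([0.40, 0.55]), np.array([0.05, -0.02, 0.01]))
```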
Alternatively, corresponding inspection models can be established and trained in advance for the modulation transfer function, the point spread function, and the fused coefficient set, respectively. At least one of the fused coefficient set, the point spread function, and the modulation transfer function is input into its corresponding inspection model to determine the degree of defect or the grade of the object under test. The inspection model may be a classification model trained with a support vector machine (SVM), a DNN, eXtreme Gradient Boosting (XGBoost), or the like, or a regression model trained with SVR, a Gaussian process, or the like.
Taking the inspection model corresponding to the MTF as an example, in the training stage of the inspection model, products of each quality level are sampled to obtain samples of every level, the MTF value of each sample is obtained as described above, the MTF value is taken as the feature X and the quality level as Y, and an inspection model (classification model) is trained or derived. In the operational stage of the inspection model, the MTF value is predicted from the interference image of the object under test using the method described above, set as the inspection model's X, and substituted into the inspection model to obtain the quality level Y. Here, the quality levels include "excellent", "good", "average", "minor defect", and "major defect". The inspection models corresponding to the PSF or to the fused coefficient set can be derived analogously from the above description.
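A minimal sketch of such an inspection model, using scikit-learn's SVC with hypothetical MTF features and labels, could be:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training set: one MTF-derived feature per sample and its labeled quality level.
X_train = np.array([[0.92], [0.81], [0.66], [0.44], [0.21]])
y_train = np.array(["excellent", "good", "average", "minor defect", "major defect"])

inspector = SVC(kernel="rbf").fit(X_train, y_train)
print(inspector.predict([[0.63]]))   # predicted quality level for a new MTF value
```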
FIG. 3 is an architecture diagram of defect determination according to an embodiment of the present invention. As shown in FIG. 3, the interference image is input into neural network models 1 to N to obtain N sets of Zernike coefficients. These Zernike coefficients are then fused into the final fused coefficient set, the point spread function and the modulation transfer function are calculated from the fused coefficient set, and the fused coefficient set, the point spread function, and the modulation transfer function are used to determine whether the lens (the object under test) is defective.
In summary, the present disclosure takes an optical viewpoint: the Zernike coefficients are inferred back from the interference image, the point spread function and the modulation transfer function are derived from the Zernike coefficients, and the quality of the object under test is judged accordingly.
100: electronic device
110: processor
120: storage device
S205~S220: steps of the method for determining defects based on image recognition
FIG. 1 is a block diagram of an electronic device according to an embodiment of the present invention.
FIG. 2 is a flowchart of a method for determining defects based on image recognition according to an embodiment of the present invention.
FIG. 3 is an architecture diagram of defect determination according to an embodiment of the present invention.
S205~S220: steps of the method for determining defects based on image recognition
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111126380A TWI819698B (en) | 2022-07-14 | 2022-07-14 | Method for determining defect and electronic apparatus |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111126380A TWI819698B (en) | 2022-07-14 | 2022-07-14 | Method for determining defect and electronic apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI819698B true TWI819698B (en) | 2023-10-21 |
| TW202403605A TW202403605A (en) | 2024-01-16 |
Family
ID=89857541
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW111126380A TWI819698B (en) | 2022-07-14 | 2022-07-14 | Method for determining defect and electronic apparatus |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI819698B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120019529A1 (en) * | 2006-11-19 | 2012-01-26 | Tom Kimpe | Display assemblies and computer programs and methods for defect compensation |
| US20200191751A1 (en) * | 2018-12-17 | 2020-06-18 | Shimadzu Corporation | Inspection apparatus and inspection method |
| US20210295485A1 (en) * | 2016-12-06 | 2021-09-23 | Mitsubishi Electric Corporation | Inspection device and inspection method |
- 2022-07-14 TW TW111126380A patent/TWI819698B/en active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120019529A1 (en) * | 2006-11-19 | 2012-01-26 | Tom Kimpe | Display assemblies and computer programs and methods for defect compensation |
| US20210295485A1 (en) * | 2016-12-06 | 2021-09-23 | Mitsubishi Electric Corporation | Inspection device and inspection method |
| US20200191751A1 (en) * | 2018-12-17 | 2020-06-18 | Shimadzu Corporation | Inspection apparatus and inspection method |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202403605A (en) | 2024-01-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110619618B (en) | Surface defect detection method and device and electronic equipment | |
| US10769761B2 (en) | Generating high resolution images from low resolution images for semiconductor applications | |
| CN112382382B (en) | Cost-sensitive integrated learning classification method and system | |
| CN115908842A (en) | Transformer Partial Discharge Data Enhancement and Recognition Method | |
| CN112132086B (en) | Multi-scale martensite microstructure aging and damage grading method | |
| TWI819698B (en) | Method for determining defect and electronic apparatus | |
| Ke et al. | Green coffee bean defect detection using shift-invariant features and non-local block | |
| CN120410558B (en) | Machine Learning-Based Method for Detecting Forming Characteristics and Assessing Quality of Titanium Alloy Forgings | |
| CN110750876A (en) | A bearing data model training and use method | |
| CN117617888B (en) | System and method for predicting myopic diopter | |
| Choudhary et al. | Determination of rate of degradation of iron plates due to rust using image processing | |
| CN120070401A (en) | Progressive industrial product surface defect detection method based on decoupling characterization | |
| Hu et al. | Detection of moldy cores in apples with near-infrared transmission spectroscopy based on wavelet and BP network | |
| KR102906864B1 (en) | Artificial intelligence apparatus for determining deterioration level based on image of transmission tower and method thereof | |
| CN111680741B (en) | Automatic debugging method of computer-aided interferometer based on deep learning | |
| CN212460580U (en) | Boiling phenomenon judgment device based on deep learning and optical reflection structure | |
| Taspinar | Classification of biscuit defect states and foreign objects using CNN-based features | |
| Venneti et al. | Amdnet: Age-related macular degeneration diagnosis through retinal fundus images using lightweight convolutional neural network | |
| JPH06235619A (en) | Measuring device for wavefront aberration | |
| Nurmalasari et al. | Retinal Fundus Images Classification to Diagnose the Severity of Diabetic Retinopathy using CNN | |
| Daigle et al. | Automatic detection of expanding HI shells in the Canadian galactic plane survey data | |
| CN116664928B (en) | Diabetic retinopathy grading method and system based on CNN and transducer | |
| CN120992888B (en) | Titanium plate performance analysis method and system | |
| Pedram et al. | Fault diagnosis of rotating machines based on the enhanced multi-scale convolutional neural network approach | |
| CN120495742A (en) | Weight metering automatic classification and calibration method and system based on image recognition |