Disclosure of Invention
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the invention provides a stainless steel surface defect detection method based on machine vision, which addresses the insufficient efficiency and accuracy of stainless steel surface defect detection in traditional schemes.
In order to solve the technical problems, the invention provides the following technical scheme:
The invention provides a machine vision-based stainless steel surface defect detection method, which comprises the steps of: collecting stainless steel surface images and performing image fusion on the collected multispectral surface images to generate a multispectral composite image; performing detail enhancement on the low-resolution image through a super-resolution reconstruction algorithm based on the multispectral composite image to obtain a high-resolution image set; locating defect area positions in the high-resolution image set through a defect detection algorithm to obtain a defect position data set; extracting historical and real-time image defect feature sets from the defect position data set through a feature extraction algorithm; constructing a defect classification model from the historical image defect feature set; inputting the real-time image defect feature set into the defect classification model for defect classification and evaluation to obtain a defect classification data set; and generating and distributing a defect detection report based on the defect classification data set.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the multispectral surface image comprises stainless steel surface images of visible light wave bands, near infrared wave bands and short-wave infrared wave bands.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the step of collecting stainless steel surface images and performing image fusion on the collected multispectral surface images to generate the multispectral composite image comprises the following specific steps of,
Using a multispectral imaging sensor to acquire images of the stainless steel surface and acquiring multispectral surface images;
Denoising and artifact removal processing is carried out on the multispectral surface image through a median filtering algorithm;
registering the processed multispectral surface images by a characteristic point matching and mutual information maximization method;
And synthesizing the registered multispectral surface images into a multispectral composite image by adopting a multispectral image fusion algorithm based on guided filtering, wherein the expression is as follows:
I(x, y) = (1 / Z(x, y)) · Σ_{i=1}^{N} e^{−λ(I_i(x, y) − μ_i)²} · I_i(x, y), Z(x, y) = Σ_{i=1}^{N} e^{−λ(I_i(x, y) − μ_i)²};
Wherein I(x, y) is the pixel value of the fused multispectral composite image at position (x, y), N is the number of spectra participating in the fusion, i is the index of a spectrum participating in the fusion, e is the natural base, λ is the image influence weighting parameter, I_i(x, y) is the pixel value of the i-th spectrum image at position (x, y), μ_i is the local mean of the i-th spectrum image, Z(x, y) is the normalization coefficient, x is the spatial abscissa of the image, and y is the spatial ordinate of the image.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the step of performing detail enhancement on the low-resolution image through a super-resolution reconstruction algorithm based on the multispectral composite image to obtain the high-resolution image set comprises the following specific steps of,
Decomposing the multispectral composite image into a low-resolution base layer tensor and a low-resolution texture layer tensor by adopting a nonlinear sparse tensor-based decomposition method, wherein the expression is as follows:
min_{B,S,E} ||B||_TV + α||S||_{1,2} + β||E||_F²  s.t.  T = B + S + E;
Wherein T is the tensor representation of the multispectral composite image, B is the low-resolution base layer tensor, S is the low-resolution texture layer tensor, E is the noise term, ||B||_TV is the total variation regularization of the base layer tensor, ||S||_{1,2} is the mixed sparse constraint of the texture layer tensor, ||E||_F is the Frobenius norm of the noise term, α is the low-resolution texture layer balance parameter, β is the noise term balance parameter, and s.t. denotes the constraint;
Constructing a multi-scale residual dense network model based on the low-resolution texture layer tensor, and performing super-resolution reconstruction on the low-resolution texture layer to obtain a high-resolution texture layer, wherein the expression is as follows:
S_HR = f_1(f_2(f_3(S)));
Wherein S_HR is the high-resolution texture layer after super-resolution reconstruction, f_3 is the multi-scale convolution operation, f_2 is the residual dense network operation, and f_1 is the sub-pixel convolution upsampling operation;
Performing detail interpolation based on non-parametric kernel regression on the low-resolution base layer tensor to obtain a high-resolution base layer, wherein the expression is as follows:
B_HR(x, y) = Σ_{(x', y')∈Ω} K(x − x', y − y') · B(x', y') / Σ_{(x', y')∈Ω} K(x − x', y − y');
Wherein B_HR(x, y) is the pixel value of the high-resolution base layer, Ω is the neighborhood range of the target pixel, x' is the abscissa of a neighborhood pixel, y' is the ordinate of a neighborhood pixel, K(x − x', y − y') is the Gaussian kernel function, and B(x', y') is the low-resolution base layer pixel value at the neighborhood pixel;
And carrying out weighted fusion on the high-resolution base layer and the high-resolution texture layer to generate a high-resolution image set.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the step of locating defect area positions in the high-resolution image set through a defect detection algorithm to obtain the defect position data set comprises the following specific steps of,
Enhancing the image contrast of the high-resolution image set by a histogram equalization method;
detecting edges in the high-resolution image set by using a Canny edge detection algorithm, and segmenting by using an Otsu threshold selection algorithm to highlight a defect area;
Optimizing the edge detection result by using dilation and erosion methods in morphological operations, and smoothing the defect region through opening and closing operations;
And extracting the outline of the defect area of the processed image to obtain a defect position data set.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the step of extracting historical and real-time image defect feature sets from the defect position data set using a feature extraction algorithm and constructing a defect classification model from the historical image defect feature set comprises the following specific steps of,
Calculating geometric features of the defect region based on the defect location dataset;
extracting surface texture features of the defect area by using a gray level co-occurrence matrix and a local binary pattern;
Acquiring color characteristics of the defect area through a color histogram and calculating a color mean variance;
Extracting frequency domain features of the defect area through discrete Fourier transform and discrete wavelet transform;
based on the geometric features, the surface texture features, the color features and the frequency domain features of the defect region, performing feature dimension reduction by using a principal component analysis algorithm to obtain an image defect feature set;
dividing the image defect feature set into a historical image defect feature set and a real-time image defect feature set through time sequence cutting;
Selecting an RBF kernel function, optimizing hyperparameters through grid search, and constructing a defect classification model based on a support vector machine;
Dividing the historical image defect feature set into a training set and a testing set, training the support vector machine classifier with the training set, and verifying the performance of the defect classification model through a cross-validation algorithm.
As a preferable scheme of the machine vision-based stainless steel surface defect detection method, the step of inputting the real-time image defect feature set into the defect classification model for defect classification and evaluation to obtain the defect classification data set comprises the following specific steps of,
Loading a trained defect classification model, and verifying by using a test set;
Inputting the real-time image defect feature set into the verified defect classification model, performing defect classification prediction, and outputting a category label and a confidence score;
and integrating the defect region classification result based on the category label and the confidence score to obtain a defect classification data set.
As a preferable scheme of the stainless steel surface defect detection method based on machine vision, the step of generating and distributing a defect detection report based on the defect classification data set comprises the following specific steps of,
Collecting defect data information from a defect classification data set, and sorting the defect data information into a structured data set;
Defining a defect detection report structure, and generating a defect detection report by combining the structured data set and filling a report template through an automation tool;
The defect detection report is stored in a local server and distributed by email.
In a second aspect, the invention provides a computer device comprising a memory and a processor, the memory storing a computer program, wherein the computer program when executed by the processor implements any of the steps of the machine vision based stainless steel surface defect detection method according to the first aspect of the invention.
In a third aspect, the present invention provides a computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements any step of the machine vision based stainless steel surface defect detection method according to the first aspect of the present invention.
The beneficial effects of the invention are as follows: by combining a multispectral imaging algorithm, a super-resolution reconstruction algorithm, a defect detection algorithm, a feature extraction algorithm and a defect classification model, the method achieves efficient and accurate detection. The multispectral image fusion algorithm improves image quality and provides high-quality input for subsequent detail enhancement; the super-resolution reconstruction algorithm recovers more detail from low-resolution images and improves image definition; the defect detection algorithm precisely locates defect areas, providing a reliable basis for subsequent feature extraction and classification; the feature extraction algorithm analyzes defects in depth to construct a more precise defect classification model, and combining historical and real-time data enables continuous optimization; defect classification and evaluation generate a detailed defect detection report, providing real-time and accurate data support for production and quality control. The invention improves the automation, precision and efficiency of stainless steel surface defect detection and provides further technical guarantees for quality control in industrial production.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Embodiment 1, referring to fig. 1 and 2, a first embodiment of the present invention provides a machine vision-based stainless steel surface defect detection method, comprising,
S1, collecting a stainless steel surface image, and performing image fusion on the collected multispectral surface image to generate a multispectral composite image.
Specifically, the method comprises the following steps:
S1.1, acquiring images of the stainless steel surface by using a multispectral imaging sensor to acquire multispectral surface images.
It should be appreciated that the multispectral imaging sensor is a high resolution near infrared/visible light camera and short wave infrared camera combination.
S1.1.1. the multispectral surface image comprises stainless steel surface images of visible light wave bands, near infrared wave bands and short wave infrared wave bands.
Specifically, the multispectral imaging sensor needs to capture images of the stainless steel surface in different spectral ranges at the same time, and mainly comprises a visible light band (400-700 nm), a near infrared band (700-1500 nm) and a short wave infrared band (1500-2500 nm).
And S1.2, denoising and artifact removal processing is carried out on the multispectral surface image through a median filtering algorithm.
Specifically, in the multispectral imaging process, noise or artifacts may occur due to factors such as uneven illumination, local defects of the sensor, and the like. The median filtering algorithm is a nonlinear filtering algorithm, and can effectively remove salt and pepper noise and other types of random noise, and meanwhile, the definition of the image edge is maintained. The filtering operation will select the median value in the neighborhood of the image pixel by pixel and replace the pixel value in the original image with this value, thus effectively smoothing the image and reducing noise.
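By way of non-limiting illustration, the median filtering of step S1.2 can be sketched as follows (a minimal Python/NumPy sketch; the function name median_filter and the 3×3 window are illustrative choices, not part of the claimed method):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k neighborhood (edge-replicated)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

In practice, an optimized library routine such as OpenCV's medianBlur would typically replace this explicit loop; the sketch only shows the pixel-wise median selection described above.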
And S1.3, registering the processed multispectral surface image by a characteristic point matching and mutual information maximization method.
Specifically, the feature point matching method finds out the alignment mode between the images by detecting the salient feature points (such as corner points, edges and the like) in the images and utilizing the geometric transformation relation between the feature points. The mutual information maximization is a registration method based on image gray distribution statistics, and can automatically calculate the similarity between two images, so that the mutual information is maximized through an optimization algorithm, and the optimal registration effect is achieved. By the method, multispectral images can be precisely aligned, and image deviation caused by different angles of imaging equipment or surface texture changes can be eliminated.
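The mutual information similarity used as the registration criterion can be illustrated with a minimal sketch (Python/NumPy; the bin count of 16 is an illustrative parameter, and a real registration loop would maximize this score over candidate geometric transforms):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two equally sized grayscale images."""
    # joint gray-level histogram of the two images
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    # KL divergence between joint and product of marginals
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Perfectly aligned images yield maximal mutual information; misregistration decorrelates the gray-level distributions and lowers the score.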
S1.4, adopting a multispectral image fusion algorithm based on guided filtering to synthesize the registered multispectral surface images into a multispectral composite image, wherein the expression is as follows:
I(x, y) = (1 / Z(x, y)) · Σ_{i=1}^{N} e^{−λ(I_i(x, y) − μ_i)²} · I_i(x, y), Z(x, y) = Σ_{i=1}^{N} e^{−λ(I_i(x, y) − μ_i)²};
Wherein I(x, y) is the pixel value of the fused multispectral composite image at position (x, y), N is the number of spectra participating in the fusion, i is the index of a spectrum participating in the fusion, e is the natural base, λ is the image influence weighting parameter, I_i(x, y) is the pixel value of the i-th spectrum image at position (x, y), μ_i is the local mean of the i-th spectrum image, Z(x, y) is the normalization coefficient, x is the spatial abscissa of the image, and y is the spatial ordinate of the image.
Preferably, guided filtering is an efficient image fusion algorithm capable of effectively suppressing inter-spectrum interference and is suitable for multispectral image fusion. Compared with traditional image fusion methods, it better preserves the edge information and details of the image, has lower computational complexity, and can meet real-time requirements. The weighting term can dynamically adjust the weight according to the quality of the different spectrum images, thereby guaranteeing the quality of the fused image and improving the comprehensive performance of the multispectral image, especially for the identification of micro defects.
Preferably, images of the stainless steel surface in visible light, near infrared and short wave infrared bands are obtained through a multispectral imaging algorithm, noise and artifacts are removed through a median filtering algorithm, and image quality is guaranteed. Image registration is carried out through a characteristic point matching and mutual information maximization method, so that accurate alignment of different spectrum images is ensured, and high-quality input is provided for subsequent processing. And a multispectral image fusion algorithm based on guide filtering is adopted to synthesize a multispectral composite image with high quality, so that the details and the edge information of the image are effectively reserved, and the comprehensive performance of the image is improved by dynamically adjusting a weighting item.
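A simplified sketch of the pixel-wise weighted fusion described in S1.4, in which each band's contribution is down-weighted where its value deviates from its local mean (Python/NumPy; the 3×3 local-mean window and the λ value are illustrative assumptions, and the guided-filtering refinement itself is omitted):

```python
import numpy as np

def local_mean(img, k=3):
    """Box-filter local mean mu_i over a k x k edge-replicated window."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def fuse_bands(bands, lam=0.01):
    """Fused pixel = sum_i exp(-lam*(I_i - mu_i)^2) * I_i / Z, Z the weight sum."""
    weights = [np.exp(-lam * (b - local_mean(b)) ** 2) for b in bands]
    z = np.sum(weights, axis=0)                      # normalization Z(x, y)
    return np.sum([w * b for w, b in zip(weights, bands)], axis=0) / z
```

Because the weights are normalized per pixel, the fused value is always a convex combination of the input bands.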
S2, carrying out detail enhancement on the low-resolution image through a super-resolution reconstruction algorithm based on the multispectral composite image to obtain a high-resolution image set.
Specifically, the method comprises the following steps:
S2.1, decomposing the multispectral composite image into a low-resolution base layer tensor and a low-resolution texture layer tensor by adopting a nonlinear sparse tensor-based decomposition method, wherein the expression is as follows:
min_{B,S,E} ||B||_TV + α||S||_{1,2} + β||E||_F²  s.t.  T = B + S + E;
Wherein T is the tensor representation of the multispectral composite image, B is the low-resolution base layer tensor, S is the low-resolution texture layer tensor, E is the noise term, ||B||_TV is the total variation regularization of the base layer tensor, ||S||_{1,2} is the mixed sparse constraint of the texture layer tensor, ||E||_F is the Frobenius norm of the noise term, α is the low-resolution texture layer balance parameter, β is the noise term balance parameter, and s.t. denotes the constraint.
Preferably, the multispectral composite image T is decomposed by the tensor decomposition method into a low-resolution base layer tensor B, a texture layer tensor S and a noise term E. The ||B||_TV term preserves the smoothness of the image and reduces noise, while the ||S||_{1,2} term aims to extract the detailed information of the image. By minimizing the loss function, the base layer and the texture layer of the low-resolution image are optimized, thereby realizing the image decomposition and obtaining low-resolution image components better suited to super-resolution reconstruction.
S2.2, constructing a multi-scale residual dense network model based on the low-resolution texture layer tensor, and performing super-resolution reconstruction on the low-resolution texture layer to obtain a high-resolution texture layer, wherein the expression is as follows:
S_HR = f_1(f_2(f_3(S)));
Wherein S_HR is the high-resolution texture layer after super-resolution reconstruction, f_3 is the multi-scale convolution operation, f_2 is the residual dense network operation, and f_1 is the sub-pixel convolution upsampling operation.
Preferably, using the low-resolution texture layer tensor S as input, a multi-scale residual dense network (MSRDN) is constructed for super-resolution reconstruction. The network comprises a plurality of convolution layers and residual modules, and can effectively learn the multi-scale characteristics of the texture layer and strengthen details: f_3 is the multi-scale convolution operation, which captures details at different levels in the image by adopting convolution kernels of different scales; f_2 is the residual dense network operation, which extracts deep-level features through dense connections and accelerates convergence; and f_1 is the sub-pixel convolution upsampling operation, which converts the low-resolution feature maps into a high-resolution image through convolutional upsampling. Super-resolution reconstruction of the texture layer through this network yields the high-resolution texture layer S_HR, providing high-resolution texture information for subsequent fusion.
Further, the f_3 multi-scale convolution operation specifically comprises the following steps:
The incoming low resolution texture layer S is processed through a series of convolution layers, each employing a different size convolution kernel.
Each convolution layer passes the low resolution texture layer tensor through the convolution operation while preserving its multi-scale features.
For each scale convolution kernel, different stride and kernel sizes are used to accommodate different sized details. For example:
Small-size convolution kernels (e.g., 3×3, 5×5) capture local detail and texture features;
Large-size convolution kernels (e.g., 7×7, 9×9) capture global information, such as large-scale textures and shapes.
The output of each scale convolution is subjected to an activation function process (e.g., reLU) to enhance feature expression.
The f_2 residual dense network operation specifically comprises the following steps:
In each residual module, the input is passed directly to subsequent layers through skip connections, which mitigates the vanishing gradient problem and accelerates convergence. Each residual module contains convolution operations, activation functions (e.g., ReLU) and batch normalization (Batch Normalization) operations.
The output of each layer is not only passed on to the next layer, but is also spliced with the output of all previous layers, thereby enhancing the information flow. In this way, the network is able to extract more features at each layer.
A multi-layer residual dense module is adopted, and a deep network is constructed by densely connecting each layer with the other layers. The output of each residual module is passed to the subsequent layers, and the final feature map is output through an activation function (ReLU or Leaky ReLU).
The f_1 sub-pixel convolution up-sampling operation specifically comprises the following steps:
in the convolution operation, a plurality of convolution cores are adopted to carry out convolution on the low-resolution image, so that a characteristic diagram of a plurality of channels is obtained.
The convolved multi-channel feature map is converted to a high resolution image using a pixel rearrangement algorithm (PixelShuffle).
The multiple channel information for each pixel in the image is rearranged into a new high resolution output.
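The pixel rearrangement step can be sketched in isolation (Python/NumPy; this reproduces the standard PixelShuffle layout, mapping C·r² low-resolution channels to C channels at r times the spatial resolution):

```python
import numpy as np

def pixel_shuffle(feat, r):
    """Rearrange a (C*r*r, H, W) feature stack into a (C, H*r, W*r) image."""
    c2, h, w = feat.shape
    c = c2 // (r * r)
    # split channels into (c, r, r), then interleave the r x r offsets spatially
    return (feat.reshape(c, r, r, h, w)
                .transpose(0, 3, 1, 4, 2)
                .reshape(c, h * r, w * r))
```

Each group of r² input channels thus supplies the r×r sub-pixel block of one output pixel, which is how sub-pixel convolution realizes upsampling without interpolation.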
S2.3, performing detail interpolation based on non-parametric kernel regression on the low-resolution base layer tensor to obtain a high-resolution base layer, wherein the expression is as follows:
B_HR(x, y) = Σ_{(x', y')∈Ω} K(x − x', y − y') · B(x', y') / Σ_{(x', y')∈Ω} K(x − x', y − y');
Wherein B_HR(x, y) is the pixel value of the high-resolution base layer, Ω is the neighborhood range of the target pixel, x' is the abscissa of a neighborhood pixel, y' is the ordinate of a neighborhood pixel, K(x − x', y − y') is the Gaussian kernel function, and B(x', y') is the low-resolution base layer pixel value at the neighborhood pixel.
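This Nadaraya-Watson style interpolation can be sketched as follows (Python/NumPy; the upscaling factor and Gaussian bandwidth sigma are illustrative parameters, and the neighborhood Ω is taken as the whole low-resolution grid for simplicity):

```python
import numpy as np

def kernel_regress_upsample(lr, scale=2, sigma=0.6):
    """Gaussian-kernel-weighted average of low-res samples at each high-res site."""
    h, w = lr.shape
    ys, xs = np.mgrid[0:h, 0:w]          # low-resolution sample grid
    H, W = h * scale, w * scale
    out = np.empty((H, W))
    for Y in range(H):
        for X in range(W):
            cy, cx = Y / scale, X / scale            # target pos in low-res coords
            k = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            out[Y, X] = (k * lr).sum() / k.sum()     # normalized kernel regression
    return out
```

Because the kernel weights are normalized, smooth regions are interpolated without ringing, which suits the base layer's low-frequency content.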
And S2.4, carrying out weighted fusion on the high-resolution base layer and the high-resolution texture layer to generate a high-resolution image set.
Specifically, the high-resolution base layer and the high-resolution texture layer are weighted and fused based on the local texture complexity, and the expression is:
I_HR(x, y) = ( |∇S_HR(x, y)| · S_HR(x, y) + |∇B_HR(x, y)| · B_HR(x, y) ) / ( |∇S_HR(x, y)| + |∇B_HR(x, y)| );
Wherein |∇S_HR(x, y)| is the gradient modulus of the high-resolution texture layer, |∇B_HR(x, y)| is the gradient modulus of the high-resolution base layer, and I_HR(x, y) is the high-resolution image; the results are integrated into the high-resolution image set I_HR.
Preferably, a method based on nonlinear sparse tensor decomposition is adopted to decompose the low-resolution image into a base layer and a texture layer, and global structure and local detail of the image are optimized. And (3) performing super-resolution reconstruction on the texture layer by using a multi-scale residual error dense network, extracting deep features by multi-scale convolution and dense connection, and enhancing details. And the base layer carries out detail interpolation through non-parameterized kernel regression, so that the resolution ratio of the base layer is improved. By a weighted fusion method based on local texture complexity, the contributions of the base layer and the texture layer are dynamically balanced, so that details are effectively enhanced and smoothness is maintained.
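The gradient-weighted fusion of the two layers can be sketched as follows (Python/NumPy; the gradient modulus serves as the local texture complexity, and the small eps term is an illustrative safeguard that falls back to the plain average in perfectly flat regions):

```python
import numpy as np

def grad_mod(img):
    """Gradient modulus |grad I| via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse_layers(base_hr, tex_hr, eps=1e-8):
    """Weight each layer by its own gradient modulus, pixel by pixel."""
    ws, wb = grad_mod(tex_hr), grad_mod(base_hr)
    num = ws * tex_hr + wb * base_hr + eps * 0.5 * (tex_hr + base_hr)
    return num / (ws + wb + eps)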
S3, positioning the position of the defect area from the high-resolution image set through a defect detection algorithm, and obtaining a defect position data set.
Specifically, the method comprises the following steps:
And S3.1, enhancing the image contrast of the high-resolution image set by a histogram equalization method.
Specifically, first, a gray level histogram of a high resolution image set is calculated, representing the frequency of occurrence of different gray levels. By accumulating the histograms, the mapping relation of gray levels is calculated, so that the gray level distribution of the image tends to be uniform, and the visibility of a low-contrast area is improved. And (3) applying the equalized gray value to the original image to generate an image with enhanced contrast.
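The histogram equalization described above can be sketched as follows (Python/NumPy; an 8-bit grayscale image is assumed):

```python
import numpy as np

def hist_equalize(img):
    """Map gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]          # first occupied gray level
    span = max(int(cdf[-1] - cdf_min), 1)         # avoid division by zero
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255).astype(np.uint8)
    return lut[img]                               # apply lookup table per pixel
```

The lookup table stretches the occupied gray-level range toward the full [0, 255] interval, which is exactly the contrast gain exploited in S3.1.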
S3.2, detecting edges in the high-resolution image set by using a Canny edge detection algorithm, and segmenting by using an Otsu threshold selection algorithm to highlight a defect area.
Specifically, a Canny edge detection algorithm is used to calculate gradient values of the image and detect edges in the image. The Canny algorithm identifies edges in the image by gaussian filtering, gradient computation, non-maximum suppression, and dual-threshold segmentation. Based on the edge detection result, an Otsu threshold segmentation method is applied, an optimal threshold is automatically calculated, and the image is segmented into a foreground and a background. The Otsu method automatically selects a threshold by maximizing the inter-class variance, and effectively distinguishes a defective region. And highlighting defect areas which are potential defect positions through the segmented binary image.
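The Otsu threshold selection used in this step can be sketched as follows (Python/NumPy; the sketch covers only the threshold search by maximum between-class variance, with the Canny stage assumed to come from a library):

```python
import numpy as np

def otsu_threshold(img):
    """Gray level maximizing between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = p.cumsum()                        # class-0 probability up to t
    mu = (p * np.arange(256)).cumsum()        # first moment up to t
    mu_t = mu[-1]                             # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

Pixels above the returned threshold form the foreground mask in which potential defect areas are highlighted.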
S3.3, optimizing the edge detection result by using dilation and erosion in morphological operations, and smoothing the defect region through opening and closing operations.
Specifically, the dilation operation expands the edge area, fills small gaps on the edge and enhances the visibility of the defect area. Dilation uses a structural element (e.g., a 3×3 matrix) to extend the pixel values near the edge to the surrounding area covered by the structural element.
The erosion operation shrinks the edge area, removing small noise and irregular edges and refining the defect area. The pixel values around the edge are shrunk using the structural element.
The opening operation combines erosion and dilation: erosion is performed first and then dilation, removing small noise points and smoothing the edge of the defect area.
The closing operation performs dilation first and then erosion, filling small gaps in the defect area and further smoothing it.
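These four operations can be sketched on a binary mask as follows (Python/NumPy; a 3×3 square structuring element is assumed, and pixels outside the image are treated as background for dilation):

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(mask, pad, mode="constant")
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask, k=3):
    """Erosion expressed as dilation of the complement (duality)."""
    return ~dilate(~mask, k)

def opening(mask, k=3):
    return dilate(erode(mask, k), k)   # erosion then dilation: removes specks

def closing(mask, k=3):
    return erode(dilate(mask, k), k)   # dilation then erosion: fills gaps
```

Opening suppresses isolated noise pixels smaller than the structuring element, while closing fills comparably small holes inside defect regions, matching the smoothing behavior described above.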
And S3.4, carrying out contour extraction on the defect area of the processed image to obtain a defect position data set.
Specifically, a contour extraction algorithm (such as the findContours function of OpenCV) is used to extract the contours of the defect regions from the binary image. The contour extraction algorithm groups connected regions in the image into contours based on edge continuity. Defect contours are then screened according to characteristics such as contour size and shape; minimum and maximum area thresholds are set to remove regions that are too small or too large. Coordinate information of each defect contour is extracted, including its location (e.g., center coordinates, bounding box position) and other relevant features (e.g., area, perimeter, shape). The extracted defect position data are stored in JSON format, facilitating subsequent defect analysis, statistics and report generation.
In summary, the image contrast of the high-resolution image set is first enhanced through histogram equalization so that low-contrast areas become more visible; Canny edge detection and Otsu threshold segmentation then automatically identify and highlight the defect areas; morphological operations (dilation, erosion, opening and closing) optimize the edge detection result and refine the shape of the defect areas; finally, a contour extraction algorithm obtains an accurate defect position data set, providing reliable data for subsequent analysis and repair. The process is automatic, accurate and efficient, and provides strong technical support for industrial inspection, quality control and related fields.
And S4, extracting historical and real-time image defect feature sets from the defect position data set by using a feature extraction algorithm, and constructing a defect classification model according to the historical image defect feature sets.
Specifically, the method comprises the following steps:
s4.1, calculating geometric features of the defect area based on the defect position data set.
Specifically, based on the acquired defect location data set, geometric features of each defect region are calculated, including area, perimeter, aspect ratio, shape factor, circularity, and the like.
Area: the total number of pixels in the defect area, reflecting the size of the defect.
Perimeter: the boundary length of the defect area, calculated through the contour extraction algorithm, reflecting the complexity of the defect outline.
Aspect ratio: the width-to-height ratio of the defect region, which helps to identify the shape of the defect, e.g., whether it is bar-shaped or circular.
Shape factor: the shape regularity of the defect, calculated from the ratio of perimeter to area, commonly used to distinguish different types of defects.
Circularity: a measure of whether the shape of the defect area is nearly circular; defects with high circularity may correspond to particular defect types.
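A minimal sketch of computing such geometric features from a binary defect mask (Python/NumPy; the pixel-count perimeter and the circularity formula 4πA/P² are illustrative discrete approximations):

```python
import numpy as np

def geometric_features(mask):
    """Area, boundary length, and circularity 4*pi*A/P^2 of a binary region."""
    area = int(mask.sum())
    # boundary: foreground pixels with at least one 4-connected background neighbor
    p = np.pad(mask, 1, mode="constant")
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    boundary = mask & ~interior
    perimeter = int(boundary.sum())
    circ = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return area, perimeter, circ
```

Library routines (e.g., contour-based arc length) give smoother perimeter estimates; the sketch only illustrates how the three quantities relate.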
And S4.2, extracting the surface texture characteristics of the defect area by using the gray level co-occurrence matrix and the local binary pattern.
Specifically, the gray level co-occurrence matrix (GLCM) describes the spatial relationship of the gray values of the image; the gray level co-occurrence matrix of the defect area is calculated and common texture features such as contrast, correlation, energy and homogeneity are extracted.
The local binary pattern (LBP) performs binary pattern coding of the local area around each pixel to extract the texture information of the region; LBP effectively captures texture features and is particularly suitable for images with complex contrast and texture variation.
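The basic 8-neighbor LBP code can be sketched as follows (Python/NumPy; the neighbor ordering and the >= comparison convention are illustrative choices, and only interior pixels are coded):

```python
import numpy as np

def lbp_image(img):
    """8-neighbor local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]                                   # center pixels
    # neighbors clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit         # set bit if neighbor >= center
    return code
```

The histogram of these codes over the defect region is what serves as the LBP texture descriptor.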
S4.3, acquiring the color characteristics of the defect area through a color histogram and calculating the color mean and variance.
Specifically, the color histogram is computed over the defect region to describe the distribution of colors in the image; the analysis may use the RGB color space or another color space (e.g., HSV).
The color mean and variance of the defect region are computed to quantify the color uniformity and the variation of the color distribution, which is suitable for identifying defects with high color contrast.
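A minimal sketch of the color features, assuming an RGB image array; an HSV variant would simply convert the color space first.

```python
import numpy as np

def color_features(img, bins=16):
    """Per-channel normalised histogram, mean, and variance
    for an RGB defect region (uint8 image, H x W x 3)."""
    feats = {}
    for ch, name in enumerate("RGB"):
        values = img[..., ch].ravel()
        hist, _ = np.histogram(values, bins=bins, range=(0, 256))
        feats[name] = {
            "hist": hist / hist.sum(),   # colour distribution
            "mean": float(values.mean()),
            "var": float(values.var()),
        }
    return feats
```

Zero variance in every channel indicates a perfectly uniform region, the baseline against which color-contrast defects stand out.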
And S4.4, extracting the frequency domain characteristics of the defect area through discrete Fourier transform and discrete wavelet transform.
Specifically, discrete Fourier Transform (DFT) converts an image from a spatial domain to a frequency domain, analyzes frequency components in the image, and captures high-frequency and low-frequency features of the image.
The discrete wavelet transform (DWT) applies a multi-scale wavelet decomposition to the defect region and extracts frequency-domain features at different scales. The DWT effectively captures local frequency information of the image and adapts well to the identification of different defect types.
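The two transforms can be sketched as follows. The Haar wavelet here is a minimal stand-in for a general wavelet library (e.g., PyWavelets) and assumes an even-sized image; deeper decompositions would iterate on the approximation band `ll`.

```python
import numpy as np

def frequency_features(img):
    """Frequency-domain features via 2-D DFT and a one-level Haar DWT."""
    # DFT: magnitude spectrum, zero frequency shifted to the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))

    # One-level 2-D Haar DWT (image dimensions assumed even)
    a = img.astype(float)
    ll = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4  # approximation
    lh = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 4  # horizontal detail
    hl = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 4  # vertical detail
    hh = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 4  # diagonal detail
    return spectrum, (ll, lh, hl, hh)
```

For a constant region all detail bands vanish and the spectrum concentrates at the DC component, which is the low-frequency/high-frequency separation the step relies on.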
And S4.5, performing feature dimension reduction by using a principal component analysis algorithm based on the geometric features, the surface texture features, the color features and the frequency domain features of the defect region to obtain an image defect feature set.
Specifically, the extracted geometric, surface texture, color, and frequency-domain features of the defect region are concatenated into a high-dimensional feature vector, and a principal component analysis (PCA) algorithm is used for feature dimension reduction, extracting the main features and retaining the core information of the data while reducing its dimensionality. PCA maps the original feature space to a new space through a linear transformation, retaining the principal components with the largest data variance and removing redundant information, thereby improving the efficiency and accuracy of the defect classification model.
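The PCA step described above can be sketched with an SVD-based projection (equivalent in spirit to scikit-learn's `PCA`): centre the feature vectors, find the directions of maximal variance, and keep the leading components.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components.

    X: (n_samples, n_features) matrix of concatenated defect features.
    Returns the reduced data and the explained-variance ratio per
    retained component.
    """
    Xc = X - X.mean(axis=0)                  # centre each feature
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()    # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

When the features are highly redundant, as the text anticipates, a single component can already carry nearly all of the variance.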
S4.6, dividing the image defect feature set into a historical image defect feature set and a real-time image defect feature set through time sequence cutting.
Specifically, the defect feature set is divided into historical data and real-time data according to the time sequence. The historical image defect feature set comprises defect feature data in a certain past time period and is used for training and constructing a defect classification model, and the real-time image defect feature set comprises data of a latest image and is used for predicting the defect classification model in real time.
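The time-sequence cut reduces to a timestamp comparison against a cutoff. A minimal sketch, with a hypothetical record layout (`timestamp` key) assumed for illustration:

```python
from datetime import datetime

def split_by_time(records, cutoff):
    """Split feature records into a historical set (before the cutoff,
    used for model training) and a real-time set (at or after the
    cutoff, used for live prediction)."""
    history = [r for r in records if r["timestamp"] < cutoff]
    realtime = [r for r in records if r["timestamp"] >= cutoff]
    return history, realtime
```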
S4.7, selecting an RBF kernel function, optimizing hyperparameters through grid search, and constructing a defect classification model based on a support vector machine.
Specifically, a support vector machine (SVM) algorithm is used to classify defects, and a radial basis function (RBF) kernel is selected because the RBF kernel can effectively handle nonlinear data. The grid search method is used to optimize the hyperparameters of the defect classification model, such as C (the penalty parameter) and γ (the kernel width). The grid search traverses multiple hyperparameter combinations and selects the combination with the best classification performance.
Further, the penalty parameter C controls the classifier's tolerance of misclassification. A smaller C value means higher tolerance of misclassification and a simpler defect classification model (higher bias, lower variance), while a larger C value means lower tolerance of misclassification and a more complex model (lower bias, higher variance).
The kernel width γ determines the reach of each training sample in the high-dimensional feature space. If γ is too large, the defect classification model may overfit (low bias, high variance); if γ is too small, it may underfit (high bias, low variance).
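The grid search over C and γ maps directly onto scikit-learn's `GridSearchCV` with an RBF-kernel `SVC`. A sketch on synthetic data standing in for the historical defect feature set (the grid values are illustrative, not the claimed settings):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the historical defect feature set
# (3 classes, e.g. crack / blemish / corrosion)
X, y = make_classification(n_samples=200, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

# Traverse C / gamma combinations; best_estimator_ is the tuned model
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
best_model = search.best_estimator_
```

`search.best_params_` then reports the C/γ pair with the best cross-validated classification score, which is the selection rule the step describes.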
And S4.8, dividing the historical image defect feature set into a training set and a test set, training the support vector machine classifier with the training set, and verifying the performance of the defect classification model with a cross validation algorithm.
Specifically, the historical image defect feature set is divided into a training set and a test set, with 80% of the data used for training and 20% for testing. The performance of the defect classification model is evaluated through metrics such as accuracy, precision, recall, and F1 score, and its stability and generalization capability are assessed with k-fold cross validation. Repeating the training and validation process reduces the risk of overfitting the defect classification model.
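The 80/20 split, the evaluation metrics, and the k-fold check can be sketched with scikit-learn; the data and hyperparameters below are illustrative placeholders for the historical feature set and the tuned model.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 80% training / 20% testing, as specified above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1, gamma=0.1).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Accuracy, precision, recall, and F1 on the held-out test set
accuracy = accuracy_score(y_test, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="macro")

# k-fold cross validation for stability and generalization
cv_scores = cross_val_score(clf, X, y, cv=5)
```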
Preferably, the shape, surface texture, color distribution, and frequency information of defects are comprehensively described by calculating their geometric, texture, color, and frequency-domain features, providing multidimensional feature support for the defect classification model. Principal component analysis is then used for feature dimension reduction, reducing redundant information and improving the efficiency of the defect classification model. The time-sequence cutting method separates historical data from real-time data, giving the system dynamic adaptability. A support vector machine performs the defect classification, with hyperparameters optimized through grid search to ensure the stability and high performance of the defect classification model. The generalization capability of the defect classification model is verified through cross validation, further improving detection precision and reliability.
S5, inputting the real-time image defect feature set into a defect classification model, performing defect classification and evaluation, and obtaining a defect classification data set.
Specifically, the method comprises the following steps:
S5.1, loading the trained defect classification model and verifying it with the test set.
Specifically, the trained defect classification model is loaded through PyTorch, the defect classification model is rapidly verified by using the pre-prepared test set data, the performance of the defect classification model on the test data is evaluated, and the effectiveness and the stability of the defect classification model in a real scene are ensured.
S5.2, inputting the real-time image defect feature set into the verified defect classification model, performing defect classification prediction, and outputting a category label and a confidence score.
Specifically, the defect classification model predicts the type of defect based on the input features and generates corresponding type labels. Category labels include "cracks", "blemishes", "corrosion", and the like. In addition, the defect classification model outputs a confidence score that indicates the confidence level of the classification result. The higher the confidence score, the more confident the defect classification model is in the predicted outcome.
And S5.3, integrating the defect region classification result based on the category label and the confidence score to obtain a defect classification data set.
Specifically, the category label and confidence score are associated with the image ID or unique identifier in the defect region classification result, and the classification result, combined with the defect region, is stored as a JSON-format file.
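Integrating the classification results into a JSON record set can be sketched as follows; the record fields and the `build_classification_records` helper are illustrative (they mirror the fields named in the text, with a bounding-box region assumed).

```python
import json
from datetime import datetime, timezone

def build_classification_records(image_id, regions, labels, scores):
    """Associate each defect region with its category label and
    confidence score under the image's unique identifier."""
    return [
        {
            "defect_id": f"{image_id}-{i:03d}",
            "image_id": image_id,
            "region": region,                 # e.g. bounding box [x, y, w, h]
            "label": label,
            "confidence": round(float(score), 4),
            "detected_at": datetime.now(timezone.utc).isoformat(),
        }
        for i, (region, label, score) in enumerate(zip(regions, labels, scores))
    ]

records = build_classification_records(
    "IMG_0042",
    regions=[[10, 20, 30, 15], [55, 60, 12, 12]],
    labels=["crack", "corrosion"],
    scores=[0.94, 0.81],
)
payload = json.dumps(records, indent=2)   # JSON-format file content
```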
Preferably, through the steps, the defect classification model classifies and evaluates on the basis of the real-time image defect feature set, and the effectiveness and stability of the defect classification model on the test set are ensured. The real-time defect classification prediction can output class labels and confidence scores, and provides quick feedback for practical application. By associating the classification result with the image ID and storing the classification result in a JSON format, the standardization and systemization of data are ensured, and the subsequent analysis, report generation and fault tracking are facilitated.
And S6, generating and distributing a defect detection report based on the defect classification data set.
Specifically, the method comprises the following steps:
S6.1, collecting defect data information from the defect classification data set and sorting the defect data information into a structured data set.
Specifically, defect data information including the type, location, confidence score, detection time, etc. of the defect is extracted from the generated defect classification dataset. The record for each defect should include a defect ID, a category label, a defect location area, a confidence score, an image ID or file name, a detection timestamp.
And S6.2, defining a defect detection report structure, and generating a defect detection report by combining the structured data set and filling a report template through an automation tool.
Specifically, a defect detection report template is defined according to business requirements; the report comprises the following core parts:
Report title: e.g., "Defect Inspection Report", with the associated project name or image set number.
Detection summary: a brief description of the detection task, its purpose, and the images and defect types involved.
Defect analysis: the detection results summarized by category, listing the details of each defect, including defect category, location, and confidence score.
Defect statistics: summary data such as the total number of detected defects and the count and severity distribution per category.
Defect example images: defect images embedded in the report with the defect areas marked, as required.
Suggestions and handling: processing suggestions or guidance for follow-up work.
The automation tool refers to the Jinja template engine or the ReportLab library in Python; the script automatically reads the structured data set, fills each part of the report according to the defect classification and statistical data, and dynamically generates charts, tables, and example images to enhance the visual effect of the report.
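The template-filling step can be sketched with the standard library's `string.Template` as a lightweight stand-in for the Jinja engine named above (a Jinja version would use `{{ placeholders }}` instead of `$placeholders`); the template text and `generate_report` helper are illustrative.

```python
from collections import Counter
from string import Template

REPORT_TEMPLATE = Template("""\
Defect Inspection Report - $batch
Summary: $total defects detected across $images image(s).
Defect statistics by category:
$stats
Suggestion: $suggestion
""")

def generate_report(batch, records, suggestion):
    """Fill the report template from the structured defect data set."""
    counts = Counter(r["label"] for r in records)
    stats = "\n".join(f"  {label}: {n}" for label, n in sorted(counts.items()))
    return REPORT_TEMPLATE.substitute(
        batch=batch,
        total=len(records),
        images=len({r["image_id"] for r in records}),
        stats=stats,
        suggestion=suggestion,
    )
```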
And S6.3, storing the defect detection report in a local server, and distributing the defect detection report through an E-mail.
Specifically, defect detection reports are stored in a database, and emails are sent to the relevant personnel or teams via Python's smtplib library.
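Distribution via smtplib can be sketched as follows; the message is assembled with the standard library's `email.message.EmailMessage`, and the host, port, and addresses are placeholders (a production setup would add TLS and authentication).

```python
import smtplib
from email.message import EmailMessage

def build_report_email(report_text, sender, recipients):
    """Assemble the defect-report email for smtplib delivery."""
    msg = EmailMessage()
    msg["Subject"] = "Stainless Steel Defect Inspection Report"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(report_text)
    return msg

def send_report(msg, host="localhost", port=25):
    """Send the message; use SMTP_SSL or starttls() with real
    credentials in production."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```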
Preferably, by extracting defect data information from the defect classification data set and sorting it into a structured data set, traceability and integrity of the defect data is ensured. And defining and filling a report template, and generating a defect detection report by an automation tool, so that errors and time delay in manual processing are eliminated, and the generation efficiency and consistency of the report are improved. The report template is automatically filled, the chart and the table are dynamically generated, the visual effect of the report is improved, and the user can conveniently understand and analyze the detection result. By storing the report at a local server and distributing it using email, timely sharing and efficient distribution of the report is ensured.
The embodiment also provides a computer device applicable to the machine vision-based stainless steel surface defect detection method, comprising a memory and a processor, wherein the memory is used for storing computer-executable instructions, and the processor is used for executing the computer-executable instructions to implement the machine vision-based stainless steel surface defect detection method provided by the embodiment.
The computer device may be a terminal comprising a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, an operator network, NFC (near field communication), or other technologies. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
The present embodiment also provides a storage medium having a computer program stored thereon, which, when executed by a processor, implements the machine vision-based stainless steel surface defect detection method set forth in the above embodiments. The storage medium may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
In summary, the invention provides an efficient and accurate stainless steel surface defect detection method by combining a multispectral imaging algorithm, a super-resolution reconstruction algorithm, a defect detection algorithm, a feature extraction algorithm, and a defect classification model. The multispectral image fusion algorithm improves image quality and provides high-quality input for subsequent detail enhancement; the super-resolution reconstruction algorithm recovers more details from low-resolution images and improves image clarity; the defect detection algorithm precisely locates defect regions, providing a reliable basis for subsequent feature extraction and classification; the feature extraction algorithm analyzes defects in depth to construct a more precise defect classification model, with continuous optimization achieved by combining historical and real-time data; and defect classification and evaluation generate a detailed defect detection report, providing real-time and accurate data support for production and quality control. The invention improves the automation, precision, and efficiency of stainless steel surface defect detection and provides a further technical guarantee for quality control in industrial production.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.