
CN114445365B - Banknote printing quality inspection method based on deep learning algorithm - Google Patents


Info

Publication number: CN114445365B (granted from application CN202210089577.9A)
Authority: CN (China)
Prior art keywords: defect, image, defects, waste, deep learning
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Current assignee: Shenzhen Zhongchaokexin Co ltd (also the original assignee and applicant)
Other versions: CN114445365A (Chinese, zh)
Inventors: 张绍兵, 付茂栗, 吴俊, 张洋, 王斌, 夏小东, 赵伟君, 李腾蛟, 祝文培, 魏麟, 王觅
Priority: CN202210089577.9A


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30144Printing quality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A banknote printing quality inspection method based on a deep learning algorithm comprises: segmenting the defects in an image with a defect segmentation module, so that defects are detected against the background image and a defect binary map b is obtained; inputting the binary map b into a blob-analysis-based waste-judging module that performs connected-domain analysis, extracts the features of each connected domain and, according to set conditions, judges whether each defect is a false defect or a real defect; and passing real-waste images to a recognition and classification module that identifies and classifies the defect type. The invention detects printing defects with a segmentation network model and, combined with blob analysis, separates machine-rejected sheets into good products and waste, effectively reducing the large number of false alarms caused by factors such as unstable imaging; a classification network model then counts waste images, automatically assigning each to a production process and one of several waste types. This effectively reduces staffing, improves production quality and lowers production cost.

Description

Banknote printing quality inspection method based on deep learning algorithm
Technical Field
The invention relates to the fields of artificial intelligence and defect detection and recognition, and in particular to a banknote printing quality inspection method based on a deep learning algorithm.
Background
During banknote printing, various defects such as distorted pattern colour shade, ink stains, missing print, scratches and overprint deviation can occur under the influence of the printing process, mechanical precision and random factors. At present, banknote printing quality detection is affected by imaging conditions, overprint deviation and similar factors and produces a large number of false alarms; printing involves many processes, and printing defects are complex in form and variable in position, so traditional algorithms classify and grade defects poorly, and banknote printing enterprises must rely on large numbers of experienced inspectors checking manually by eye.
Disclosure of Invention
The invention provides a banknote printing quality inspection method based on a deep learning algorithm, which aims to solve at least one of the above technical problems.
To solve the above problems, one aspect of the present invention provides a banknote printing quality inspection method based on a deep learning algorithm, comprising:
Step 1, segmenting the defects in an image with a defect segmentation module, so that defects are detected against the background image and a defect binary map b is obtained;
Step 2, inputting the defect binary map b into a blob-analysis-based waste-judging module to analyse connected domains, extracting the features of each connected domain, and judging according to set conditions whether each defect is a false defect or a real defect;
Step 3, passing the real-waste image to a real-waste image recognition and classification module to identify and classify the defect type.
Preferably, step 1 includes: if no defect exists against the background image, the final output defect binary map b is an all-0 image and the process ends; if a defect exists, the module segments the defective area and locates the defect, and the output binary map b has pixel value 1 at positions corresponding to the defect and 0 in all non-defective areas.
Preferably, the defect segmentation module segments the defects in the image by:
Step a1, data collection: collecting defect sample data sets from different machines and different product lines through the traditional detection system;
Step a2, data cleaning and labelling: first removing noise data from the collected data, then labelling defect positions at pixel level to generate a label image, and applying enhancement transformations to each sample containing a defect so as to expand the sample data set;
Step a3, constructing and training a network model: training the established deep learning model, which contains convolution layers and fully connected layers, with the enhancement-transformed training sample data set; the model can be described by the following formulas:
y=f(x;θ)
y(u,v)=p(x(u,v))
where f denotes the built network model, x the input image with defects, θ the model parameters, y the output probability map, (u, v) the pixel position index in the image, and p the probability that a pixel is a defect, so that p(x(u, v)) is the probability that pixel (u, v) is a defect;
The training process can be regarded as iteratively solving an optimization problem of the form:
θ* = argmin_θ L(f(x; θ), gt)
where L is a segmentation loss and gt is the label image from step a2;
the optimal parameters θ learned in training establish the relationship between the defective pixels in the training sample data set and the defect probability, so that during forward inference the probability that each pixel of the current image is defective is computed with the optimal parameters θ, and a probability map of the same resolution as the input image is output;
Step a4, model deployment and post-processing: deploying the optimal model parameters trained in step a3 on the field detection server; an image with defects is input, the corresponding defect probability map is obtained by forward inference, and the probability map is then thresholded with a set threshold t to obtain the final defect binary map b. The process can be described by:
b(u, v) = 1 if y(u, v) ≥ t, and b(u, v) = 0 otherwise.
Preferably, the model comprises 36 convolution units, 6 deconvolution units and several basic connection units, each unit being a different combination, according to its function, of convolution, deconvolution, batch normalization, nonlinear activation, fully connected, feature aggregation and pooling layers.
Preferably, the blob-analysis-based waste-judging module performs connected-domain analysis as follows:
Step b1, analysing the information of each defect blob: first performing connected-domain analysis on map b and extracting, for each connected domain, its information (including area, perimeter and centroid) together with information about the corresponding defect position;
Step b2, setting conditions on the data in the feature vector according to site conditions and customer requirements, mainly by setting thresholds; this can be done for each feature in the feature vector, and the conditions on all features are finally judged together: if they are satisfied the defect is a real reject, otherwise it is a false reject.
Preferably, in step b2, thresholds are set on the area feature of the feature vector: an area greater than t1 satisfies condition 1, an area between t2 and t1 satisfies condition 2, and an area less than t2 satisfies condition 3, where t2 < t1.
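The three-way threshold test above can be sketched as a small helper. The threshold values t1 and t2 below are illustrative placeholders, not values given in the patent, and the handling of the exact boundary values is an assumption:

```python
def area_condition(area, t1=100.0, t2=20.0):
    """Map a blob's area feature to the condition it satisfies (t2 < t1).

    Returns 1 if area > t1, 2 if t2 <= area <= t1, 3 if area < t2.
    Boundary values are treated as inclusive of condition 2 in this sketch.
    """
    if area > t1:
        return 1
    if area >= t2:
        return 2
    return 3
```

In practice each feature of the blob feature vector would get its own such test, and the per-feature results would be combined into the final real-reject / false-reject decision.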
Preferably, the real-waste image recognition and classification module identifies and classifies defect types as follows:
step c1, collecting data by the same method as step a1;
step c2, data cleaning and labelling: the data cleaning process uses the same method as step a2, but the labelling method must identify the defects in the image with position boxes and then assign the corresponding categories; after each image is labelled, a corresponding text file is generated recording the position and category information of the defects in that image;
Step c3, constructing and training a classification network model: training the established deep learning model, which contains convolution layers and fully connected layers, with the enhancement-processed training sample data set; the model can be described by the following formula:
c=g(x;ω)
Wherein g represents the established classification network model, x represents the input image with defects, ω represents model parameters, and c represents an n-dimensional vector, wherein:
n=c1+c2
c1 and c2 respectively represent the total category number of the waste type and the total category number of the defect type;
The training process can be regarded as iteratively solving an optimization problem, for example a multi-label loss of the form:
ω* = argmin_ω ( Σ_{i∈pos} -log c[i] + Σ_{j∈neg} -log(1 - c[j]) )
where c[i] and c[j] denote elements of the vector c, pos denotes the set of true labels of the current image, and neg the set of other, non-true labels;
The optimal parameter omega is learned through training so as to establish the relationship between the defect image in the training sample data set and the process category and defect type;
and step c4, model deployment and post-processing: deploying the optimal model parameters trained in step c3 on the field detection server; an image with defects is input and an n-dimensional vector is output, the index of the largest element in the first c1 dimensions being taken as the process waste-count label and the index of the largest element in the remaining c2 dimensions as the defect type label.
Preferably, the categories in step c2 comprise a process waste-count label and a defect category label, wherein the process label includes gravure, offset, silk screen, white paper and the like, and the defect category label includes stain, ink dot, missing print, light print, cross colour and the like.
Preferably, the model in step c3 comprises 24 convolution units, each unit being a different combination, according to its function, of convolution, batch normalization, nonlinear activation, fully connected, feature aggregation and pooling layers.
With this technical scheme, the invention uses deep learning algorithms to detect and analyse banknote images. On the one hand, printing defects are detected with the segmentation network model and, combined with blob analysis, machine-rejected sheets are separated into good products and waste, effectively reducing the large number of false alarms caused by factors such as unstable imaging. On the other hand, the classification network model counts waste images and automatically assigns each to a production process and one of several waste types, which effectively reduces staffing, improves production quality and lowers production cost.
Drawings
FIG. 1 schematically shows a block diagram of the workflow of the method of the invention;
Fig. 2 schematically shows a flow chart of the method of the invention.
Detailed Description
Embodiments of the invention are described in detail below, but the invention may be practised in many different ways, as defined and covered by the claims.
With the rapid development of deep learning algorithms in recent years, recognition technology has advanced quickly. The invention introduces deep learning into the detection and recognition of banknote printing defects, greatly improving recognition accuracy, reducing the miss rate and improving robustness.
Aimed at the large number of false alarms and poor classification performance of existing on-line detection equipment in banknote production, and at its dependence on a large amount of manual auditing, the invention provides a method for detecting and classifying defects with deep learning algorithms.
The invention mainly comprises three modules: (1) a deep-learning-based defect segmentation module; (2) a blob-analysis-based waste-judging module; and (3) a real-waste image recognition and classification module. First, the defect segmentation module segments the defects in the image; its function is to detect defects against the background image. If no defect exists, the final output defect binary map b is an all-0 image and the process ends; if defects exist, the module segments the defective area and locates the defect, and the output binary map b has pixel value 1 at positions corresponding to defects and 0 in all non-defective areas. The binary map b is then input into the blob-analysis-based waste-judging module, which analyses connected domains, extracts the features of each connected domain and judges the defects against the set conditions: if a defect is judged false, the process ends; if it is judged real, the real-waste image is passed to the third module to identify and classify the defect type.
Wherein each module comprises several steps, which are described separately below.
The first module is a defect segmentation module based on deep learning.
The method comprises the following steps:
1) And (5) data collection.
Through the traditional detection system, defect sample data sets can be collected from different machines and different product lines.
2) Data cleaning and labeling.
From the collected data, noise data such as abnormally imaged images, machine background images and plain printing-paper images must be removed. Defect positions are then labelled at pixel level to generate a label image: for each defective image, the generated label image is a binary image in which defect pixel positions are 1 and all defect-free positions are 0.
For banknote printing defects there are very many defect-free samples and very few defective samples, so each sample containing a defect is put through enhancement transformations, including affine transformation, colour transformation, distortion deformation and brightness adjustment, in order to expand the sample data set.
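The enhancement transformations can be sketched with NumPy. Only flips and brightness jitter are shown here; affine and distortion transforms (e.g. via an image library such as OpenCV) would be added to the list in the same way:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return simple augmented variants of one defect sample:
    horizontal flip, vertical flip, and a brightness-jittered copy."""
    variants = [
        np.fliplr(img),                                 # horizontal flip
        np.flipud(img),                                 # vertical flip
        np.clip(img * rng.uniform(0.8, 1.2), 0, 255),   # brightness jitter
    ]
    return variants

img = np.arange(12, dtype=float).reshape(3, 4)  # toy 3x4 "image"
out = augment(img)
```

Each original defective sample thus yields several training samples, countering the scarcity of defective data.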
3) And constructing a network model and training.
The model is a deep learning model comprising convolution layers and fully connected layers. It contains 36 convolution units, 6 deconvolution units and basic connection units such as feature aggregation units and skip-connection units; each unit is a different combination, according to its function, of convolution, deconvolution, batch normalization, nonlinear activation, fully connected, feature aggregation and pooling layers.
For example, the convolution unit is composed of a convolution layer, a batch normalization layer and a nonlinear activation layer, the deconvolution unit is composed of a deconvolution layer, a batch normalization layer and a nonlinear activation layer, and the feature aggregation unit is composed of a batch normalization layer and a feature aggregation layer.
The established deep learning model is trained with the enhancement-transformed training sample data set and can be described by the following formulas:
y=f(x;θ)
y(u,v)=p(x(u,v))
where f represents the built network model, x represents the input image with defects, θ represents the model parameters, y represents the output probability map, (u, v) represents the index of pixel locations in the image, p represents the probability that a certain pixel is a defect, and p (x (u, v)) represents the probability that pixel (u, v) is a defect.
The training process can be regarded as iteratively solving an optimization problem of the form:
θ* = argmin_θ L(f(x; θ), gt)
where L is a segmentation loss and gt is the label image from step 2).
Through training, the optimal parameter theta can be learned to establish the relation between the defective pixels and the defect probability in the training sample data set, so that the probability that each pixel in the current image is defective can be calculated through the optimal parameter theta in the forward reasoning process, and then a probability map with the same resolution as the input image is output.
4) Model deployment and post-processing.
The optimal model parameters trained in step 3) are deployed on the field detection server. An image with defects is input, and the corresponding defect probability map is obtained by model forward inference. The probability map is then thresholded with a set threshold t to obtain the final defect binary map b, a process described by:
b(u, v) = 1 if y(u, v) ≥ t, and b(u, v) = 0 otherwise.
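The thresholding step can be sketched in NumPy. The threshold t = 0.5 and the 3×3 probability map are illustrative values, not from the patent:

```python
import numpy as np

def threshold_probability_map(prob_map, t=0.5):
    """Binarize a per-pixel defect probability map with threshold t.

    prob_map: 2-D float array with values in [0, 1].
    Returns a uint8 map: defect pixels are 1, background pixels are 0.
    """
    return (prob_map >= t).astype(np.uint8)

# A toy probability map with one confident defect region.
y = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.9, 0.8],
              [0.1, 0.7, 0.2]])
b = threshold_probability_map(y, t=0.5)
```

The resulting map b is exactly the input expected by the blob-analysis module that follows.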
and the second module is a defect waste judging module based on blob analysis.
The method mainly comprises the following steps:
1) Analyse the information of each defect blob (connected domain). A blob is a connected domain of pixels with value 1 in the defect binary map b, so connected-domain analysis is first performed on map b and the information of each connected domain is extracted, including area, perimeter and centroid, together with information about the corresponding defect position, such as energy, colour distribution and position. These features are combined into an n-dimensional feature vector.
2) According to field conditions and customer requirements, conditions can be set on the data in the feature vector.
This is mainly done by setting thresholds: for example, for the area feature, an area greater than t1 satisfies condition 1, an area between t2 and t1 satisfies condition 2, and an area less than t2 satisfies condition 3, where t2 < t1. The process can be applied to each feature in the feature vector, and the conditions on all features are finally judged together: if they are satisfied the defect is real waste, otherwise it is false waste.
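The connected-domain analysis and feature extraction can be sketched in pure Python; a production system would use an optimized library (e.g. OpenCV's connected-components functions), and only the area and centroid features are computed here:

```python
from collections import deque

def blob_features(b):
    """Extract per-blob features (area, centroid) from a binary defect
    map b (list of lists of 0/1) via 4-connected component labelling."""
    h, w = len(b), len(b[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if b[i][j] == 1 and not seen[i][j]:
                # Breadth-first search over this connected domain.
                q, pixels = deque([(i, j)]), []
                seen[i][j] = True
                while q:
                    u, v = q.popleft()
                    pixels.append((u, v))
                    for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nu, nv = u + du, v + dv
                        if 0 <= nu < h and 0 <= nv < w and b[nu][nv] == 1 and not seen[nu][nv]:
                            seen[nu][nv] = True
                            q.append((nu, nv))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "centroid": (cy, cx)})
    return blobs

# A toy binary map containing two separate blobs.
b = [[1, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1]]
blobs = blob_features(b)
```

Each blob's feature dictionary would be extended with perimeter, energy and colour-distribution entries and then tested against the thresholds described above.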
The third module is the recognition and classification module for real-waste defects.
The method comprises the following steps:
1) And (5) data collection.
The data collected in step 1) of the first module can be reused and extended.
2) Data cleaning and labeling.
The data cleaning process is identical to that in step 2) of the first module, but the labelling method differs slightly: defects in the image must be identified with a position box.
Then corresponding categories are set. The categories comprise two labels: a process waste-count label, mainly including gravure, offset printing, silk screen printing, white paper and the like, and a defect category label, mainly including stain, ink dot, missing print, light print, cross colour and the like. After each image is labelled, a corresponding text file is generated recording the defect position and category information for that image.
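The per-image text file can be sketched as below. The exact file layout is not specified in the source, so the space-separated format and the field names used here are assumptions:

```python
def annotation_to_line(ann):
    """Serialise one defect annotation to a text-file line.

    ann holds the process waste-count label, the defect category label
    and the position box (x, y, w, h); the field order is illustrative.
    """
    return "{process} {defect} {x} {y} {w} {h}".format(**ann)

ann = {"process": "gravure", "defect": "ink_dot",
       "x": 120, "y": 48, "w": 16, "h": 16}
line = annotation_to_line(ann)
```

One such line per defect box would be written to the image's companion text file.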
3) And constructing a classification network model and training.
The model is a deep learning model comprising a convolution layer and a full connection layer, wherein 24 convolution units are included, and each unit is formed by combining the convolution layer, a batch normalization layer, a nonlinear activation layer, the full connection layer, a characteristic aggregation layer and a pooling layer in different forms according to different functions.
The established deep learning model is trained with the enhancement-processed training sample data set and can be described by the following formula:
c=g(x;ω)
Wherein g represents the established classification network model, x represents the input image with defects, ω represents model parameters, and c represents an n-dimensional vector, wherein:
n=c1+c2
c1 and c2 represent the total number of categories of the waste type and the total number of categories of the defect type, respectively.
The training process can be regarded as iteratively solving an optimization problem, for example a multi-label loss of the form:
ω* = argmin_ω ( Σ_{i∈pos} -log c[i] + Σ_{j∈neg} -log(1 - c[j]) )
where c[i] and c[j] denote elements of the vector c, pos denotes the set of true labels of the current image, and neg the set of other, non-true labels.
Through training, the optimal parameter omega can be learned to establish the relationship between the defect image in the training sample data set and the process category and defect type.
4) Model deployment and post-processing.
The optimal model parameters trained in step 3) are deployed on the field detection server. An image with defects is input and an n-dimensional vector is output; the index of the largest element in the first c1 dimensions is taken as the process waste-count label, and the index of the largest element in the remaining c2 dimensions as the defect type label.
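Splitting the n-dimensional output vector into the two label indices can be sketched as follows; the class counts c1 = 4 and c2 = 5 and the vector values are illustrative:

```python
import numpy as np

def decode_output(c, c1):
    """Split the n-dimensional output c (n = c1 + c2) into a process
    waste-count label index and a defect-type label index."""
    process_idx = int(np.argmax(c[:c1]))   # largest of the first c1 dims
    defect_idx = int(np.argmax(c[c1:]))    # largest of the remaining c2 dims
    return process_idx, defect_idx

# 4 process classes followed by 5 defect classes.
c = np.array([0.1, 0.7, 0.1, 0.1,  0.05, 0.1, 0.6, 0.15, 0.1])
proc, defect = decode_output(c, c1=4)
```

The two indices would then be mapped back to their label names (e.g. gravure, ink dot) via the category lists fixed at labelling time.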
With this technical scheme, the invention uses deep learning algorithms to detect and analyse banknote images. On the one hand, printing defects are detected with the segmentation network model and, combined with blob analysis, machine-rejected sheets are separated into good products and waste, effectively reducing the large number of false alarms caused by factors such as unstable imaging. On the other hand, the classification network model counts waste images and automatically assigns each to a production process and one of several waste types, which effectively reduces staffing, improves production quality and lowers production cost.
The invention will now be described in further detail with reference to the drawings and the preferred embodiments. Fig. 1 is a schematic workflow diagram of a banknote printing quality inspection method based on a deep learning algorithm according to an embodiment of the present invention.
As shown in fig. 1, the method includes:
S1, collecting stored defect pictures acquired by the online detection system of the banknote printing equipment, according to the banknote printing defects to be identified and classified and the process type of the production line;
S2, preprocessing the images collected in S1 by position location, rotation correction and size normalization;
S3, marking and classifying the images in the S2, and respectively generating a training sample set and a testing sample set of a defect segmentation model based on deep learning and a training sample set and a testing sample set of a real waste defect identification classification model;
S4, designing a defect segmentation network model and a real waste defect identification classification network model based on deep learning according to the types and characteristics of banknote printing defects;
s5, training the deep learning network model according to the training sample set and the training process of the deep learning network model;
S6, according to the verification sample set and the evaluation method of the deep learning network model, evaluating various indexes of the deep learning network model and optimizing the deep learning network model;
and S7, carrying out deployment and application of the deep learning network model according to different equipment conditions.
Further, in step S3, the images are labelled and classified as follows:
First, labelling software is used to mark the corresponding defect positions on the collected defect images, producing mask images of the same size as the corresponding defect images; these are used by the deep-learning-based defect segmentation module. The defect images are also labelled by class according to production process and defect type: the labelled process types comprise white paper, offset printing, gravure printing, code printing, coating and seal checking, and the labelled defect types comprise shallow pattern, missing print, ink dot, ink smear, scrape stain, wipe stain, cross colour, paper flaw, oil stain, ruffle, plate slippage and watermark smudge; these are used by the real-waste defect recognition and classification module.
To ensure recognition performance and provide a sufficient and comprehensive data source for deep learning training and verification, each picture contains one defect, and at least 1000 pictures are collected for each defect type. A portion of the defect pictures and their labelling information is then taken at random, in a set proportion, as the training sample set, with the remainder as the test sample set.
Specifically, 70%-80% of the banknote defect pictures and the corresponding labelling information can be used as the training sample set, and the remaining 20%-30% as the test sample set.
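The random split can be sketched as follows; the 75% training fraction is an illustrative choice within the stated 70%-80% range, and the fixed seed exists only to make the sketch reproducible:

```python
import random

def split_dataset(samples, train_frac=0.75, seed=42):
    """Randomly split labelled defect pictures into training and test
    sample sets without modifying the input list."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * train_frac)
    return shuffled[:k], shuffled[k:]

# 1000 picture ids stand in for (picture, label) pairs.
train, test = split_dataset(list(range(1000)))
```

Every sample lands in exactly one of the two sets, so the test set remains unseen during training.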
Further, in step S4 two network models are designed: the deep-learning-based defect segmentation network model and the real-waste defect recognition and classification network model. The defect segmentation model uses a deep learning model based on semantic segmentation, which operates at pixel level when processing an image, so the positions of defective pixels in the image can be labelled.
A defect judging and discarding module based on blob analysis then judges the image as good or waste according to the blob (connected domain) information and the set conditions. The real-waste defect identification and classification model is designed, according to the number of defect types and the defect characteristics, as a deep learning model comprising convolution layers and fully connected layers, and contains 24 convolution units.
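As a sketch of how such a 24-unit classification network might be organized (the unit composition, channel widths, and pooling schedule are assumptions not specified here; the output dimension 6 + 12 = 18 follows the process-type and defect-type lists above):

```python
def build_layer_plan(num_conv_units=24, base_channels=16,
                     num_process_classes=6, num_defect_classes=12):
    """Return a hypothetical layer plan: 24 convolution units followed by a
    fully connected head, as described for the real-waste classifier."""
    plan = []
    ch = base_channels
    for i in range(num_conv_units):
        plan.append({"type": "conv_unit", "index": i,
                     "ops": ["conv3x3", "batch_norm", "relu"],
                     "out_channels": ch})
        if (i + 1) % 6 == 0:        # downsample every 6 units (an assumption)
            plan.append({"type": "pool", "ops": ["maxpool2x2"]})
            ch *= 2
    # Output is an n-dimensional vector with n = c1 + c2
    # (process classes + defect classes).
    plan.append({"type": "fc",
                 "out_features": num_process_classes + num_defect_classes})
    return plan

plan = build_layer_plan()
print(sum(1 for p in plan if p["type"] == "conv_unit"))  # 24
```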
Further, in the model training process of S5, the training procedures of the two network models (the deep-learning-based defect segmentation network model and the real-waste defect identification and classification network model) are identical, and specifically comprise the following steps:
a. Inputting the combination of a defect image and the defect-free standard image in the training set into the built deep learning network for forward propagation to obtain a predicted value;
b. Calculating the error value (loss) between the predicted value and the expected value through an error function;
c. Determining the gradient vectors by back propagation;
d. Adjusting the parameters and weights of the network according to the gradient vectors, so that the loss gradually drops;
e. Repeating steps a-d until the set number of iterations is reached or the loss converges.
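The steps a-e above can be sketched with a toy numpy model, with a single linear layer standing in for the deep network and mean-squared error as the error function (all sizes and the learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))        # stand-in for (defect, standard) image pairs
true_w = rng.normal(size=(8, 1))
Y = X @ true_w                      # stand-in for the expected values

w = np.zeros((8, 1))                # parameters to be learned
lr = 0.05
losses = []
for step in range(200):
    pred = X @ w                    # a. forward propagation -> predicted value
    err = pred - Y
    loss = float(np.mean(err ** 2)) # b. error between prediction and expectation
    grad = 2 * X.T @ err / len(X)   # c. gradient vector via back propagation
    w -= lr * grad                  # d. adjust parameters so the loss drops
    losses.append(loss)             # e. repeat until the loss converges

assert losses[-1] < losses[0]       # the loss does gradually drop
```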
Further, step S6 evaluates all indexes of the deep learning network model and optimizes it. Specifically, the image data of the test sample set are used as input to the network model to obtain the accuracy and recall rate of the recognition results; the learning rate in the network model is then tuned, and the model is retrained and re-evaluated until the accuracy and recall rate are optimal and higher than the set thresholds, ensuring that the trained network model is usable.
Furthermore, for the deployment and application of the network model in S7, in the embodiment of the present invention an upper computer with matching performance is provided and connected over the network to the online imaging system of a banknote printing detection device (sorter) or a post-stacking large-sheet inspection machine. The specific application flow is shown in fig. 2:
a. Acquiring the images collected by the banknote quality detection equipment over the network;
b. First obtaining a defect binarization mask image through the deep-learning-based defect segmentation module, wherein the mask image has the same size as the banknote image, with gray level 1 in the defect areas and gray level 0 in the non-defect areas;
c. In the blob-analysis-based defect judging and discarding module, grading the severity of the defects according to the blob (connected domain) information in the binarized defect mask image, combined with conditions set by rules such as defect size, degree of color difference, and whether the defect lies in a key pattern area, and classifying the image as good or waste;
d. An image judged as waste by the discarding module is a production defective product whose defects exceed the banknote quality standard and which cannot be delivered or circulated; these images then pass through the real-waste defect identification and classification module to obtain the production process and defect type that caused the defect;
e. Storing the corresponding recognition results, and performing statistics and analysis on the data to guide production and improve production quality.
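As an illustration of the blob-based judging in steps b-d, the following self-contained sketch labels connected domains in a binary defect mask and applies a single area threshold. The function names and the threshold are assumptions; a production system would use an optimized blob library and the full rule set (color difference, key pattern areas, and so on):

```python
import numpy as np

def blob_stats(mask):
    """4-connected component analysis on a binary defect mask.

    Returns the area of each blob; a pure-Python stand-in for a blob library,
    kept minimal so the sketch is self-contained.
    """
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:                      # flood fill one blob
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

def judge_waste(mask, min_defect_area=4):
    """Classify the sheet as 'waste' if any blob reaches the set area
    threshold, else 'good' (rule and threshold value are illustrative)."""
    return "waste" if any(a >= min_defect_area for a in blob_stats(mask)) else "good"

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:4, 2:5] = 1                                # one 6-pixel defect blob
print(blob_stats(mask), judge_waste(mask))        # [6] waste
```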
The innovations of the invention are: (1) the deep learning technique is combined with banknote printing image quality inspection; and (2) the whole system comprises both the AI-related techniques and a waste-judging expert system, which allows manual adjustment of the waste-judging process and improves the interception rate of waste.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (7)

1. A banknote print quality inspection method based on a deep learning algorithm, comprising:
Step 1, segmenting the defects in an image through a defect segmentation module, so that the defects are detected from the background image, and a defect binary image b is obtained;
Step 2, inputting the defect binary image b into the defect judging and discarding module based on blob analysis to perform connected domain analysis, extracting the features of each connected domain, and judging defects according to the set conditions so as to determine whether each defect is false waste or real waste, wherein the defect judging and discarding module based on blob analysis performs the connected domain analysis through the following steps:
Step b1, analyzing the relevant information of the defect blobs: first performing connected domain analysis on the image b, and extracting the relevant information of each connected domain and the relevant information of the corresponding defect position of each blob, wherein the relevant information of each connected domain comprises area, perimeter and centroid;
Step b2, setting conditions for the data in the feature vectors according to the site situation and customer requirements, the specific means being the setting of thresholds; this process can be performed for each feature in the feature vectors, and finally the conditions on all features are judged comprehensively: a defect that satisfies the conditions is real waste, and one that does not is false waste;
and step 3, transmitting the actual waste image into an actual waste image identification and classification module to identify and classify the defect types, wherein the steps are as follows:
Step c1, collecting data, namely collecting defect sample data sets of different machine stations and different product lines through a traditional detection system;
Step c2, data cleaning and labeling: first removing noise data from the collected data, then labeling the defect positions at pixel level to generate label images, and performing enhancement transformation on each sample containing defects so as to expand the sample data set, wherein during labeling the defects in the image are identified by a position frame and the corresponding categories are set, and after each image is labeled a corresponding text file is generated recording the defect positions and category information in the image;
Step c3, constructing and training a classification network model, namely training an established deep learning model by using the enhanced training sample data set, wherein the model comprises a convolution layer and a full connection layer and can be described by the following formula:
c = g(x, ω),
Wherein g represents the established classification network model, x represents the input image with defects, ω represents model parameters, and c represents an n-dimensional vector, wherein:
n = c1 + c2,
c1 and c2 respectively represent the total category number of the waste type and the total category number of the defect type;
The training process can be regarded as an iterative process for solving an optimization problem, and can be expressed by the following formula:
ω* = argmin_ω log(1 + Σ_{i∈pos} e^(−c[i]) + Σ_{j∈neg} e^(c[j])),
Wherein c [ i ] and c [ j ] represent elements of the c vector, pos represents a real tag set of the current image, neg represents other non-real tag sets;
The optimal parameter omega is learned through training so as to establish the relationship between the defect image in the training sample data set and the process category and defect type;
and c4, model deployment and post-processing, namely deploying the trained optimal model parameters in the step c3 on a server for field detection, inputting an image with defects, outputting an n-dimensional vector, taking the index of the largest element in the previous c1 dimension as a process waste counting label, and taking the index of the largest element in the remaining c2 dimension as a defect type label.
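The post-processing of step c4 (splitting the n-dimensional output vector into its first c1 elements and its remaining c2 elements, and taking the argmax of each slice) can be sketched as follows. The label lists follow claim 6 below; the function name and the example output vector are illustrative assumptions:

```python
import numpy as np

# c1 = 4 process/waste labels and c2 = 5 defect type labels, per claim 6.
PROCESS_LABELS = ["gravure", "offset", "silk screen", "white paper"]
DEFECT_LABELS = ["dirty", "dot", "short", "light", "cross color"]

def decode_output(c):
    """Split the n-dimensional output into the c1-dim process slice and the
    c2-dim defect slice, and take the index of the largest element of each."""
    c = np.asarray(c, dtype=float)
    c1 = len(PROCESS_LABELS)
    process_idx = int(np.argmax(c[:c1]))
    defect_idx = int(np.argmax(c[c1:]))
    return PROCESS_LABELS[process_idx], DEFECT_LABELS[defect_idx]

# Example n = 4 + 5 = 9 dimensional network output.
c = [0.1, 2.3, 0.4, 0.2, 0.0, 0.1, 1.7, 0.3, 0.2]
print(decode_output(c))  # ('offset', 'short')
```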
2. The banknote printing quality inspection method based on the deep learning algorithm according to claim 1, wherein step 1 comprises: if no defect exists in the background image, the final output defect binary image b is an all-0 image and the process ends; if a defect exists in the background image, the module segments the defective area and locates the defect information, and in the output binary image b the pixels at the positions corresponding to the defects are 1 and the pixels of the other, non-defective areas are 0.
3. The banknote print quality inspection method based on the deep learning algorithm according to claim 1, wherein the defect segmentation module segments defects in images by:
step a1, collecting data by adopting the same method as step c1;
Step a2, data cleaning and labeling, namely, firstly, removing noise data from collected data, then, labeling the positions of defects at pixel level to generate a label image, and performing enhancement transformation processing on each sample containing the defects to achieve the purpose of expanding a sample data set;
step a3, constructing and training a network model, namely training an established deep learning model by using a training sample data set after enhancement transformation processing, wherein the model comprises a convolution layer and a full connection layer and can be described by the following formula:
y = f(x, θ),
y(u, v) = p(x(u, v)),
Where f represents the built network model, x represents the input image with defects, θ represents the model parameters, y represents the output probability map, (u, v) represents the pixel position index in the image, p represents the probability that a certain pixel is a defect, then p (x (u, v)) represents the probability that pixel (u, v) is a defect;
The training process can be regarded as an iterative process for solving an optimization problem, and can be expressed by the following formula:
θ* = argmin_θ Σ_{(u,v)} −[ gt(u, v)·log y(u, v) + (1 − gt(u, v))·log(1 − y(u, v)) ],
wherein gt is the label image generated in step a2;
the optimal parameter theta is learned through training to establish the relation between the defective pixels and the defect probability in the training sample data set, so that the probability that each pixel in the current image is defective is calculated through the optimal parameter theta in the forward reasoning process, and then a probability map with the same resolution as the input image is output;
Step a4, model deployment and post-processing: deploying the optimal model parameters trained in step a3 on the field detection server, inputting an image with defects, obtaining the corresponding defect probability map after forward inference of the model, and then thresholding the probability map with a set threshold t to obtain the final defect binary map b, a process which can be described by the following formula:
b(u, v) = 1 if y(u, v) > t, and b(u, v) = 0 otherwise.
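A minimal sketch of the thresholding in step a4, assuming a numpy probability map and an illustrative threshold t = 0.5:

```python
import numpy as np

def threshold_probability_map(y, t=0.5):
    """Binarize the segmentation model's probability map y with threshold t:
    b(u, v) = 1 where the defect probability exceeds t, else 0."""
    y = np.asarray(y, dtype=float)
    return (y > t).astype(np.uint8)

y = np.array([[0.1, 0.9],
              [0.6, 0.2]])
b = threshold_probability_map(y, t=0.5)
print(b.tolist())  # [[0, 1], [1, 0]]
```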
4. A banknote print quality inspection method based on a deep learning algorithm according to claim 3, wherein the model comprises 36 convolution units, 6 deconvolution units and a plurality of basic connection units, and each unit is formed by combining a convolution layer, a deconvolution layer, a batch normalization layer, a nonlinear activation layer, a full connection layer, a feature aggregation layer and a pooling layer in different forms according to different functions.
5. The method according to claim 1, wherein in the step b2, the area feature in the feature vector is thresholded, wherein a value greater than t1 indicates that condition 1 is satisfied, a value between t1 and t2 indicates that condition 2 is satisfied, and a value less than t2 indicates that condition 3 is satisfied, wherein t2< t1.
6. The method for checking banknote printing quality based on deep learning algorithm according to claim 1, wherein the categories in c2 include process waste labels and defect category labels, the process waste labels include gravure, offset, silk screen, white paper, and the defect category labels include dirty, dot, short, light, cross color.
7. The method for checking banknote printing quality based on deep learning algorithm according to claim 6, wherein the model in c3 comprises 24 convolution units, each unit is formed by combining a convolution layer, a batch normalization layer, a nonlinear activation layer, a full connection layer, a feature aggregation layer and a pooling layer in different forms according to different functions.
CN202210089577.9A 2022-01-25 2022-01-25 Banknote printing quality inspection method based on deep learning algorithm Active CN114445365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210089577.9A CN114445365B (en) 2022-01-25 2022-01-25 Banknote printing quality inspection method based on deep learning algorithm

Publications (2)

Publication Number Publication Date
CN114445365A CN114445365A (en) 2022-05-06
CN114445365B true CN114445365B (en) 2025-04-25

Family

ID=81368911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210089577.9A Active CN114445365B (en) 2022-01-25 2022-01-25 Banknote printing quality inspection method based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN114445365B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844621A (en) * 2016-03-17 2016-08-10 阜阳市飞扬印务有限公司 Method for detecting quality of printed matter
CN108986086A (en) * 2018-07-05 2018-12-11 福州大学 The detection of typographical display panel inkjet printing picture element flaw and classification method and its device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886954B (en) * 2019-02-28 2023-04-07 湖南大学 A method for detecting defects in printed matter
CN110503654B (en) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 A method, system and electronic device for medical image segmentation based on generative adversarial network
CN112070716A (en) * 2020-08-03 2020-12-11 西安理工大学 Printing defect intelligent identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant