CN111814850B - Defect detection model training method, defect detection method and related device - Google Patents
- Publication number
- CN111814850B CN111814850B CN202010573557.XA CN202010573557A CN111814850B CN 111814850 B CN111814850 B CN 111814850B CN 202010573557 A CN202010573557 A CN 202010573557A CN 111814850 B CN111814850 B CN 111814850B
- Authority
- CN
- China
- Prior art keywords
- detection
- defect
- frame
- truth
- sample image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application discloses a defect detection model training method, a defect detection method, and a related device. The training method comprises the following steps: acquiring at least one training sample image, wherein the training sample image is annotated with at least one first truth box, each corresponding to at least one type of defect; detecting the training sample image with a target detection network to obtain a first detection result, wherein the first detection result comprises first detection boxes corresponding to the defects; selecting at least some of the first detection boxes as target detection boxes according to the distances between the first truth boxes and the first detection boxes; and determining a network loss value based on the target detection boxes and the first truth boxes, and updating the parameters of the target detection network with the network loss value to obtain the final defect detection model. The technical scheme provided by the application enables fast and accurate training of a defect detection model that detects multiple types of defects simultaneously.
Description
Technical Field
The present application relates to the field of defect detection, and in particular, to a defect detection model training method, a defect detection method, and a related apparatus.
Background
With the rapid development of intelligent manufacturing, computer vision and image processing have become primary means of post-process product inspection. For example, in bottle cap manufacturing, caps may suffer from breakage, scratches, stripped threads, abnormal code spraying, abnormal labeling, and other problems. If defective caps are not screened out in time, at best the appearance of the finished product suffers; at worst the seal, and hence the preservation of the capped product, is compromised. At present, most cap inspection relies on manual identification, with workers locating defects by eye; some lines use traditional image processing for screening. These approaches have obvious drawbacks: prolonged visual inspection causes fatigue and reduced attention, which lowers the accuracy of the results and the recognition rate of defective caps, and manual inspection is slow. A technical scheme that solves these problems is therefore needed.
Disclosure of Invention
The application mainly solves the technical problem of providing a defect detection model training method, a defect detection method, and a related device that can quickly train a defect detection model capable of detecting multiple types of defects.
In order to solve the above technical problem, one technical scheme adopted by the application is to provide a defect detection model training method, the method comprising:
acquiring at least one training sample image, wherein the training sample image is annotated with at least one first truth box, each corresponding to at least one type of defect;
detecting the training sample image with a target detection network to obtain a first detection result, wherein the first detection result comprises first detection boxes corresponding to the defects;
selecting at least some of the first detection boxes as target detection boxes according to the distances between the first truth boxes and the first detection boxes;
and determining a network loss value based on the target detection boxes and the first truth boxes, and updating the parameters of the target detection network with the network loss value to obtain the final defect detection model.
In order to solve the technical problems, the application adopts another technical scheme that: there is provided a defect detection method, the method comprising:
acquiring an image to be detected, obtained by photographing an object to be detected;
performing defect detection on the image to be detected with a defect detection model to obtain a defect detection result;
wherein the object to be detected is a bottle cap, and/or the defect detection model is a model trained by any of the methods described above.
In order to solve the technical problems, the application adopts another technical scheme: a defect detection model training apparatus is provided, comprising a memory and a processor coupled to each other, wherein the memory includes local storage and stores a computer program;
the processor is configured to run the computer program to perform the defect detection model training method described above.
In order to solve the technical problems, the application adopts another technical scheme: a defect detection apparatus is provided, comprising a memory and a processor coupled to each other, wherein
the memory includes local storage and stores a computer program;
the processor is configured to run the computer program to perform the defect detection method as described above.
In order to solve the technical problems, the application adopts another technical scheme that: there is provided a computer storage medium storing a computer program executable by a processor for implementing the defect detection model training method or the defect detection method as described above.
Compared with prior-art schemes, the technical scheme provided by the application acquires at least one training sample image, detects it with a target detection network to obtain a first detection result comprising first detection boxes corresponding to the defects, selects at least some of the first detection boxes as target detection boxes according to the distances between the first truth boxes and the first detection boxes, determines a network loss value based on the target detection boxes and the first truth boxes, and updates the parameters of the target detection network with the network loss value to obtain the final defect detection model.
Drawings
FIG. 1 is a flow chart illustrating a method for training a defect detection model according to an embodiment of the present application;
FIG. 2 is a flow chart of another embodiment of a defect detection model training method according to the present application;
FIG. 3 is a flow chart illustrating a method for training a defect detection model according to another embodiment of the present application;
FIG. 4 is a flow chart of a defect detection model training method according to another embodiment of the present application;
FIG. 5 is a flow chart illustrating a method for training a defect detection model according to another embodiment of the present application;
FIG. 6 is a flow chart of a defect detection model training method according to another embodiment of the present application;
FIG. 7 is a flow chart of a defect detection model training method according to another embodiment of the present application;
FIG. 8 is a flow chart illustrating a defect detection method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a defect detection model training apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a defect detecting apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a computer storage medium according to an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a flowchart illustrating a defect detection model training method according to an embodiment of the application. In the current embodiment, the method provided by the application comprises the following steps:
S110: at least one training sample image is acquired.
The training sample image is annotated with at least one first truth box, each corresponding to at least one type of defect.
To train a defect detection model for a certain type of target, sample images containing that target must first be acquired, and the target in the sample images includes at least one type of defect. The sample images may be captured by an external imaging device and transmitted to the defect detection model training apparatus, or acquired by the training apparatus itself with its own image acquisition unit; which option applies depends on the specific configuration of the training apparatus and is not limited here.
In the present embodiment, the sample image includes at least an original RGB image, though it should be understood that sample images are not limited to original RGB images. When the sample image is an original RGB image, the training sample image is obtained after at least one type of defect in the image has been annotated. Specifically, at least one first truth box, each corresponding to at least one type of defect, may be annotated manually in advance in the training sample image. In the present embodiment, one type of defect may correspond to one first truth box; in another embodiment, several first truth boxes may be annotated in advance for the same type of defect.
Further, in another embodiment, to train a defect detection model that can simultaneously detect multiple different types of defects on a target, multiple sample images containing that target must first be acquired. In this embodiment, the sample images collectively cover the multiple defect types to be detected; specifically, they include both sample images containing only one type of defect and sample images containing several different types of defects.
Further, referring to fig. 2, fig. 2 is a flow chart of a defect detection model training method according to another embodiment of the application. The present embodiment details the acquisition of at least one training sample image in step S110. In the embodiment corresponding to fig. 2, step S110 further includes steps S201 to S202.
S201: an original training sample image is acquired. Wherein the training sample image is labeled with at least one truth box corresponding to at least one type of defect, respectively.
The original training sample image is an acquired sample image marked with at least one truth box corresponding to at least one type of defect respectively and before data enhancement processing is performed. In one embodiment, the process of acquiring the original training sample image is: and shooting the target to be detected including the defect to be detected in multiple directions by using an image acquisition unit to acquire an original image, and then manually marking the defect included in the target to be detected in the acquired original image on a defect detection model training device by a user to acquire an original training sample image so as to execute the following step S202. When the user marks the defects included in the targets in the original image, at least marking the defects by using the frame and marking the types of the defects corresponding to the frame to obtain a first truth box.
In another embodiment, the process of acquiring the original training sample image is: the method comprises the steps of obtaining an original image obtained by multi-azimuth shooting of a target to be detected including a defect to be detected by external shooting equipment, manually marking the obtained original image by a user to obtain a first truth box, further obtaining at least one original training sample image, sending the obtained original training sample image to a defect detection model training device, enabling the defect detection model training device to obtain the original training sample image, and executing the following step S202.
S202: and carrying out data enhancement processing on the original training sample image to obtain a new training sample image.
After the original training sample image is acquired, further performing data enhancement processing on the acquired original training sample image to acquire a new training sample image. In the current embodiment, the data enhancement processing is performed on the obtained original training sample images, so that a large number of new training sample images can be obtained, and after the new training sample images are obtained, the obtained new training sample images and the original training sample images are combined and output to serve as a training set for training the defect detection model. In the present embodiment, the obtained new training sample image is also labeled with the first truth box and the corresponding defect type, and in the present embodiment, the total number of the training sample images can be better enlarged after the original training sample image is subjected to data enhancement processing. The data enhancement processing mode at least comprises the following steps: one of flipping, rotation, color transformation, adding noise, radiometric transformation, panning and random cropping, and style migration networks.
Further, the step S202 performs data enhancement processing on the original training sample image to obtain a new training sample image, and further includes: and acquiring a non-defective image, and migrating defective features in the original training sample image into the non-defective image by using a style migration network to obtain a new training sample image. In another embodiment, the steps before and after acquiring the defect-free image and acquiring the original training sample image of step 201 are not limited. The defect-free image can be an image marked and confirmed by a user, and in order to train to obtain a defect detection model with high accuracy, defect-free images of a plurality of different angles of an object to be detected are obtained. In the current embodiment, the style migration network is utilized to perform data enhancement processing on the training sample images so as to obtain new training sample images, so that the number of the training sample images is increased, the utilization rate of a data set can be better improved, and the investment of manual labeling by a user is reduced.
Specifically, since the defect feature in the original training sample image may be in other forms in different samples, the defect feature in the original training sample image is migrated to other non-defective images by using the style migration network, and in the process of migrating the defect feature, the position of the defect, the size of the defect and the relative fusion effect between the defect and the surrounding images are adaptively changed, so that a new training sample image including the defect and having different defect forms can be obtained, and the original training sample image and the new training sample image are output as the training sample image, so as to execute step S120.
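As a minimal illustration of the data-enhancement step, the sketch below covers only a geometric augmentation (a horizontal flip), not the style migration network; the box format and function name are assumptions for illustration only.

```python
# Minimal sketch of geometric data enhancement: when a training sample image
# is flipped horizontally, the first truth box must be remapped so the new
# training sample image stays correctly annotated. Box format assumed here:
# (x_min, y_min, x_max, y_max) in pixel coordinates.

def hflip_truth_box(box, image_width):
    """Remap a truth box after a horizontal flip of the image."""
    x_min, y_min, x_max, y_max = box
    return (image_width - x_max, y_min, image_width - x_min, y_max)

# A defect box near the left edge of a 100-px-wide image maps to the right edge.
print(hflip_truth_box((10, 20, 30, 40), 100))  # -> (70, 20, 90, 40)
```

Applying the same flip twice recovers the original annotation, which is a quick sanity check for any such coordinate remapping.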
S120: and detecting the training sample image by using the target detection network to obtain a first detection result.
After at least one training sample image is acquired, the training sample image is further detected by using the target detection network so as to acquire a first detection result. Specifically, the obtained training sample is input into the target detection network, so that the target detection network detects defects of the training sample image, and a first detection result is obtained. The first detection result at least comprises a first detection frame corresponding to the defect.
Further, when the target detection network is used for detecting the training sample image, the type of the defect obtained by detection is further judged by the target detection network, and the defect type corresponding to the first detection frame is marked at the obtained first detection frame. In the current embodiment, the defect type corresponding to the first detection frame is marked at the first detection frame.
Further, the marked defect type is marked according to a preset defect identification code. In the current embodiment, defect identification codes are set for the corresponding defects of each type in advance, and when the first detection frame is obtained by detecting the training sample image by using the target detection network, the defect corresponding to the first detection frame is marked by the defect identification codes correspondingly. Wherein, each defect identification code has uniqueness, namely, different defect identification codes are only used for marking one type of defect in the technical scheme provided by the application.
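A hedged sketch of the identification-code scheme above; the defect names and code values here are hypothetical, since the patent does not give concrete codes.

```python
# Illustrative only: each defect type receives a unique identification code,
# as described above. Names and values are hypothetical.
DEFECT_CODES = {
    "breakage": 0,
    "scratch": 1,
    "abnormal_code_spray": 2,
    "abnormal_label": 3,
}

def code_for(defect_type):
    """Look up the unique identification code used to mark a detection box."""
    return DEFECT_CODES[defect_type]

# Uniqueness: no two defect types share an identification code.
assert len(set(DEFECT_CODES.values())) == len(DEFECT_CODES)
```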
Further, at the first training iteration, the target detection network used in step S120 may be an initial target detection network whose parameters are all initial values. It should be noted that during training, the parameters of the target detection network are further adjusted according to the network loss value obtained between the target detection boxes and the first truth boxes, so that the network is optimized and a more accurately detecting defect detection model is obtained. The selection of the target detection boxes is described in step S130 below and may depend on the target detection network.
Further, the target detection network comprises an SSD network, and the corresponding first detection boxes are detection boxes produced by several convolutional layers of the network. The SSD network includes a base network and a pyramid network. The base network is the first 4 stages of VGG-16, and the pyramid network is a simple convolutional network of 5 stages whose feature maps shrink progressively (the convolutional networks may also be defined as convolutional layers in other embodiments). Each stage of the pyramid uses 3×3 convolutions for prediction: a first detection box is predicted at every position of the feature map, and each first detection box is associated with a set number of defect classification scores and 4 position offsets relative to the truth box. The defect classification scores determine the type of defect corresponding to the first detection box.
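The prediction-head bookkeeping above can be made concrete with a short sketch. The feature-map sizes and boxes-per-location below follow the common SSD300 configuration and are an assumption, not values stated in the patent; `num_defect_scores` is likewise hypothetical.

```python
# Per feature-map location and per default box, the 3x3 prediction convs emit
# a set number of defect classification scores plus 4 position offsets
# relative to the truth box. Sizes below: assumed SSD300 configuration.

feature_maps = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]  # (side, boxes per location)
num_defect_scores = 5  # hypothetical number of classification scores per box

total_default_boxes = sum(side * side * k for side, k in feature_maps)
outputs_per_box = num_defect_scores + 4  # class scores + 4 offsets
print(total_default_boxes)  # -> 8732 under the SSD300 configuration
```

The large default-box count (8732 here) is what motivates the distance-based pre-selection of candidate boxes described in step S130 below.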
S130: and selecting at least part of the first detection frames as target detection frames by utilizing the distance between the first truth frames and the first detection frames.
After a first detection result comprising a first detection frame is obtained, further calculating and obtaining the distance between the first truth frame and the first detection frame, and selecting part of the first detection frame from the first detection frames as a target detection frame by utilizing the calculated distance between the first truth frame and the first detection frame. The target detection frame is a part of the first detection frame for adjusting the target detection network parameters.
Further, in another embodiment, after the first detection frame is acquired, a loss distance between the first truth frame and the first detection frame is calculated, that is, a loss value between the first truth frame and the first detection frame is calculated, and according to the calculated loss value, a part of the first detection frame is selected from the first detection frames as a target detection frame.
In still another embodiment, after calculating the loss value between the first truth box and the first detection box, the obtained loss value is further processed to determine the target detection box according to the obtained processed loss value.
S140: and determining a network loss value based on the target detection frame and the first truth frame, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model.
After the target detection frame is selected from the first detection frames, further calculating a network loss value between the target detection frame and the first truth frame, and updating parameters of the target detection network by using the obtained network loss value to obtain a final defect detection model.
Further, when the number of the target detection frames is multiple, network loss values between each target detection frame and the corresponding first truth frame are calculated respectively, the network loss values between each target detection frame and the corresponding truth frame are further weighted and summed to obtain a total network loss value, and parameters of the target detection network are updated according to the obtained total loss value to obtain the final defect detection model. The final defect detection model is a defect detection model which is subjected to training stopping optimization and is finally output.
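The weighted summation above can be sketched as follows; the weights are illustrative, since the patent does not specify how they are chosen.

```python
# Hedged sketch of the total-loss computation: the per-box network loss
# values are combined in a weighted sum (equal weights by default).

def total_network_loss(box_losses, weights=None):
    """Weighted sum of the loss values between each target detection box
    and its corresponding first truth box."""
    if weights is None:
        weights = [1.0] * len(box_losses)
    if len(weights) != len(box_losses):
        raise ValueError("one weight per target detection box is required")
    return sum(w * l for w, l in zip(weights, box_losses))

print(total_network_loss([0.5, 1.5], [0.5, 0.5]))  # -> 1.0
```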
In the embodiment corresponding to fig. 1, at least one training sample image is acquired and detected with a target detection network to obtain a first detection result comprising first detection boxes corresponding to the defects; at least some of the first detection boxes are then selected as target detection boxes according to the distances between the first truth boxes and the first detection boxes; a network loss value is determined based on the target detection boxes and the first truth boxes; and the parameters of the target detection network are updated with the network loss value to obtain the final defect detection model.
Specifically, whereas the prior art computes the intersection-over-union between all detection boxes and all truth boxes, which is computationally heavy, the embodiment corresponding to fig. 1 first selects a subset of the first detection boxes as target detection boxes based on the distance between the first detection boxes and the first truth boxes. This reduces the amount of computation compared with the prior art and speeds up the training of the defect detection model.
Referring to fig. 3, fig. 3 is a flowchart illustrating a defect detection model training method according to another embodiment of the application. In the present embodiment, the method provided by the application focuses on step S130: selecting at least some of the first detection boxes as target detection boxes according to the distance between the first truth boxes and the first detection boxes. In the embodiment corresponding to fig. 3, step S130 further includes steps S301 to S303.
S301: and respectively calculating the distance between each first detection frame and the corresponding first truth box.
After the training sample image is detected by utilizing the target detection network and the first detection frames are obtained, respectively determining first truth frames corresponding to the first detection frames, and further respectively calculating the distances between the first detection frames and the corresponding first truth frames after the first truth frames are determined.
Further, step S301 is to calculate the loss distance between each first detection frame and the corresponding first truth frame, so as to determine the accuracy between the first detection frame and the first truth frame according to the loss distance.
S302: and selecting a preset number of first detection frames with the smallest distance from the first detection frames as candidate detection frames.
After the distances between the first detection frames and the corresponding first truth frames are calculated, a preset number of first detection frames with the smallest distances are selected from the first detection frames to serve as candidate detection frames. Wherein the preset number is a constant value set according to the empirical value.
Further, in another embodiment, when the number of training sample images to be trained is not fixed, or the number of defects included in the training sample images is not fixed, the number of first detection frames obtained by detecting from the training sample images using the target detection network is also not determined, so that in step S302, the first detection frame with the smallest distance of the preset ratio may be selected as the candidate detection frame from the first detection frames. The candidate detection frames are detection frames which are selected from the first detection frames according to the distance between the first detection frames and the corresponding first truth frames and are used for selecting target detection frames. In other words, the candidate detection frames may be sorted from large to small according to the loss distance, and the loss distance may be smaller than the loss distance in the first detection frames of the preset number or the preset ratio.
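Step S302 can be sketched as a simple top-k selection; the function name and the fallback rule for the preset proportion are illustrative assumptions.

```python
# Sketch of step S302: given the distance from each first detection box to
# its corresponding first truth box, keep the preset number of boxes with
# the smallest distances (or a preset proportion of them) as candidates.

def select_candidates(distances, preset_number=None, preset_ratio=None):
    """Indices of candidate detection boxes, sorted by ascending distance."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    if preset_number is None:
        preset_number = max(1, int(len(distances) * preset_ratio))
    return order[:preset_number]

# Four detection boxes; the two closest to the truth box become candidates.
print(select_candidates([3.2, 0.5, 2.1, 0.9], preset_number=2))  # -> [1, 3]
```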
S303: and calculating a first intersection ratio of each candidate detection frame and the corresponding first truth frame, and selecting at least part of candidate detection frames as target detection frames according to the first intersection ratio.
After the candidate detection frames are determined, a first cross ratio of each candidate detection frame to the corresponding first truth frame is further calculated. And selecting at least part of candidate detection frames as target detection frames according to the first intersection ratios.
Further, after the first intersection ratio of each candidate detection frame and the corresponding first truth frame is obtained, for example, at least part of candidate detection frames with larger intersection ratio may be selected as the target detection frames.
Further, after calculating the first cross ratio between each candidate detection frame and the corresponding first truth frame, the obtained cross ratio may be further processed, for example, a mean value and a variance value of the first cross ratio are calculated, then a cross ratio threshold corresponding to the target detection frame is determined according to the mean value and the variance value, and then the candidate detection frame with the cross ratio greater than or equal to the determined cross ratio threshold is selected as the target detection frame, which may be described in detail in the following description of the embodiment section corresponding to fig. 4. In some embodiments, the selected target box is defined as a positive sample, and the corresponding first boxes not selected as target boxes are defined as negative samples.
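One plausible reading of the mean/variance thresholding above is sketched below (it mirrors the well-known ATSS sample-selection heuristic; the exact statistic the patent intends is an assumption): a candidate becomes a target detection box if its IoU with the truth box is at least mean(IoU) plus one standard deviation.

```python
import statistics

def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_target_boxes(candidate_ious):
    """Indices of candidates whose IoU clears the mean + std threshold."""
    threshold = statistics.mean(candidate_ious) + statistics.pstdev(candidate_ious)
    return [i for i, v in enumerate(candidate_ious) if v >= threshold]

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))       # overlap 1, union 7 -> 1/7
print(select_target_boxes([0.1, 0.2, 0.9]))  # only the high-IoU candidate survives
```

An adaptive threshold of this kind tightens automatically when most candidates already overlap the truth box well, and loosens for hard, poorly-covered defects.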
Referring to fig. 4, fig. 4 is a flowchart illustrating a defect detection model training method according to another embodiment of the application. In the current embodiment, the method provided by the application comprises the following steps:
s401: at least one training sample image is acquired.
S402: and detecting the training sample image by using the target detection network to obtain a first detection result.
In the present embodiment, the step S301 of calculating the distance between each first detection frame and the corresponding first truth frame further includes a step S403.
S403: and respectively acquiring the distance between the center point of each first detection frame and the center point of the corresponding first truth frame to serve as the distance between the first detection frame and the first truth frame.
In the present embodiment, after the first detection result including the first detection frames is obtained, the distance between the center point of each first detection frame and the center point of the corresponding first truth frame is calculated. In the present embodiment, this distance may refer to the actual distance value between the two center points.
Further, in another embodiment, after the first detection result including the first detection frames is obtained, a loss distance between the center point of each first detection frame and the center point of the corresponding first truth frame is calculated, where the loss distance may be understood as a loss value computed from the two center points.
Still further, in other embodiments, after the training sample image is detected by using the target detection network to obtain the first detection result, the distances between the end points of each first detection frame and the corresponding end points of the first truth frame are obtained, and the obtained end-point distances are weighted and summed to serve as the distance between the first detection frame and the first truth frame. It is understood that in some embodiments the average of the distances between the plurality of end points may instead be used as the distance between the first detection frame and the first truth frame.
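The center-point distance of step S403 and the nearest-candidate selection of step S404 can be sketched as follows. This is a minimal illustration: the (x1, y1, x2, y2) box convention and the function names are assumptions, not part of the patent.

```python
import math

def center_distance(box_a, box_b):
    """Euclidean distance between the center points of two boxes.

    Boxes are (x1, y1, x2, y2); this coordinate convention is an assumption.
    """
    ax = (box_a[0] + box_a[2]) / 2
    ay = (box_a[1] + box_a[3]) / 2
    bx = (box_b[0] + box_b[2]) / 2
    by = (box_b[1] + box_b[3]) / 2
    return math.hypot(ax - bx, ay - by)

def top_k_closest(detections, truth_box, k):
    """Step S404 sketch: keep the k detection boxes whose centers are
    closest to the truth box's center."""
    return sorted(detections, key=lambda d: center_distance(d, truth_box))[:k]
```

For example, with a truth box at the origin, the two nearest of three detection boxes would be returned as the candidate detection frames.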
S404: and selecting a preset number of first detection frames with the smallest distance from the first detection frames as candidate detection frames.
S405: a first intersection ratio between each candidate detection frame and the corresponding first truth frame is calculated.
In the present embodiment, the step S303 of selecting at least part of the candidate detection frames as target detection frames according to the first intersection ratios further includes steps S406 to S408.
S406: and obtaining the mean value and the variance value of the first cross ratios of the preset number of candidate detection frames and the corresponding first truth frames.
In the present embodiment, after calculating the first cross ratios of each candidate detection frame and the corresponding first truth frame, the mean and variance values of the first cross ratios between the preset number of candidate detection frames and the corresponding first truth frame are further calculated and obtained.
S407: and taking the sum of the mean value and the variance value as a first selected threshold value of the target detection frame.
After the mean value and the variance value are calculated, the sum of the calculated mean value and variance value is used as the first selection threshold of the intersection ratio for the target detection frames, and step S408 is executed.
S408: and selecting the candidate detection frames with the first cross ratio being greater than or equal to a first selection threshold value as target detection frames.
Selecting the candidate detection frames whose first intersection ratio is greater than or equal to the first selection threshold as target detection frames means outputting those candidate detection frames as target detection frames for training and optimizing the defect detection model, while the other candidate detection frames whose first intersection ratio is smaller than the first selection threshold are discarded.
S409: and determining a network loss value based on the target detection frame and the first truth frame, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model. In the present embodiment, after calculating the first cross ratios of each candidate detection frame and the corresponding first truth frame, further calculating to obtain the mean value and the variance value of the first cross ratios between the preset number of candidate detection frames and the corresponding first truth frame, further determining a first selection threshold by using the mean value and the variance value of the first cross ratios, and selecting a target detection frame from the candidate detection frames based on the first selection threshold.
It should be noted that, steps S401 to S402, steps S404 to S405 and step S409 in the present embodiment are the same as some steps in fig. 1 or fig. 3, respectively, and specific reference may be made to the description of the corresponding parts above, and will not be described in detail herein.
Referring to fig. 5, fig. 5 is a flowchart illustrating a defect detection model training method according to another embodiment of the application. The present embodiment highlights the steps performed after the parameters of the target detection network are updated with the network loss value to obtain the final defect detection model. After the parameters of the target detection network are updated by using the network loss value, i.e. after a new defect detection model is obtained, the new defect detection model is evaluated to judge whether its parameters are qualified, and thus whether it is the final defect detection model. Specifically, the process of evaluating the new defect detection model includes the contents described in steps S501 to S504. In the current embodiment, the method provided by the present application further includes:
S501: and inputting the test sample image into the defect detection model to obtain a second detection result of the test sample image.
After the parameters of the target detection network are updated by using the network loss value, a new defect detection model is obtained, and the test sample image is then input into the new defect detection model to obtain a second detection result corresponding to the test sample image. The test sample image is a sample image preset for testing the defect detection model, and may specifically belong to an image set different from the training sample set. The second detection result includes a second detection frame corresponding to the defect as well as the defect type; specifically, the defect type is marked at the second detection frame with a defect identification code. It should be noted that the defect identification codes used for marking the defect types of the second detection frames and those used for the first detection frames are the same set of identification codes.
Further, in other embodiments, steps S120 to S140 are first executed a preset number of times, set in advance according to empirical values, so as to obtain a new defect detection model through multiple rounds of training and optimization; the obtained defect detection model is then evaluated by using steps S501 to S504. The preset number of training-optimization rounds is adjustable according to empirical values and is not limited herein.
S502: and judging whether the position and/or the size of the second detection frame of the defect accords with the theoretical characteristics of the defect.
After the second detection result including the second detection frame is obtained, it is further judged whether the position and/or the size of the second detection frame corresponding to the defect conforms to the theoretical characteristics of the defect. Specifically, after the second detection result is obtained, the position and/or the size of the second detection frame are obtained and compared with the preset theoretical characteristics of the defect, so as to judge whether the position and/or the size of the second detection frame conforms to those characteristics. The theoretical characteristics of the defect at least comprise a theoretical position range and a theoretical size of the defect.
Specifically, the theoretical characteristics of the defects can be preset by the user according to the requirements of the product and the characteristics of defects that frequently occur in the product, and can be understood as characteristics the defects commonly exhibit in the product. For example, when the product is a bottle cap, the code-spraying-abnormality defect may be preset to be located on the side surface of the bottle cap, or any code spraying higher than two thirds of the cap height may be preset to count as a code-spraying abnormality. It can be understood that the defect detection model training method provided by the application can be used to train defect detection models for different products, so the user may set the theoretical characteristics of the defects according to the application scene of the model and the common characteristics of the defects in the product to be inspected; the theoretical characteristics of the defects are therefore not limited herein.
In another embodiment, the theoretical characteristics of the defects may also be derived from observation of the training sample images. In one embodiment, observation of the training sample images shows that: bottle cap breakage defects may appear anywhere on the cap surface; cap-spinning defects mainly appear on the side surface of the cap, in the band from 1/3 to 1/2 below the upper surface; code-spraying abnormalities mainly appear on the middle-left side of the cap; and break-point and broken-edge defects mainly appear below the cap. The defect conclusions obtained from such observation are set as the theoretical characteristics of the defects, and a method for verifying defect detections is devised on this basis: the second detection result is input into a verification module, which checks, according to the defect type, whether each currently detected defect conforms to the theoretical characteristics of that type; if so, the corresponding second detection frame is retained, otherwise it is discarded.
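The verification step described above can be sketched as a simple rule check. The rule format (an allowed vertical band expressed as fractions of image height) and all names here are hypothetical illustrations, not part of the patent.

```python
def verify_detection(defect_type, box, image_height, rules):
    """Step S502 sketch: check a detection box (x1, y1, x2, y2) against
    preset theoretical characteristics of its defect type."""
    rule = rules.get(defect_type)
    if rule is None:
        return True  # no rule for this type: keep the box
    y_top = box[1]
    lo, hi = rule  # allowed vertical band, as fractions of image height
    return lo * image_height <= y_top <= hi * image_height

# Hypothetical rule drawn from the observation in the text: cap-spinning
# defects appear in the 1/3-1/2 band below the upper surface of the cap.
RULES = {"spinning": (1 / 3, 1 / 2)}
```

A box whose top edge falls inside the band is retained (step S503); one outside the band, or whose type has no rule violated, is handled accordingly.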
S503: if yes, the second detection frame is reserved, otherwise, the second detection frame is abandoned.
If the position and/or the size of the second detection frame of the defect is judged to conform to the theoretical characteristics of the defect, the second detection frame is retained; otherwise, the second detection frame is discarded.
S504: and performing performance evaluation on the defect detection model by using the number of the reserved second detection frames.
After judging whether the position and/or the size of the second detection frame of the defect conforms to the theoretical characteristics of the defect and obtaining the retained second detection frames, the performance of the current defect detection model is evaluated by using the retained second detection frames. Specifically, the performance evaluation of the defect detection model is described below in the embodiment corresponding to fig. 6.
In the embodiment corresponding to fig. 5, the second detection result of the test sample image is obtained by inputting the test sample image into the defect detection model; it is judged whether the position and/or the size of the second detection frame of each defect conforms to the theoretical characteristics of the defect, and the second detection frame is retained if so and discarded otherwise; the performance of the defect detection model is then evaluated by using the number of retained second detection frames. In this way, the detection precision of the obtained defect detection model can be better evaluated, and the evaluation result is used to decide whether to output the obtained defect detection model, so that a defect detection model with higher detection precision is obtained in training.
Further, referring to fig. 6, fig. 6 is a flowchart illustrating a defect detection model training method according to another embodiment of the application. In the present embodiment, the test sample images are labeled with second truth boxes corresponding to at least one type of defect, respectively.
The step S504 performs performance evaluation on the defect detection model by using the number of the reserved second detection frames, and further includes:
S601: and obtaining a second cross ratio between the reserved second detection frame and the corresponding second truth frame.
And after the position and/or the size of the second detection frame of the defect is judged to be in accordance with the theoretical characteristics of the defect and the second detection frame is reserved, further calculating a second cross-correlation ratio between the reserved second detection frame and the corresponding second truth frame. It should be noted that, in step S601, the second cross-correlation between the second truth boxes with the same defect type as the defect type corresponding to the second detection box is calculated.
S602: and determining the accuracy and recall of the defect detection model by using at least part of the second cross-over ratio.
After the second intersection ratio between each retained second detection frame and the corresponding second truth frame is calculated, the accuracy and recall of the defect detection model are further determined by using at least part of the second intersection ratios.
Further, in an embodiment, after the second intersection ratios are obtained, the second intersection ratios greater than or equal to a preset second selection threshold are selected, and the accuracy and recall of the defect detection model are determined by using those selected second intersection ratios.
S603: and obtaining a performance evaluation value of the defect detection model according to the second cross ratio, the recall rate and the accuracy rate so as to evaluate the defect detection model according to the performance evaluation value.
According to the calculated second intersection ratio, recall rate and accuracy rate, it is further judged whether the performance of the current new defect detection model meets the requirement, and thus whether the training of the defect detection model can be stopped and the current defect detection model output as the final defect detection model.
Here, the accuracy indicates how many of the samples predicted to be positive are truly positive. A positive prediction has two possibilities: a positive class predicted as positive (TP, true positive), or a negative class predicted as positive (FP, false positive). That is, in the present embodiment, the accuracy is equal to the ratio of the number of retained second detection frames to the total number of second detection frames, with the calculation formula: accuracy = number of retained second detection frames / total number of second detection frames.
The recall is defined with respect to the original samples and indicates how many of the positive samples were predicted correctly. There are likewise two possibilities: an original positive class predicted as positive (TP), or an original positive class predicted as negative (FN, false negative). That is, in the present embodiment, the recall is equal to the ratio of the number of retained second detection frames to the total number of second truth frames, i.e. recall = number of retained second detection frames / total number of second truth frames. It should be noted that in other embodiments the accuracy and the recall are also referred to as precision and recall.
Further, a mean average precision, i.e. mAP (mean average precision), is calculated based on the second intersection ratios, the recall rate and the accuracy rate. Specifically, the average precision of each defect type is calculated as AP = ∫₀¹ P(R) dR, where P and R are the accuracy rate and the recall rate respectively, and the mAP is the mean of the AP values over all defect types.
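Under the count-based definitions above, the precision, the recall, and a discrete approximation of the AP integral might be computed as follows. This is a sketch under the stated assumptions; the function names are not from the patent.

```python
def precision_recall(num_retained, num_detections, num_truth):
    """Precision and recall as defined in the text: counts of retained
    second detection frames versus all detections / all truth frames."""
    precision = num_retained / num_detections
    recall = num_retained / num_truth
    return precision, recall

def average_precision(precisions, recalls):
    """Approximate AP = integral of P over R by summing P * delta-R over
    the (recall, precision) points sorted by recall - a common discrete
    approximation of the integral."""
    ap, prev_r = 0.0, 0.0
    for r, p in sorted(zip(recalls, precisions)):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

For instance, retaining 8 of 10 detections against 16 truth frames gives precision 0.8 and recall 0.5; the AP is then the area under the resulting precision-recall points.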
It should be noted that, in order to evaluate the defect detection model more comprehensively, the defect types contained in the plurality of test sample images should at least cover all the defect types present in the training sample images used for training.
It should be noted that, in some embodiments, the step of updating the parameters of the target detection network by using the network loss value is defined as performing gradient back-propagation with the network loss value to update the parameters of the target detection network; further, when the target detection network is an SSD network, this step can also be understood as performing gradient back-propagation with the network loss value to update the parameters of the SSD network.
Referring to fig. 7, fig. 7 is a flowchart of a defect detection model training method according to another embodiment of the application. In the present embodiment, the determining the network loss value based on the target detection box and the first truth box in the step S140 further includes:
S701: and obtaining a position loss value according to the position difference between the target detection frame and the corresponding first truth frame.
After the target detection frames are selected, the network loss value between the target detection frames and the first truth frames is calculated, where the network loss value at least includes a position loss value and a confidence loss value.
Specifically, the position loss value is computed with a Smooth L1 loss function.
S702: and obtaining a confidence loss value according to the confidence of the target detection frame in the first detection result.
Wherein the confidence loss value is computed with a cross-entropy loss function.
S703: and obtaining a network loss value based on the position loss value and the confidence loss value.
After the position loss value and the confidence loss value are respectively obtained, they are weighted and summed, and the weighted sum is used as the network loss value, so that the parameters of the target detection network are updated accordingly to obtain the final defect detection model. When the position loss value and the confidence loss value are weighted and summed, the weight ratio between them may be set according to empirical values and is not limited herein.
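Steps S701 to S703 can be sketched in scalar form: Smooth L1 for the position loss, cross-entropy for the confidence loss, and a weighted sum of the two. The default weights of 1.0 are assumptions, since the patent leaves the weight ratio to empirical tuning.

```python
import math

def smooth_l1(pred, truth):
    """Smooth L1 position loss, averaged over box coordinates: quadratic
    for small errors (|d| < 1), linear for large ones."""
    total = 0.0
    for p, t in zip(pred, truth):
        d = abs(p - t)
        total += 0.5 * d * d if d < 1.0 else d - 0.5
    return total / len(pred)

def cross_entropy(probs, label):
    """Cross-entropy confidence loss for one box, given the predicted
    class-probability vector and the truth label index."""
    return -math.log(probs[label])

def network_loss(loc_loss, conf_loss, loc_weight=1.0, conf_weight=1.0):
    """Step S703: weighted sum of the two losses (weights are empirical;
    the 1.0 defaults here are assumptions)."""
    return loc_weight * loc_loss + conf_weight * conf_loss
```

In practice these would be computed over batches of target detection frames; the scalar form is only meant to mirror the three steps.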
With the method provided by the application, a defect detection model applicable to detecting various types of defects of a given product can be trained; only the corresponding defect data to be detected need to be provided for training.
Referring to fig. 8, fig. 8 is a flow chart of a defect detection method according to an embodiment of the application. In the current embodiment, the method provided by the application comprises the following steps:
s810: and acquiring an image to be detected, which is obtained by shooting the object to be detected.
When detecting an object to be detected, firstly shooting the object to be detected, and acquiring an image to be detected obtained by shooting the object to be detected.
Further, in another embodiment, in order to detect the object to be detected more accurately, step S810 acquires a plurality of images to be detected at different angles, obtained by shooting the object to be detected from multiple angles. When multi-angle detection is performed, the shooting angle is further marked in each image to be detected as it is captured, so that in the subsequent defect detection the defects can be determined according to the shooting angle. After the defect detection result is obtained, the result is output together with the shooting angle, so that the user can quickly locate the defects according to the detection result, or combine the detected defects with the shooting angle and compare them with the theoretical characteristics of the defects to judge the accuracy of the defect detection.
S820: and performing defect detection on the image to be detected by using the defect detection model to obtain a defect detection result.
Inputting the image to be detected into a defect detection model, detecting the defect of the image to be detected by using the defect detection model to obtain a defect detection result, and outputting the obtained defect detection result.
Wherein the object to be detected is a bottle cap, and/or the defect detection model is a final defect detection model obtained by training the defect detection model training method in any one of the embodiments shown in fig. 1 to 7.
Further, in another embodiment, please continue to refer to fig. 8, after detecting the defect of the image to be detected by using the defect detection model and obtaining the defect detection result in step S820, the method provided by the present application further includes:
S830: and classifying the bottle caps according to the types of defects contained in the defect detection results.
After defect detection is performed on the image to be detected by using the defect detection model and a defect detection result is obtained, the bottle caps are classified according to the types of defects contained in the defect detection result. Specifically, the defect detection result is output to the processor, and the processor determines the classification operation for the bottle cap according to the defect type contained in the result and generates the corresponding control instruction, so that the bottle cap is output to the corresponding station or work line according to its defect type for subsequent processing of the defect.
Further, the sorting process refers to determining the station or work line to which a bottle cap is output according to its defect type, and feeding this back to the processor so as to generate the corresponding operation instruction; bottle caps with different types of defects are thereby output to different stations or work lines, where the defects are processed and qualified bottle caps are obtained.
The defect detection result includes the following types of defects: at least one of bottle cap damage, bottle cap deformation, bottle cap edge breakage, bottle cap spinning, bottle cap breakpoint and code spraying abnormality.
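The classification of step S830 amounts to a lookup from defect type to output station. The following sketch is purely illustrative: the defect-type keys and station names are assumptions, not part of the patent.

```python
# Hypothetical routing table: defect types from the detection result mapped
# to station identifiers (illustrative names only).
STATION_BY_DEFECT = {
    "damage": "rework_1",
    "deformation": "rework_1",
    "edge_breakage": "rework_2",
    "spinning": "rework_2",
    "breakpoint": "rework_2",
    "code_spray_abnormal": "reprint",
}

def route_cap(defect_types):
    """Step S830 sketch: pick the output station for a cap from the defect
    types in its detection result; caps with no listed defect go to the
    qualified line."""
    for d in defect_types:
        if d in STATION_BY_DEFECT:
            return STATION_BY_DEFECT[d]
    return "qualified"
```

In a real line the routing decision would be issued as a control instruction by the processor, as described above.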
Compared with the prior art, the technical scheme provided by the application can detect defects using only images shot by a camera, without additional image processing technology; the method is therefore simple, feasible, and can be rapidly applied in factories.
In the technical schemes provided in any of the embodiments corresponding to fig. 1 to 7 and fig. 8 of the present application, a deep learning technology is adopted and combined with the visual features of images: an end-to-end deep neural network is trained on RGB images so as to accurately classify and locate bottle cap defects. The defect detection method based on deep learning provided by the application can be adapted to recognize various defect types by replacing the training sample image data and a small number of parameters, and can therefore be easily updated when deployed in products.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a defect detection model training device according to an embodiment of the application. In the present embodiment, the defect detection model training device 900 provided by the present application includes a processor 901 and a memory 902 coupled to each other. The defect detection model training apparatus 900 may perform the defect detection model training method described in any one of fig. 1 to 7 and the corresponding embodiments.
The memory 902 includes a local storage (not shown) and stores a computer program that, when executed, implements the methods described in any of the embodiments of fig. 1-7 and corresponding thereto.
The processor 901 is coupled to the memory 902, and the processor 901 is configured to run a computer program to perform the defect detection model training method as described above in any one of fig. 1-7 and their corresponding embodiments.
Further, in another embodiment, the defect detection model training apparatus 900 further includes an image acquisition unit (not shown). The image acquisition unit is connected with the processor 901 and is used for shooting and acquiring an original image or a test sample image or a training sample image under the control of the processor 901.
Further, in another embodiment, the defect detection model training apparatus 900 further includes a communication circuit (not shown), and the communication circuit is connected to the processor 901 and is used for performing data interaction with an external terminal device under the control of the processor 901 to obtain a training sample image or an original image or a test sample image, where the external terminal device may include a photographing device or a mobile terminal, etc.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a defect detecting device according to an embodiment of the application. In the present embodiment, the defect detecting device 1000 provided by the present application includes a processor 1001 and a memory 1002 coupled to each other. The defect detection apparatus 1000 may perform the defect detection method described in fig. 8 and its corresponding embodiments.
The memory 1002 includes a local storage (not shown), and stores a computer program that, when executed, implements the method described in any of the embodiments of fig. 8 and corresponding figures.
The processor 1001 is coupled to the memory 1002, and the processor 1001 is configured to execute a computer program to perform the defect detection method as described in any of the embodiments of fig. 8 and corresponding thereto.
Further, in another embodiment, the defect detecting device 1000 further includes an image capturing unit (not shown). The image acquisition unit is connected to the processor 1001 and is configured to capture and acquire an object to be detected under the control of the processor 1001, so as to acquire an image to be detected.
Further, in another embodiment, the defect detecting apparatus 1000 further includes a communication circuit (not shown), and the communication circuit is connected to the processor 1001 and is used for performing data interaction with an external terminal device under the control of the processor 1001 to obtain an image to be detected, where the external terminal device may include a photographing device or a mobile terminal, etc.
Referring to fig. 11, fig. 11 is a schematic diagram of a computer storage medium according to an embodiment of the application. The computer storage medium 1100 stores a computer program 1101 executable by a processor, where the computer program 1101 is configured to implement the defect detection model training method described in any one of fig. 1 to 7 and the corresponding embodiments, or to implement the defect detection method described in fig. 8 and its corresponding embodiments. Specifically, the computer storage medium 1100 may be a memory, a personal computer, a server, a network device, a USB flash drive, or the like, which is not particularly limited herein.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the present application.
Claims (13)
1. A method of training a defect detection model, the method comprising:
Acquiring at least one training sample image, wherein the training sample image is marked with at least one first truth box corresponding to at least one type of defect respectively;
detecting the training sample image by using a target detection network to obtain a first detection result, wherein the first detection result comprises a first detection frame corresponding to the defect;
Respectively calculating the distance between each first detection frame and the corresponding first truth frame, wherein the distance comprises a loss distance, so that the accuracy between the first detection frame and the corresponding first truth frame is judged according to the loss distance;
selecting a preset number of first detection frames with the minimum distance from the first detection frames as candidate detection frames;
calculating a first intersection ratio between each candidate detection frame and the corresponding first truth frame;
acquiring the mean value and the variance value of the first intersection ratios between the preset number of candidate detection frames and the corresponding first truth frames;
taking the sum of the mean value and the variance value as a first selected threshold value of a target detection frame;
Selecting the candidate detection frames with the first intersection ratio being greater than or equal to the first selection threshold as the target detection frames;
And determining a network loss value based on the target detection frame and the first truth frame, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model.
2. The method according to claim 1, wherein said separately calculating the distance between each of said first detection frames and the corresponding first truth frame comprises:
And respectively acquiring the distance between the center point of each first detection frame and the center point of the corresponding first truth frame to serve as the distance between the first detection frame and the first truth frame.
3. The method according to claim 1, wherein after updating the parameters of the target detection network with the network loss value to obtain the final defect detection model, the method further comprises:
inputting a test sample image into the defect detection model to obtain a second detection result of the test sample image, wherein the second detection result comprises second detection boxes corresponding to defects;
judging whether the position and/or size of each second detection box conforms to the theoretical characteristics of the corresponding defect;
if so, retaining the second detection box; otherwise, discarding the second detection box; and
performing a performance evaluation of the defect detection model using the retained second detection boxes.
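The plausibility check of claim 3 might look like the following sketch; the concrete area and aspect-ratio limits stand in for the unspecified "theoretical characteristics" of a defect and are purely illustrative:

```python
def conforms(box, min_area=4.0, max_area=10000.0, max_aspect=5.0):
    """Check whether a box's size matches a defect's expected characteristics.

    The thresholds are hypothetical; claim 3 leaves them to the implementer.
    """
    w, h = box[2] - box[0], box[3] - box[1]
    if w <= 0 or h <= 0:
        return False
    aspect = max(w, h) / min(w, h)
    return min_area <= w * h <= max_area and aspect <= max_aspect

def filter_detections(boxes, **limits):
    """Retain only the second detection boxes that pass the check."""
    return [b for b in boxes if conforms(b, **limits)]
```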
4. The method according to claim 3, wherein the test sample image is annotated with second truth boxes each corresponding to at least one type of defect;
and said performing a performance evaluation of the defect detection model using the retained second detection boxes comprises:
obtaining a second IoU ratio between each retained second detection box and the corresponding second truth box;
determining an accuracy rate and a recall rate of the defect detection model using at least a portion of the second IoU ratios; and
obtaining a performance evaluation value of the defect detection model according to the second IoU ratios, the recall rate, and the accuracy rate, so as to evaluate the defect detection model according to the performance evaluation value.
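Claim 4 does not disclose how the IoU ratios, recall rate, and accuracy rate are combined into a single evaluation value. The sketch below illustrates one plausible combination (mean IoU times the F1 score); it is an assumption, not the patented formula:

```python
def evaluate(ious, num_truth, iou_thresh=0.5):
    """Precision, recall, and a combined score from the second IoU ratios.

    The combined score (mean IoU times F1) is an illustrative stand-in for
    the undisclosed evaluation formula of claim 4.
    """
    tp = sum(1 for v in ious if v >= iou_thresh)  # detections matched to truth
    precision = tp / len(ious) if ious else 0.0
    recall = tp / num_truth if num_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    mean_iou = sum(ious) / len(ious) if ious else 0.0
    return precision, recall, mean_iou * f1
```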
5. The method according to claim 4, wherein said determining a network loss value based on the target detection boxes and the first truth boxes comprises:
obtaining a position loss value according to the position difference between each target detection box and the corresponding first truth box;
obtaining a confidence loss value according to the confidence of the target detection box in the first detection result; and
obtaining the network loss value based on the position loss value and the confidence loss value.
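A minimal sketch of claim 5's loss composition, assuming a smooth-L1 position loss and a negative-log-likelihood confidence loss; the claim names neither, so both choices and the `weight` parameter are assumptions:

```python
import math

def smooth_l1(pred, target, beta=1.0):
    """Position loss from box-coordinate differences.

    Smooth-L1 is an assumption; the claim only says 'position difference'.
    """
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total

def network_loss(pred_box, truth_box, confidence, weight=1.0):
    """Combine position loss and confidence loss into one network loss value."""
    position_loss = smooth_l1(pred_box, truth_box)
    # confidence loss as negative log-likelihood of a positive box
    confidence_loss = -math.log(max(confidence, 1e-12))
    return position_loss + weight * confidence_loss
```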
6. The method according to claim 1, wherein said acquiring at least one training sample image further comprises:
acquiring an original training sample image, wherein the original training sample image is annotated with at least one first truth box corresponding to at least one type of defect; and
performing data enhancement processing on the original training sample image to obtain a new training sample image.
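One common form of data enhancement consistent with claim 6 is a horizontal flip with remapping of the truth boxes; the patent does not name specific transforms, so this is purely illustrative:

```python
import numpy as np

def hflip_augment(image, boxes):
    """Horizontally flip an image and its (x1, y1, x2, y2) truth boxes."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()
    # a box's x-extent mirrors about the image width; y is unchanged
    new_boxes = [(w - x2, y1, w - x1, y2) for (x1, y1, x2, y2) in boxes]
    return flipped, new_boxes
```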
7. The method according to claim 6, wherein said performing data enhancement processing on the original training sample image to obtain a new training sample image further comprises:
acquiring a defect-free image, and migrating the defect features of the original training sample image into the defect-free image using a style transfer network, so as to obtain a new training sample image.
8. The method according to claim 1, wherein the target detection network comprises an SSD network, and the first detection boxes are the detection boxes detected by each convolutional layer of the target detection network.
9. A defect detection method, the method comprising:
acquiring an image to be detected, obtained by photographing an object to be detected; and
performing defect detection on the image to be detected using a defect detection model to obtain a defect detection result;
wherein the object to be detected is a bottle cap, and the defect detection model is a model trained by the method according to any one of claims 1 to 8.
10. The method according to claim 9, wherein after performing defect detection on the image to be detected using the defect detection model to obtain the defect detection result, the method further comprises:
classifying the bottle caps according to the types of defects contained in the defect detection result;
and/or the types of defects contained in the defect detection result comprise at least one of: bottle cap damage, bottle cap deformation, bottle cap edge chipping, bottle cap spinning, bottle cap break points, and code-spraying abnormality.
11. A defect detection model training apparatus, wherein the apparatus comprises a memory and a processor coupled to each other, wherein
the memory comprises local storage and stores a computer program; and
the processor is configured to run the computer program to perform the method according to any one of claims 1 to 8.
12. A defect detection apparatus, wherein the apparatus comprises a memory and a processor coupled to each other, wherein
the memory comprises local storage and stores a computer program; and
the processor is configured to run the computer program to perform the method according to any one of claims 9 to 10.
13. A computer storage medium storing a computer program executable by a processor to implement the method according to any one of claims 1 to 8 or claims 9 to 10.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010573557.XA CN111814850B (en) | 2020-06-22 | 2020-06-22 | Defect detection model training method, defect detection method and related device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111814850A CN111814850A (en) | 2020-10-23 |
| CN111814850B true CN111814850B (en) | 2024-10-18 |
Family
ID=72845400
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010573557.XA Active CN111814850B (en) | 2020-06-22 | 2020-06-22 | Defect detection model training method, defect detection method and related device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111814850B (en) |
Families Citing this family (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114723651B (en) * | 2020-12-22 | 2025-03-28 | 东方晶源微电子科技(北京)股份有限公司 | Defect detection model training method and defect detection method, device and equipment |
| CN112634254A (en) * | 2020-12-29 | 2021-04-09 | 北京市商汤科技开发有限公司 | Insulator defect detection method and related device |
| CN112712119B (en) * | 2020-12-30 | 2023-10-24 | 杭州海康威视数字技术股份有限公司 | Method and device for determining detection accuracy of target detection model |
| CN113034449B (en) * | 2021-03-11 | 2023-12-15 | 深圳市优必选科技股份有限公司 | Target detection model training method and device and communication equipment |
| CN113095400A (en) * | 2021-04-09 | 2021-07-09 | 安徽芯纪元科技有限公司 | Deep learning model training method for machine vision defect detection |
| CN113239975B (en) * | 2021-04-21 | 2022-12-20 | 国网甘肃省电力公司白银供电公司 | Target detection method and device based on neural network |
| CN113344847B (en) * | 2021-04-21 | 2023-10-31 | 安徽工业大学 | A binder clip defect detection method and system based on deep learning |
| CN113283485A (en) * | 2021-05-14 | 2021-08-20 | 上海商汤智能科技有限公司 | Target detection method, training method of model thereof, related device and medium |
| CN113298793B (en) * | 2021-06-03 | 2023-11-24 | 中国电子科技集团公司第十四研究所 | A method for circuit board surface defect detection based on multi-view template matching |
| CN113408631A (en) * | 2021-06-23 | 2021-09-17 | 佛山缔乐视觉科技有限公司 | Method and device for identifying style of ceramic sanitary appliance and storage medium |
| CN113255590A (en) * | 2021-06-25 | 2021-08-13 | 众芯汉创(北京)科技有限公司 | Defect detection model training method, defect detection method, device and system |
| CN114331949B (en) * | 2021-09-29 | 2025-07-22 | 腾讯科技(上海)有限公司 | Image data processing method, computer device and readable storage medium |
| CN113673488B (en) * | 2021-10-21 | 2022-02-08 | 季华实验室 | Target detection method and device based on few samples and intelligent object sorting system |
| CN114219962A (en) * | 2021-12-29 | 2022-03-22 | 北京三快在线科技有限公司 | Model training and target detection method and device, storage medium and electronic equipment |
| CN114492589A (en) * | 2021-12-29 | 2022-05-13 | 浙江大华技术股份有限公司 | Model training method, target detection method, terminal device, and computer medium |
| CN114743180A (en) * | 2022-04-25 | 2022-07-12 | 中国第一汽车股份有限公司 | Detection result identification method and device, storage medium and processor |
| CN115147353B (en) * | 2022-05-25 | 2024-08-20 | 腾讯科技(深圳)有限公司 | Training method, device, equipment, medium and program product of defect detection model |
| CN114882206B (en) * | 2022-06-21 | 2025-07-22 | 上海商汤临港智能科技有限公司 | Image generation method, model training method, detection method, device and system |
| CN117710944B (en) * | 2024-02-05 | 2024-06-25 | 虹软科技股份有限公司 | Model defect detection method, model training method, target detection method and target detection system |
| CN118246511B (en) * | 2024-05-20 | 2024-11-22 | 合肥市正茂科技有限公司 | A training method, system, device and medium for vehicle detection model |
| CN119624936B (en) * | 2024-12-06 | 2025-10-28 | 西安电子科技大学 | Automatic detection method of image defects in fiber optic gyroscope assembly process based on quantitative analysis |
| CN120747059B (en) * | 2025-08-25 | 2025-12-23 | 中电信人工智能科技(北京)有限公司 | Automatic model parameter adjustment method and device |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109409517A (en) * | 2018-09-30 | 2019-03-01 | 北京字节跳动网络技术有限公司 | The training method and device of object detection network |
| CN110503112A (en) * | 2019-08-27 | 2019-11-26 | 电子科技大学 | A Small Target Detection and Recognition Method Based on Enhanced Feature Learning |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110619618B (en) * | 2018-06-04 | 2023-04-07 | 杭州海康威视数字技术股份有限公司 | Surface defect detection method and device and electronic equipment |
| CN108960174A (en) * | 2018-07-12 | 2018-12-07 | 广东工业大学 | A kind of object detection results optimization method and device |
| CN110889421A (en) * | 2018-09-07 | 2020-03-17 | 杭州海康威视数字技术股份有限公司 | Target detection method and device |
| CN109117831B (en) * | 2018-09-30 | 2021-10-12 | 北京字节跳动网络技术有限公司 | Training method and device of object detection network |
| CN109829893B (en) * | 2019-01-03 | 2021-05-25 | 武汉精测电子集团股份有限公司 | A Defective Object Detection Method Based on Attention Mechanism |
| CN110111332A (en) * | 2019-05-20 | 2019-08-09 | 陕西何止网络科技有限公司 | Collagent casing for sausages defects detection model, detection method and system based on depth convolutional neural networks |
| CN110163858A (en) * | 2019-05-27 | 2019-08-23 | 成都数之联科技有限公司 | A kind of aluminium shape surface defects detection and classification method and system |
| CN110503095B (en) * | 2019-08-27 | 2022-06-03 | 中国人民公安大学 | Positioning quality evaluation method, positioning method and device of target detection model |
| CN111161233A (en) * | 2019-12-25 | 2020-05-15 | 武汉科技大学 | Method and system for detecting defects of punched leather |
- 2020-06-22: CN application CN202010573557.XA filed; granted as patent CN111814850B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN111814850A (en) | 2020-10-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111814850B (en) | Defect detection model training method, defect detection method and related device | |
| CN111179251B (en) | Defect detection system and method based on twin neural network and by utilizing template comparison | |
| US10878283B2 (en) | Data generation apparatus, data generation method, and data generation program | |
| KR102166458B1 (en) | Defect inspection method and apparatus using image segmentation based on artificial neural network | |
| CN111310826B (en) | Annotation anomaly detection method, device and electronic equipment for sample set | |
| CN110415214A (en) | Appearance detecting method, device, electronic equipment and the storage medium of camera module | |
| CN118967672A (en) | Industrial defect detection method, system, device and storage medium | |
| CN112131936A (en) | Inspection robot image identification method and inspection robot | |
| CN114743102B (en) | Flaw detection method, system and device for furniture plate | |
| KR20190075707A (en) | Method for sorting products using deep learning | |
| CN109816634B (en) | Detection method, model training method, device and equipment | |
| KR102141302B1 (en) | Object detection method based 0n deep learning regression model and image processing apparatus | |
| US20160371568A1 (en) | Material classification using multiview capture | |
| Han et al. | SSGD: A smartphone screen glass dataset for defect detection | |
| CN117635603B (en) | System and method for detecting on-line quality of hollow sunshade product based on target detection | |
| CN119091236B (en) | Ceramic packaging substrate detection method and system based on visual inspection and meta-learning | |
| CN116934737B (en) | A method for identifying and classifying weld combination defects | |
| CN114255339A (en) | A method, device and storage medium for identifying breakpoints of power transmission wires | |
| CN115294039A (en) | Steel coil end surface defect detection method | |
| CN113657423A (en) | Target detection method suitable for small-volume parts and stacked parts and application thereof | |
| CN110533629A (en) | A kind of detection method and detection device of Bridge Crack | |
| CN119130912A (en) | Strip steel surface defect detection method and system based on depth map | |
| CN112750113B (en) | Glass bottle defect detection method and device based on deep learning and linear detection | |
| CN115358981B (en) | Method, device, equipment and storage medium for determining glue defects | |
| KR102723094B1 (en) | Apparatus for inspecting products |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |