CN112816408B - Flaw detection method for optical lens - Google Patents
Flaw detection method for optical lens
- Publication number
- CN112816408B (application CN202011598180.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- optical lens
- flaw
- training
- flaw detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G01—MEASURING; TESTING; G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES; G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/01—Arrangements or apparatus for facilitating the optical investigation
- G01N21/84—Systems specially adapted for particular applications; G01N21/88—Investigating the presence of flaws or contamination; G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined; G01N21/958—Inspecting transparent materials or objects, e.g. windscreens
- G01N2021/0106—General arrangement of respective parts; G01N2021/0112—Apparatus in one mechanical, optical or electronic block
- G01N2021/8887—Scan or image signal processing based on image processing techniques
- G01N2021/9583—Lenses
Abstract
The invention discloses a flaw detection method for an optical lens, which detects flaws such as scratches, burrs, bubbles, gaps and threads on the optical lens through deep learning and computer vision technology. The invention provides a highly robust optical lens flaw detection technique whose innovation lies in its high robustness and accuracy, allowing it to be applied to flaw detection in different scenes. The method comprises the following steps: collecting flaw data of the optical lens through an input unit and a database; analyzing and processing data: performing data preprocessing on the data set according to the known data; designing a deep learning model: establishing an end-to-end convolutional neural network model by inputting a training set; designing a training strategy to increase the accuracy and robustness of the model; and inputting the test set into the trained model to obtain a prediction result.
Description
Technical Field
The invention relates to a flaw detection method in the field of computer vision deep learning, in particular to a flaw detection method for an optical lens.
Background
The manufacture of optical glass must be verified with optical instruments to confirm that purity, transparency, uniformity, refractive index and dispersion are within specification. During processing, flaws such as scratches, burrs, bubbles, gaps and threads are extremely easily formed, and at present most domestic enterprises still use manual inspection, which is inefficient and of low detection quality and cannot meet ever-increasing industrial demands. Therefore, most domestic enterprises need a highly robust flaw detection method.
While previous studies have proposed many solutions to the problem of optical lens flaw detection, none of them solves the problem well. Several solutions are disclosed in the prior art, including:
Publication No. CN204666534U discloses an optical lens detection system, providing a device for automatic detection of optical lens flaws; the novel detection system improves detection accuracy by stabilizing the position of the lens under test and avoiding excessive condensation.
Publication No. CN207764138U discloses an automatic optical lens flaw detection device. It addresses the technical problems that prior-art devices easily miss lenses with inconspicuous flaw characteristics, that detection quality is unstable, and that the imaging auxiliary light screen cannot be freely adjusted to the thickness and size of the lens during detection, so that imaging is unclear and incomplete, indirectly affecting flaw detection accuracy.
At present, lens defect detection is basically approached through innovation and improvement of the mechanical structure of the detection device; the optical lens defect data is not analyzed, no data augmentation is applied to the defect data, and no highly robust detection model is established to improve the accuracy of lens defect detection.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention analyzes and processes massive optical lens pictures, performs data augmentation on the pictures, extracts flaw feature points and learns flaw feature representation with high robustness. The specific technical method is as follows:
a flaw detection method of an optical lens comprises the following steps:
step 1: collecting training data: collecting flaw data of the optical lens through equipment and a system;
step 2: analyzing and processing data: according to the known data, carrying out data preprocessing on the data set;
step 3: designing a deep learning model: establishing an end-to-end optimal training model by inputting a training set;
step 4: designing a training strategy: the accuracy and the robustness of the model are improved;
step 5: and (3) result detection: and inputting the test set into a training model to obtain a prediction result.
Further, the apparatus of step 1 includes:
(1) An input unit configured to collect real-time data from one or more sources;
(2) A database configured to store real-time and offline data.
Further, the device in the step 1 is an optical sensor.
Further, in the step 2, preprocessing is performed on the data to construct a feature vector, which specifically includes:
a) Processing the missing values by using an average value filling method;
b) Carrying out feature coding on the data, and normalizing the feature vector to between 0 and 1;
c) Constructing the feature vector [d, x1, x2, ..., xn, y], where d represents the dimension of the data and x1, x2, ..., xn are the feature point information of the data.
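A minimal sketch of steps a)–c) above, assuming NumPy arrays; the function name and data layout are illustrative assumptions, not from the patent:

```python
import numpy as np

def build_feature_vectors(data, labels):
    """Sketch of step 2: mean imputation, min-max normalization,
    and construction of [d, x1, ..., xn, y] vectors."""
    data = np.asarray(data, dtype=float)
    # a) fill missing values (NaN) with the column mean
    col_mean = np.nanmean(data, axis=0)
    idx = np.where(np.isnan(data))
    data[idx] = np.take(col_mean, idx[1])
    # b) normalize each feature to [0, 1]
    lo, hi = data.min(axis=0), data.max(axis=0)
    data = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
    # c) assemble [d, x1, ..., xn, y] for each sample
    d = data.shape[1]
    return [np.concatenate(([d], x, [y])) for x, y in zip(data, labels)]
```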
Further, in step 2, various transformations are randomly applied to the optical lens image data using a data augmentation method, enlarging the training data.
Further, in step 3, a convolutional neural network model is constructed and image flaw feature points are extracted for cluster analysis, so that the distance between feature points of the same flaw decreases and the distance between feature points of different flaws increases.
Further, the step 4 specifically includes:
a) Setting a warmup learning-rate optimization method;
b) Setting a label smoothing (Label Smoothing) algorithm for image labels;
c) The last convolution layer step size is set to 1.
Advantageous effects of the invention
The beneficial effects are that: the technical scheme adopted by the invention is as follows:
(1) And the data of the optical lens is amplified, so that the training model has good robustness.
(2) And extracting flaw feature points from flaw data of the optical lens by adopting a deep learning technology, and clustering and classifying the extracted feature points. Compared with the prior art, the method has the advantages that the accuracy and the robustness of flaw detection of the optical lens can be improved, and the method can meet different kinds of requirements in the industry.
Drawings
FIG. 1 is a flow chart of a method framework provided by an embodiment of the present application;
FIG. 2 is a block diagram of a deep neural network provided in an embodiment of the present application;
fig. 3 is an enlarged contrast chart of optical lens data according to an embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The flaw detection method of the optical lens comprises the following specific detection steps:
step S1: training data are collected, and the optical lens flaw image training data are collected through an optical camera and a microscope.
Step S2: analyzing and processing data. The image is detected and segmented, various transformations are randomly applied to the segmented optical lens image data using a data augmentation method, and the training data are enlarged. The specific implementation comprises the following steps:
In this embodiment, the original image is uniformly resized to 256×256 pixels;
In this embodiment, a blank border (padding) of 2px is added around the image;
In this embodiment, the optical lens image data are enlarged by an image augmentation method, as shown in figure 3 of the specification;
In this embodiment, a balanced sampler is set to extract each class of data, so that the number of samples of each class in each training batch is the same.
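The embodiment above can be sketched as follows; the nearest-neighbour resize, the flip standing in for "a random transform", and all function names are illustrative assumptions rather than the patent's implementation:

```python
import random
import numpy as np

def augment(img, out_size=256, pad=2):
    """Step-S2 sketch: resize to 256x256, add a 2px blank border,
    and randomly apply one transform (here, a horizontal flip)."""
    h, w = img.shape[:2]
    ys = np.arange(out_size) * h // out_size   # nearest-neighbour rows
    xs = np.arange(out_size) * w // out_size   # nearest-neighbour cols
    img = img[ys][:, xs]                       # resize to out_size
    img = np.pad(img, pad, mode="constant")    # 2px blank border
    if random.random() < 0.5:                  # random transformation
        img = np.fliplr(img)
    return img

def balanced_batches(samples, labels, per_class):
    """Balanced sampler: draw the same number of samples per class."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    return [random.sample(v, per_class) for v in by_class.values()]
```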
Since the embodiments of the present application involve many applications of convolutional neural networks, the related terms and concepts of convolutional neural networks are described below for ease of understanding.
A convolutional neural network is a deep neural network with a convolutional structure. It comprises a feature extractor consisting of convolutional layers and sub-sampling layers. The feature extractor can be seen as a filter, and the convolution process as convolving an input image or convolutional feature plane (feature map) with a trainable filter to output another feature map. A convolutional layer is a layer of neurons in the network that performs convolution on its input signal; in a convolutional layer, a neuron may be connected to only part of the neurons of the adjacent layer. A convolutional layer usually contains several feature planes, each of which may be composed of neurons arranged in a rectangle. Neurons of the same feature plane share weights, and the weight matrix corresponding to the shared weights is a convolution kernel. Sharing weights can be understood as making the way image information is extracted independent of location. The underlying principle is that the statistics of one part of an image are the same as those of the other parts, so image information learned in one part can also be used in another; the same learned image information can be used at all locations on the image. In the same convolutional layer, multiple convolution kernels may be used to extract different image information; in general, the greater the number of kernels, the richer the image information reflected by the convolution operation. A kernel can be initialized as a matrix of random values, and reasonable weights are obtained through learning during training of the convolutional neural network.
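A minimal illustration of the weight sharing described above: one and the same kernel matrix slides over every position of the image. This is a pure-NumPy sketch of the operation, not the patent's network:

```python
import numpy as np

def conv2d(image, kernel):
    """2-D convolution with valid padding; the shared kernel is the
    weight matrix applied identically at every location."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # same trainable weights reused at each position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```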
In addition, the direct benefit of sharing weights is to reduce the connections between layers of the convolutional neural network, while reducing the risk of overfitting.
(2) Loss function
When training a convolutional neural network, the output should be as close as possible to the value that is truly desired, so the weight vectors of each layer can be updated according to the difference between the network's current predicted value and the truly desired target value (of course, an initialization step usually precedes the first update, i.e. parameters are preconfigured for each layer). For example, if the network's predicted value is too high, the weight vectors are adjusted so that it predicts lower, and the adjustment continues until the network can predict the truly desired target value or a value very close to it. It is therefore necessary to define in advance how to compare the difference between the predicted value and the target value; this is the loss function (or objective function), an important equation for measuring that difference. Taking the loss function as an example, the higher its output value (loss), the larger the difference, so training the convolutional neural network becomes a process of reducing this loss as much as possible.
(3) Back propagation algorithm
During training, a convolutional neural network can use the back-propagation (BP) algorithm to correct its parameters, so that the error loss between the predicted value output by the network and the truly desired target value becomes smaller and smaller. Specifically, the input signal is propagated forward until the output produces an error loss, and the parameters of the initial network are updated by back-propagating the error-loss information, so that the error loss converges. The back-propagation algorithm is a backward pass dominated by the error loss, aiming to obtain the optimal parameters of the network, such as the weight matrices, i.e. the convolution kernels of the convolutional layers.
Step S3: the method comprises the steps of designing a deep learning model, extracting characteristic points from an optical lens image, carrying out cluster analysis on the flaw characteristic points, enabling the distance between the characteristic points of the same class to be reduced, enabling the distance between the characteristic points of different flaws to be increased, training flaw data of the optical lens by using a deep neural network algorithm, wherein the neural network consists of a series of convolutional neural networks and pooling layers, and comprises an input layer, a convolutional layer and an output layer, and the algorithm comprises two processes of forward propagation and backward propagation, as shown in an attached drawing 2 of the specification.
Step S4: designing a training strategy module. The training strategy improves the accuracy of the model and comprises the following submodules:
in the embodiment, a method for optimizing the learning rate of the wakeup is set, a smaller learning rate is selected at the beginning of model training, the learning rate is set to 0.000035, and after ten epochs are trained, the training is performed by using a learning rate of 0.00035;
in the embodiment, a Label Smoothing image Label algorithm is set to obtain confidence scores of the current input picture corresponding to each category, and then softMax is used for normalization processing to finally obtain the probability that the current input picture belongs to each category;
in this embodiment, the last convolution layer step size is set to 1, removing the fine granularity of the last operation-rich feature to be sampled.
In this embodiment, a loss function is set, and the neural network is trained using gradient descent and a triplet method, where the triplet loss pulls positive samples of the same class closer to the ground truth and pushes negative samples of different classes away from it. It specifically comprises the following:
(a) Cross entropy over the SoftMax output is calculated as a loss function (loss_cross) that characterizes the difference between the predicted value and the true value;
(b) The triplet loss (loss_tri), measuring the gap between feature maps, is calculated.
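The two losses can be sketched in their standard forms, offered here as an assumption since the patent text names them without spelling out the formulas; the margin value is likewise an assumed default:

```python
import numpy as np

def loss_cross(logits, target):
    """Cross-entropy over the SoftMax output: -log p(target)."""
    e = np.exp(logits - logits.max())
    p = e / e.sum()
    return -np.log(p[target])

def loss_tri(anchor, pos, neg, margin=0.3):
    """Triplet loss: pull same-class features together, push
    different-class features apart by at least `margin`."""
    dp = np.linalg.norm(anchor - pos)   # anchor-positive distance
    dn = np.linalg.norm(anchor - neg)   # anchor-negative distance
    return max(dp - dn + margin, 0.0)
```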
step S5, result detection: and inputting the test set into a training model to obtain a prediction result, wherein the prediction result is in the form of one-hot coding type.
The foregoing is merely specific embodiments of the present application, but the scope of protection of the present application is not limited thereto; any change or substitution that a person skilled in the art could easily conceive within the technical scope disclosed herein shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of the claims.
Claims (3)
1. The flaw detection method for the optical lens is characterized by comprising the following steps of:
step 1: collecting training data: collecting optical lens flaw data through an input unit and a database, wherein the input unit is an optical sensor configured to collect real-time data from one or more sources, and the database is configured to store real-time and offline data;
step 2: analyzing and processing data: according to the known data, the data set is subjected to data preprocessing, specifically:
a) Processing the missing values by using an average value filling method;
b) Carrying out feature coding on the data, and normalizing the feature vector to between 0 and 1;
c) Constructing the feature vector [d, x1, x2, ..., xn, y], where d represents the dimension of the data and x1, x2, ..., xn are the feature point information of the data;
step 3: designing a deep learning model: establishing an end-to-end convolutional neural network model by inputting a training set;
step 4: the training strategy is designed, the accuracy and the robustness of the model are improved, and the method specifically comprises the following steps:
a) Setting a warmup learning-rate optimization method;
b) Setting a label smoothing (Label Smoothing) algorithm for image labels;
c) Setting the step length of the last convolution layer to be 1;
step 5: and (3) result detection: and inputting the test set into a training model to obtain a prediction result.
2. The flaw detection method for an optical lens according to claim 1, wherein: in the step 2, various transformations are randomly applied to the optical lens image data by using a data augmentation method, and training data is enlarged.
3. The flaw detection method for an optical lens according to claim 1, wherein: and 3, constructing a convolutional neural network model, extracting flaw feature points of the picture, and performing cluster analysis to reduce the distance between the same flaw feature points and increase the distance between different flaw feature points.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011598180.XA CN112816408B (en) | 2020-12-29 | 2020-12-29 | Flaw detection method for optical lens |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112816408A CN112816408A (en) | 2021-05-18 |
| CN112816408B true CN112816408B (en) | 2024-03-26 |
Family
ID=75856125
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011598180.XA Active CN112816408B (en) | 2020-12-29 | 2020-12-29 | Flaw detection method for optical lens |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112816408B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115096912A (en) * | 2022-07-11 | 2022-09-23 | 心鉴智控(深圳)科技有限公司 | Lens flaw detection imaging method and system |
| CN115082416A (en) * | 2022-07-11 | 2022-09-20 | 心鉴智控(深圳)科技有限公司 | Lens defect detection method, device, equipment and storage medium |
| WO2024225804A1 (en) * | 2023-04-28 | 2024-10-31 | 엘지이노텍 주식회사 | Method and electronic device for predicting lens defects |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1980843A1 (en) * | 2007-04-13 | 2008-10-15 | Essilor International (Compagnie Generale D'optique) | Method and apparatus for detecting defects in optical components. |
| CN109064459A (en) * | 2018-07-27 | 2018-12-21 | 江苏理工学院 | A kind of Fabric Defect detection method based on deep learning |
| CN111220544A (en) * | 2020-01-19 | 2020-06-02 | 河海大学 | Lens quality detection device and detection method |
Non-Patent Citations (1)
| Title |
|---|
| Visual detection method for appearance flaws of optical lenses; Zhu Yudong; Chen Yuxue; Applied Optics; 2020-05-15 (03); full text * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |