CN111242214A - Small animal identification method based on image - Google Patents
- Publication number: CN111242214A
- Application number: CN202010030628.1A
- Authority: CN (China)
- Prior art keywords: layer, image, formula, sub, convolutional
- Prior art date: 2020-01-13
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention relates to the field of image recognition, and in particular to an image-based small animal recognition method.
Background Art
With the acceleration of economic globalization, more and more people keep pets, especially small animals, such as cats and dogs in China and lizards and snakes abroad. However, many people who do not keep small animals are unfamiliar with the living habits of these animals and may be accidentally injured while playing with them, so identifying a small animal and understanding its habits is very important. Convolutional neural networks are commonly used to recognize images of small animals, but training an ordinary convolutional neural network model requires a large amount of sample data, which is time-consuming and labor-intensive. A convolutional neural network model that needs only a few samples therefore brings great convenience and saves time and effort.
Summary of the Invention
Training a common convolutional neural network model requires a large amount of sample data; when the sample data are insufficient, overfitting occurs and the animal recognition rate drops. The invention provides an image-based small animal recognition method that mitigates the loss of model accuracy caused by overfitting on few samples and improves recognition accuracy.
To solve the above technical problem, the invention provides the following technical solution:
An image-based small animal recognition method comprises the following steps:
Step 1: Resize the image to 32×32;
Step 2: The input layer takes the length, width and depth of the image. The length and width represent the image size; the input is the 32×32 image obtained in Step 1, so the length is 32 and the width is 32. The depth represents the color channels of the image; the input uses the RGB three-color channels, so the depth of the color image is 3, i.e., the input image is 32×32×3;
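For illustration only, the following is a minimal preprocessing sketch in Python, assuming the Pillow and NumPy libraries are available; the file name `cat.jpg` and the helper name `load_as_32x32_rgb` are hypothetical, and the scaling of pixel values to [0, 1] is an added assumption not stated in the description.

```python
from PIL import Image
import numpy as np

def load_as_32x32_rgb(path):
    """Load an image, force RGB (depth 3), and resize it to 32x32 as in Steps 1-2."""
    img = Image.open(path).convert("RGB")          # RGB three-color channels
    img = img.resize((32, 32))                     # length 32, width 32
    x = np.asarray(img, dtype=np.float32) / 255.0  # assumed scaling to [0, 1]
    return x                                       # array of shape (32, 32, 3)

# Hypothetical usage:
# x = load_as_32x32_rgb("cat.jpg")
# print(x.shape)  # (32, 32, 3)
```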
Step 3: Build the convolutional neural network structure as follows:
Step 3-1: Convolutional layer C1 performs a convolution operation on the image matrix from the input layer, extracts features, and obtains feature maps through the action of the activation function. The feature map formula of the convolutional layer is shown in formula (1).
$x_j^l = f\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\right)$   (1)

In formula (1), $x_j^l$ denotes the jth neuron of layer l, $f$ is the activation function, $M_j$ denotes the set of feature maps output by the previous layer, $x_i^{l-1}$ denotes the ith neuron of layer l-1, * denotes the convolution operation, $k_{ij}^l$ denotes the convolution kernel matrix from the ith neuron of layer l-1 to the jth neuron of layer l, and $b_j^l$ denotes the bias of the jth neuron of layer l;
The activation function is the Relu function, shown in formula (2).
Relu(x) = max(0, x)   (2)
When the number of samples is smaller than the number of parameters, the sample matrix is very likely to be non-invertible, and introducing a regularization term solves this problem. The regularization term balances bias and variance, fitting ability and generalization ability, and empirical risk and structural risk. The L2 regularization method used is shown in formula (3).
$C = C_0 + \frac{\lambda}{2n}\sum_{w} w^2$   (3)

In formula (3), $C_0$ denotes the original cost function, $w$ the shared weights, $\lambda$ the given regularization coefficient, and $n$ the number of neurons; the coefficient of the regularization term is set to 1×10⁻⁴;
Convolutional layer C1 uses 32 convolution kernels of size 5×5 and applies L2 regularization to the weights, obtaining 32 feature maps of size 28×28×3;
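As an illustration, a layer such as C1 (32 kernels of size 5×5, the Relu activation of formula (2), and the L2 weight regularization of formula (3) with coefficient 1×10⁻⁴) could be written with the Keras API roughly as follows. This is a hedged sketch under the assumption that TensorFlow/Keras is used, not the patented implementation, and the padding choice and resulting feature map depth may differ from the figures stated above.

```python
import tensorflow as tf

# Sketch of convolutional layer C1: 32 kernels of size 5x5, Relu activation
# (formula (2)), and an L2 penalty on the shared weights with coefficient
# 1e-4, playing the role of the regularization term of formula (3).
conv_c1 = tf.keras.layers.Conv2D(
    filters=32,
    kernel_size=(5, 5),
    activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    input_shape=(32, 32, 3),  # the 32x32 RGB input of Step 2
)
```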
Step 3-2: Based on the local correlation principle of images, subsampling layer S1 aggregates the feature maps output by the convolutional layer over small neighboring regions, reducing features and parameters while retaining the useful information of the image. The Stochastic-pooling method is used: pixels are assigned probabilities according to their values, and subsampling is then performed according to these probabilities;
Subsampling layer S1 subsamples the feature maps produced by convolutional layer C1 in Step 3-1 over 2×2 regions with a stride of 1, obtaining 32 feature maps of size 14×14×3;
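Stochastic pooling is not a built-in layer in common deep learning frameworks, so the following NumPy sketch only illustrates the idea described above: within each 2×2 window, pixels receive probabilities proportional to their (non-negative, post-Relu) values and one pixel is sampled accordingly. The single-channel simplification and the window hop of 2 used in the example call (so that a 28×28 map becomes 14×14, matching the sizes stated above) are the editor's assumptions, not the invention's implementation.

```python
import numpy as np

def stochastic_pool_2x2(fm, hop=2, rng=None):
    """Stochastic pooling of a single-channel feature map over 2x2 windows.

    Each pixel in a window gets a probability proportional to its value and
    one pixel per window is sampled according to those probabilities, as
    described for subsampling layers S1 and S2.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = fm.shape
    out_h, out_w = (h - 2) // hop + 1, (w - 2) // hop + 1
    out = np.zeros((out_h, out_w), dtype=fm.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = fm[i * hop:i * hop + 2, j * hop:j * hop + 2].ravel().astype(np.float64)
            total = window.sum()
            if total == 0:               # all-zero window: output stays zero
                continue
            out[i, j] = rng.choice(window, p=window / total)
    return out

# Example: a 28x28 post-Relu feature map pooled to 14x14 (hop of 2 assumed).
fm = np.random.default_rng(0).random((28, 28)).astype(np.float32)
print(stochastic_pool_2x2(fm).shape)  # (14, 14)
```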
Step 3-3: Convolutional layer C2 extracts feature maps in the same way as convolutional layer C1 in Step 3-1; the only difference is that the convolution kernels of C2 are 3×3. Its feature map formula is given by formula (1), the activation function used is given by formula (2), and the L2 regularization method used is given by formula (3);
Convolutional layer C2 uses 32 convolution kernels of size 3×3 and applies L2 regularization to the weights, obtaining 1024 feature maps of size 12×12×3;
Step 3-4: Subsampling layer S2 uses the same method as subsampling layer S1 in Step 3-2; it subsamples the feature maps produced by convolutional layer C2 over 2×2 regions with a stride of 1, obtaining 1024 feature maps of size 6×6×3;
Step 3-5: Apply a flatten operation to the feature maps from Step 3-4 to produce a one-dimensional feature vector, which is fed into the fully connected layers. There are three fully connected layers (a code sketch of this stack follows the list below):
Fully connected layer F1 has size 256;
Fully connected layer F2 has size 128; the dropout method is then used to activate the neurons of this layer with a certain probability, with a dropout probability parameter of 0.5; the activation function of fully connected layer F2 is the Relu function given in formula (2);
Fully connected layer F3 has size 64; the dropout method is then used to activate the neurons of this layer with a certain probability, with a dropout probability parameter of 0.5; the activation function of fully connected layer F3 is the Relu function given in formula (2);
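A hedged Keras sketch of Step 3-5 (flatten, then fully connected layers F1, F2 and F3 with dropout at probability 0.5 after F2 and F3) might look as follows; the use of a Relu activation on F1 is an assumption, since the description does not state F1's activation.

```python
import tensorflow as tf

# Sketch of Step 3-5: flatten, then fully connected layers F1 (256),
# F2 (128) and F3 (64), with dropout (rate 0.5) applied after F2 and F3.
fc_stack = [
    tf.keras.layers.Flatten(),                      # one-dimensional feature vector
    tf.keras.layers.Dense(256, activation="relu"),  # F1 (activation assumed)
    tf.keras.layers.Dense(128, activation="relu"),  # F2, Relu as in formula (2)
    tf.keras.layers.Dropout(0.5),                   # dropout probability 0.5
    tf.keras.layers.Dense(64, activation="relu"),   # F3, Relu as in formula (2)
    tf.keras.layers.Dropout(0.5),                   # dropout probability 0.5
]
```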
Step 4: The output layer is a classifier whose role is to output the probability of each class, with the probability values of all classes summing to 1. The classifier function used is the soft-max function, given in formula (4):
$h(x^{(i)}) = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}$   (4)

In formula (4), $h(x^{(i)})$ denotes the probability that sample i belongs to the kth class, $z_k$ denotes the output-layer input (score) for class k, and the total number of classes is K.
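A minimal NumPy sketch of the soft-max of formula (4); the name `z` for the vector of output-layer scores is introduced here for illustration, and the subtraction of the maximum is an editorial choice for numerical stability rather than part of the description.

```python
import numpy as np

def softmax(z):
    """Soft-max of formula (4): class probabilities that sum to 1."""
    z = np.asarray(z, dtype=np.float64)
    e = np.exp(z - z.max())     # subtract the maximum for numerical stability
    return e / e.sum()

scores = [2.0, 1.0, 0.1]        # hypothetical output-layer scores for K = 3 classes
print(softmax(scores))          # approximately [0.659, 0.242, 0.099]; sums to 1
```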
The technical concept of the invention is as follows: considering that training a model is time-consuming and labor-intensive and requires a huge number of samples, introducing an L2 regularization term in the convolutional layers, using the Relu function, and adding the dropout method in the fully connected layers can effectively mitigate the loss of model accuracy caused by overfitting.
The beneficial effects of the invention are as follows: when the number of samples is smaller than the number of parameters, the sample matrix is very likely to be non-invertible, and introducing a regularization term in the convolutional layers solves this problem. The regularization term balances bias and variance, fitting ability and generalization ability, and empirical risk and structural risk, and therefore improves recognition accuracy. Using the Relu function as the activation function makes the output of some neurons zero, which makes the network sparse, reduces the interdependence of parameters, alleviates overfitting, and improves recognition accuracy.
Description of the Drawings
Figure 1 is the overall flow chart of the method.
Figure 2 is the flow chart of the CNN network.
Figure 3 is the structure diagram of the CNN network.
Detailed Description of the Embodiments
The invention is further described below with reference to the accompanying drawings.
Referring to Figures 1 to 3, the image-based small animal recognition method proceeds as shown in Figure 1 and is divided into four steps, of which the third step is the key one.
As shown in Figure 2, after the image is fed into the convolutional neural network, the input layer reduces the image to 32×32 and takes the RGB three-color channels as input, i.e., the input image is 32×32×3.
The 32×32×3 image is then fed into convolutional layer C1, which includes L2 regularization. Convolutional layer C1 has 32 convolution kernels of size 5×5; it performs a convolution operation on the image matrix from the input layer, extracts features, and, after the action of the activation function, obtains 32 feature maps of size 28×28×3. The activation function of convolutional layer C1 is the Relu function:
Relu(x) = max(0, x)
The feature map formula of the convolutional layer is formula (1):

$x_j^l = f\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\right)$
When the number of samples is smaller than the number of parameters, the sample matrix is very likely to be non-invertible, and introducing a regularization term solves this problem. The regularization term balances bias and variance, fitting ability and generalization ability, and empirical risk and structural risk. The L2 regularization formula used is formula (3):

$C = C_0 + \frac{\lambda}{2n}\sum_{w} w^2$
The 32 feature maps of size 28×28×3 are then fed into subsampling layer S1, which uses the Stochastic-pooling method to subsample the feature maps produced by convolutional layer C1 over 2×2 regions with a stride of 1, obtaining 32 feature maps of size 14×14×3;
The 32 feature maps of size 14×14×3 are then fed into convolutional layer C2, which includes L2 regularization. Convolutional layer C2 has 32 convolution kernels of size 3×3; it performs a convolution operation on the feature maps from subsampling layer S1, extracts features, and, after the action of the activation function, obtains 1024 feature maps of size 12×12×3. The activation function of convolutional layer C2 is the Relu function:
Relu(x) = max(0, x)
The feature map formula of the convolutional layer is formula (1), and the L2 regularization formula used is formula (3), both given above.
The 1024 feature maps of size 12×12×3 are then fed into subsampling layer S2, which uses the Stochastic-pooling method to subsample the feature maps produced by convolutional layer C2 over 2×2 regions with a stride of 1, obtaining 1024 feature maps of size 6×6×3.
The 1024 feature maps of size 6×6×3 are then flattened to produce a one-dimensional feature vector.
The resulting one-dimensional vector is then fed into fully connected layer F1, which has size 256.
The output of fully connected layer F1 is then fed into fully connected layer F2, which has size 128; the dropout method is then used to activate the neurons of this layer with a certain probability, with a dropout probability parameter of 0.5.
The output of fully connected layer F2 is then fed into fully connected layer F3, which has size 64; the dropout method is then used to activate the neurons of this layer with a certain probability, with a dropout probability parameter of 0.5.
Finally, the result is classified and output.
The above describes an embodiment of the image-based small animal recognition method of the invention. The convolutional neural network introduced by the invention contains one input layer, two convolutional layers with L2 regularization, two subsampling layers, three fully connected layers with the dropout method, and one output layer. With this convolutional neural network, the requirement of improving the recognition rate of small animals can be met.
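To tie the embodiment together, the following Keras sketch assembles a layer stack of this shape. It is an illustration under stated assumptions, not the patented configuration: average pooling stands in for stochastic pooling (which is not a built-in Keras layer), the pooling stride follows the default of 2 so that the model builds with the 32×32×3 input, the number of output classes `NUM_CLASSES` is hypothetical, and the optimizer and loss in the compile call are likewise illustrative.

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical number of small animal categories

l2 = tf.keras.regularizers.l2(1e-4)  # regularization coefficient of formula (3)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                             # input layer
    tf.keras.layers.Conv2D(32, (5, 5), activation="relu",
                           kernel_regularizer=l2),                 # C1
    tf.keras.layers.AveragePooling2D((2, 2)),                      # S1 (stand-in for stochastic pooling)
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           kernel_regularizer=l2),                 # C2
    tf.keras.layers.AveragePooling2D((2, 2)),                      # S2 (stand-in for stochastic pooling)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),                 # F1
    tf.keras.layers.Dense(128, activation="relu"),                 # F2
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),                  # F3
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),      # soft-max output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```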
It should be pointed out that the specific implementations described herein merely enumerate individual examples of the invention; any solution implemented using the design idea of the invention or its equivalent variations shall fall within the protection scope of the invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010030628.1A CN111242214A (en) | 2020-01-13 | 2020-01-13 | Small animal identification method based on image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010030628.1A CN111242214A (en) | 2020-01-13 | 2020-01-13 | Small animal identification method based on image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111242214A true CN111242214A (en) | 2020-06-05 |
Family
ID=70876018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010030628.1A Pending CN111242214A (en) | 2020-01-13 | 2020-01-13 | Small animal identification method based on image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111242214A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117932005A (en) * | 2024-03-21 | 2024-04-26 | 成都市技师学院(成都工贸职业技术学院、成都市高级技工学校、成都铁路工程学校) | Voice interaction method based on artificial intelligence |
- 2020-01-13: CN application CN202010030628.1A, published as CN111242214A (en), status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6178261B1 (en) * | 1997-08-05 | 2001-01-23 | The Regents Of The University Of Michigan | Method and system for extracting features in a pattern recognition system |
CN106056043A (en) * | 2016-05-19 | 2016-10-26 | 中国科学院自动化研究所 | Animal behavior identification method and apparatus based on transfer learning |
CN108171274A (en) * | 2018-01-17 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | For identifying the method and apparatus of animal |
Non-Patent Citations (2)
Title |
---|
TIBOR TRNOVSZKY et al.: "Animal Recognition System Based on Convolutional Neural Network", 《DIGITAL IMAGE PROCESSING AND COMPUTER GRAPHICS》 *
李建伟 (LI Jianwei) et al.: "基于CNN的动物识别研究" [Research on CNN-based Animal Recognition], 《软件导刊》 [Software Guide] *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117932005A (en) * | 2024-03-21 | 2024-04-26 | 成都市技师学院(成都工贸职业技术学院、成都市高级技工学校、成都铁路工程学校) | Voice interaction method based on artificial intelligence |
CN117932005B (en) * | 2024-03-21 | 2024-06-04 | 成都市技师学院(成都工贸职业技术学院、成都市高级技工学校、成都铁路工程学校) | Voice interaction method based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Park et al. | Classification and morphological analysis of vector mosquitoes using deep convolutional neural networks | |
CN111695467B (en) | Spatial Spectral Fully Convolutional Hyperspectral Image Classification Method Based on Superpixel Sample Expansion | |
Yadav et al. | AFD-Net: Apple Foliar Disease multi classification using deep learning on plant pathology dataset | |
CN112232151B (en) | Iterative polymerization neural network high-resolution remote sensing scene classification method embedded with attention mechanism | |
CN107016405A (en) | A kind of insect image classification method based on classification prediction convolutional neural networks | |
CN108830330A (en) | Classification of Multispectral Images method based on self-adaptive features fusion residual error net | |
Singh et al. | Performance Analysis of CNN Models with Data Augmentation in Rice Diseases | |
CN109886161A (en) | A road traffic sign recognition method based on likelihood clustering and convolutional neural network | |
CN114187183B (en) | A fine-grained insect image classification method | |
CN105718932A (en) | Colorful image classification method based on fruit fly optimization algorithm and smooth twinborn support vector machine and system thereof | |
CN113033321A (en) | Training method of target pedestrian attribute identification model and pedestrian attribute identification method | |
CN108491864A (en) | Based on the classification hyperspectral imagery for automatically determining convolution kernel size convolutional neural networks | |
CN112733912A (en) | Fine-grained image recognition method based on multi-grained countermeasure loss | |
Luan et al. | Sunflower seed sorting based on convolutional neural network | |
CN115546187A (en) | Agricultural pest detection method and device based on YOLO v5 | |
Fauzi et al. | Butterfly image classification using convolutional neural network (cnn) | |
CN111242214A (en) | Small animal identification method based on image | |
Jingyi et al. | Classification of images by using TensorFlow | |
Melo et al. | A fully convolutional network for signature segmentation from document images | |
CN110288041A (en) | Chinese herbal medicine classification modeling method and system based on deep learning | |
CN117830834A (en) | Plant leaf disease recognition method based on attention and cross-layer mutual discriminant learning | |
Liu et al. | Hyperspectral image classification based on long short term memory network | |
CN114972889B (en) | Wheat seed classification method based on data enhancement and attention mechanism | |
CN114943290B (en) | A biological invasion identification method based on multi-source data fusion analysis | |
CN114005002B (en) | Image recognition method of kernel fully-connected neural network based on kernel operation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200605 |