CN116416523A - Machine learning-based rice growth stage identification system and method - Google Patents
Machine learning-based rice growth stage identification system and method
- Publication number
- CN116416523A (application number CN202310209296.7A)
- Authority
- CN
- China
- Prior art keywords
- rice
- image
- growth stage
- gray level
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
Description
Technical Field
The invention belongs to the technical field of rice growth stage monitoring, and in particular relates to a machine learning-based system and method for identifying rice growth stages.
Background Art
Rice is one of the world's most important crops and plays a major role in agricultural production. A thorough understanding of its growth stages allows cultivation and management measures to be matched to the physiological characteristics and environmental requirements of each stage, so that water, fertilizer and pesticide are applied in appropriate amounts and high, stable yields are achieved. At the same time, automated monitoring of the different growth stages supports timely and well-founded management decisions at each stage; this is the future direction of agriculture and is of great significance for modern farmland management.
Throughout the growth cycle the external morphology of the rice plant changes markedly, and as the plant advances through its growth stages the self-similarity and scale-free character of its shape become more pronounced. With the development of intelligent agriculture in recent years, computer vision has been applied to rice quality and growth-stage detection, and machine learning and Internet technologies are increasingly used to observe, detect and distinguish the key growth stages of rice automatically, enabling automated classification and prediction and thereby improving both yield and quality.
At present growth stages are identified mainly by manual inspection, which is cumbersome, time-consuming and labor-intensive and depends on the observer's subjective perception of the plant's state; moreover, traditional rice contour features are difficult to extract accurately, so the growth stage is judged inaccurately. There is therefore an urgent need for automatic methods of recognizing the different growth stages of rice, in order to reduce labor costs, improve the accuracy and timeliness of observation and avoid damage to the plants. Because traditional contour feature extraction is difficult and imprecise, the key developmental stages of rice plants cannot be identified promptly and accurately.
Summary of the Invention
To improve the accuracy of rice growth-stage judgment, the present invention proposes a machine learning-based rice growth stage identification system and method.
The machine learning-based rice growth stage identification system that achieves the first object of the present invention comprises a rice feature generation module and a rice growth stage recognition model construction module.
The rice feature generation module is used to obtain several fractal dimensions of the rice from a rice image and to construct, from a gray-level co-occurrence matrix, several statistics characterizing the texture of the rice.
The rice growth stage recognition model construction module is used to learn, with a neural network model, the mapping between the rice features and the growth stage, and thereby to build a rice growth stage recognition model; the recognition model identifies the growth stage of rice from a rice image. The rice features include the fractal dimensions obtained by the rice feature generation module and the statistics constructed from the gray-level co-occurrence matrix.
Further, the rice feature generation module comprises a first fractal dimension generation module, a second fractal dimension generation module and a texture feature acquisition module.
The first fractal dimension generation module computes, from the rice image, two fractal dimensions D1 and RFD based on the whole plant and on the leaf edges.
The second fractal dimension generation module computes, from the rice image, two fractal dimensions D2 and Sandbox based on the whole plant and its bounding rectangle.
The texture feature acquisition module obtains a gray-level co-occurrence matrix from the correlation of adjacent pixels and the gray-level variation along the matrix's diagonal, and constructs from it several statistics characterizing the texture of the rice.
Still further, the statistics include contrast, dissimilarity, inverse difference moment, entropy, correlation and angular second moment.
Still further, the fractal dimension generation modules also include a grayscale image generation module, which converts the rice image into a grayscale image before the fractal dimensions are computed; the grayscale image is used to obtain the fractal dimensions D1, RFD, D2 and Sandbox. They also include a binary image generation module, which applies the Sobel operator together with Gaussian filtering to the grayscale image for edge detection and denoising, yielding a binary image that is likewise used to obtain D1, RFD, D2 and Sandbox.
The fractal dimensions D1 and RFD are calculated as follows:
S501: randomly select a pixel A from the grayscale image of the rice, with coordinates (i, j), and a pixel A' from the binary image, with coordinates (i', j');
S502: in the grayscale image and the binary image respectively, use a random walk to determine another pixel B with coordinates (u, v) and another pixel B' with coordinates (u', v'), such that R = ||(i, j) − (u, v)|| = ||(i', j') − (u', v')||, where R is a randomly set value, R = 1, 2, ..., n;
S503: compute the difference G between the gray values of pixels A and B and the difference G' between the gray values of pixels A' and B':
G = I(i, j) − I(u, v)
G' = I(i', j') − I(u', v')
where:
I(i, j) and I(u, v) are the gray values of pixels A and B respectively;
I(i', j') and I(u', v') are the gray values of pixels A' and B' respectively;
S504: repeat steps S501 to S503 to obtain many differences G and G' for each grayscale image and binary image, and compute from them the average E(G) of each grayscale image and the average E(G') of each binary image;
S505: compute the two fractal dimensions of each grayscale image and binary image according to the following formula:
where:
D1 is the fractal dimension based on the grayscale image;
RFD is the fractal dimension based on the binary image;
c is a set constant.
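The random-walk estimate above can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the patent's implementation: it samples pixel pairs a distance R apart, fits log E(G) against log R, and converts the slope to a dimension with the common surface-fractal convention D = 3 − slope (the patent's exact formula with the constant c is not reproduced in this text). The function name and parameters are hypothetical.

```python
import numpy as np

def variogram_fractal_dimension(img, scales=range(1, 17), n_pairs=2000, seed=0):
    """Random-walk / variogram style fractal dimension of a 2-D intensity image.
    For each distance R, sample random pixel pairs R apart, average |I(A)-I(B)|,
    then fit log E(G) vs log R; D = 3 - slope is an assumed convention."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    log_r, log_eg = [], []
    for R in scales:
        diffs = []
        for _ in range(n_pairs):
            i, j = rng.integers(0, h), rng.integers(0, w)      # pixel A
            theta = rng.uniform(0.0, 2.0 * np.pi)              # random walk direction
            u = int(round(i + R * np.sin(theta)))              # pixel B, at distance R
            v = int(round(j + R * np.cos(theta)))
            if 0 <= u < h and 0 <= v < w:
                diffs.append(abs(float(img[i, j]) - float(img[u, v])))
        if diffs:
            log_r.append(np.log(R))
            log_eg.append(np.log(np.mean(diffs) + 1e-12))
    slope, _ = np.polyfit(log_r, log_eg, 1)
    return 3.0 - slope

# D1 from the grayscale image, RFD from the binary (edge) image:
# D1 = variogram_fractal_dimension(gray_image)
# RFD = variogram_fractal_dimension(binary_image)
```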
Further, the fractal dimensions D2 and Sandbox are calculated as follows:
S701: divide each grayscale image and binary image of the rice with an M × M grid, where M is the number of grid boundaries;
S702: randomly select the (a, b)-th cell (a ∈ [1, M], b ∈ [1, M]) in each grayscale image and the (a', b')-th cell (a' ∈ [1, M], b' ∈ [1, M]) in each binary image; the number of boxes required to cover each cell is calculated as follows:
where:
n(a, b) is the number of boxes required to cover the (a, b)-th cell;
n(a', b') is the number of boxes required to cover the (a', b')-th cell;
Pmax is the maximum pixel value in the (a, b)-th cell;
Pmin is the minimum pixel value in the (a, b)-th cell;
P'max is the maximum pixel value in the (a', b')-th cell;
P'min is the minimum pixel value in the (a', b')-th cell;
S703: compute N, the sum of the box counts n(a, b) (respectively n(a', b')) over all cells;
S704: compute the two fractal dimensions from the grayscale image and the binary image respectively according to the following formulas:
where:
D2 is the fractal dimension based on the grayscale image;
Sandbox is the fractal dimension based on the binary image.
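For reference, a minimal sketch of a differential box-counting estimate of this kind, under stated assumptions: the textbook box-count formula based on each cell's intensity range is used, and D is taken as the slope of log N(s) versus log(1/s); the exact box-count and regression formulas used by the patent are not reproduced here, and the function name and grid sizes are hypothetical.

```python
import numpy as np

def differential_box_count(img, sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting on a 2-D image with values in [0, 255].
    For each cell of side s, the boxes needed to span the cell's intensity
    range are counted; D is the slope of log N(s) vs log(1/s)."""
    img = img.astype(float)
    h, w = img.shape
    log_inv_s, log_n = [], []
    for s in sizes:
        box_height = 256.0 * s / min(h, w)          # intensity extent of one box
        n_total = 0
        for a in range(0, h - s + 1, s):
            for b in range(0, w - s + 1, s):
                cell = img[a:a + s, b:b + s]
                n_total += int(np.ceil((cell.max() - cell.min()) / box_height)) + 1
        log_inv_s.append(np.log(1.0 / s))
        log_n.append(np.log(n_total))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)
    return slope

# D2 = differential_box_count(gray_image); Sandbox = differential_box_count(binary_image)
```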
Further, the statistics characterizing the rice texture are constructed from the gray-level co-occurrence matrix as follows:
S801: divide each binary image of the rice into L gray levels by gray value, each pixel corresponding to one gray level;
S802: obtain a gray-level co-occurrence matrix p(x, y) from the gray level of each pixel of each binary image;
x and y denote the gray levels of two pixels, x ∈ [0, L−1], y ∈ [0, L−1];
S803: extract six texture features from the gray-level co-occurrence matrix: contrast, dissimilarity, inverse difference moment, entropy, correlation and angular second moment, denoted Con, DISL, IDM, ENT, Corr and ASM respectively; the calculation formulas include:
where:
μx and μy are the means of the gray levels x and y of the two pixels;
σx and σy are the standard deviations of the gray levels x and y of the two pixels.
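The six statistics are named above but their formulas do not survive in this text. For reference, the conventional GLCM definitions of these quantities — assumed here rather than quoted from the patent — are:

```latex
\mathrm{Con}=\sum_{x,y}(x-y)^2\,p(x,y),\qquad
\mathrm{DISL}=\sum_{x,y}|x-y|\,p(x,y),\qquad
\mathrm{IDM}=\sum_{x,y}\frac{p(x,y)}{1+(x-y)^2},
\qquad
\mathrm{ENT}=-\sum_{x,y}p(x,y)\,\log p(x,y),\qquad
\mathrm{Corr}=\sum_{x,y}\frac{(x-\mu_x)(y-\mu_y)\,p(x,y)}{\sigma_x\sigma_y},\qquad
\mathrm{ASM}=\sum_{x,y}p(x,y)^2 .
```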
A machine learning-based method for identifying rice growth stages, which achieves the second object of the present invention, comprises the following steps:
S1: obtain several fractal dimensions of the rice from the original rice image and construct, from a gray-level co-occurrence matrix, several statistics characterizing the rice texture; the fractal dimensions are data features characterizing the phenotypic traits of the rice;
S2: learn, with a neural network model, the mapping between the rice features and the growth stage, and thereby build a rice growth stage recognition model; the recognition model identifies the growth stage of rice from a rice image; the rice features include the fractal dimensions obtained by the rice feature generation module and the statistics constructed from the gray-level co-occurrence matrix.
Further, step S1 comprises the following steps:
computing, from the rice image, two fractal dimensions D1 and RFD based on the whole plant and on the leaf edges;
computing, from the rice image, two fractal dimensions D2 and Sandbox based on the whole plant and its bounding rectangle;
obtaining a gray-level co-occurrence matrix from the correlation of adjacent pixels and the gray-level variation along the matrix's diagonal, and constructing from it several statistics characterizing the rice texture.
Still further, the statistics include contrast, dissimilarity, inverse difference moment, entropy, correlation and angular second moment.
Further, step S2 comprises the following steps:
S201: combine the phenotypic trait data sets of multiple rice samples with the data set of the four fractal dimensions D1, RFD, D2 and Sandbox and the six gray-level co-occurrence matrix statistics extracted from the grayscale and binary images, to obtain the initial rice data set;
S202: draw a correlation heat map of all features of the initial rice data set; read from the heat map the relationship between each feature and the growth stage; and retain a subset of features according to that relationship;
S203: on the basis of recursive feature elimination, apply a random forest model to the remaining features with combined cross-validation; by summing the decision coefficients, obtain the importance of different feature counts for the accuracy of growth-stage judgment, and retain feature combinations accordingly to obtain the modeling data set;
S204: preprocess and normalize the modeling data set and train on it, obtaining the trained rice growth stage recognition model.
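A minimal sketch of the feature-selection step S203, assuming a scikit-learn implementation: recursive feature elimination driven by a random forest, with cross-validation choosing the retained feature count. `X` and `y` are placeholder names for the combined feature matrix and the growth-stage labels.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Recursive feature elimination with cross-validation (RFE + random forest).
selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=1,                                   # drop one lowest-weight feature per round
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="accuracy",
)
selector.fit(X, y)
X_selected = X[:, selector.support_]          # retained feature combination
print("best number of features:", selector.n_features_)
```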
A non-transitory computer-readable storage medium, which achieves the third object of the present invention, stores a computer program which, when executed by a processor, implements the steps of the machine learning-based rice growth stage identification method.
Beneficial effects:
The invention finds that introducing the fractal dimensions and the gray-level co-occurrence matrix features is beneficial for discriminating the growth stages: adding the fractal dimension variables raises the accuracy of the single machine learning models and of the optimal weighted ensemble model by roughly 2-8%, and adding the gray-level co-occurrence matrix variables raises it by roughly 2-7%, which greatly improves the detection accuracy of the rice growth stage.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flow chart of an embodiment of the method of the present invention;
Fig. 2 shows the ROC curves of the six machine learning models in the embodiment;
Fig. 3 shows the ROC curve of the optimized weighted ensemble model in the embodiment;
Fig. 4 shows the confusion matrix of the optimized weighted ensemble model in the embodiment;
Fig. 5 shows the feature importance results for the input variables of the support vector machine, decision tree, AdaBoost and optimized weighted ensemble models in the embodiment.
DETAILED DESCRIPTION
The following embodiments explain the technical solutions of the claims so that those skilled in the art can understand them. The protection scope of the present invention is not limited to the specific structures below; solutions made by those skilled in the art that contain the technical solutions of the claims but differ from the following embodiments also fall within the protection scope of the present invention.
Fig. 1 is a flow chart of an embodiment of the method of the present invention. The invention builds machine learning models and ensemble models from rice automatic phenotyping platform data to detect the rice growth stage. The specific steps are as follows:
(1) Data extraction
a. The rice automatic phenotyping platform measured 28 phenotypic traits of 1,094 rice samples from 521 rice varieties at three growth stages (tillering, jointing and heading); these measurements form the data set produced by the platform.
Table 1. Data features of the rice automatic phenotyping platform
b. The platform photographs the rice with a visible-light industrial camera (AVT Stingray FG504) to obtain RGB images of the rice.
c. A hierarchical, automatic segmentation procedure based on kernel linear discriminant analysis and Gaussian process regression is applied to the rice images to obtain grayscale and binary images. Kernel linear discriminant analysis distinguishes the target leaf from similar leaves in two steps: first the whole leaf is detected roughly, then its edges are detected finely. The specific method is as follows:
In the first step, the collected color image is converted to grayscale channel by channel: the value of each of the red, green and blue channels is taken as a grayscale value, giving three single-channel grayscale images; their gray histograms are plotted, and the best rice grayscale image is chosen according to where the main information of the potted rice is distributed.
The second step is coarse segmentation with kernel linear discriminant analysis, which is used to build a supervised classifier that separates the target leaf from a background of similar leaves. Target-leaf regions are cropped and collected from the grayscale image and cropped background-leaf regions are created; after the target leaf is segmented, the boundary of the coarsely segmented image is extracted, and the regions containing the target leaf boundary as well as the other regions (such as leaf regions and background) are collected. This step uses the Remove Image Background tool, built on Python, Ruby and deep-learning technologies, whose AI algorithms automatically separate the foreground object (the target leaf) from the background and thereby enable large-scale batch image segmentation.
In the third step, after segmentation by kernel linear discriminant analysis, edge detection is performed on the image. Because of occasional misclassification by the kernel linear discriminant analysis, the edges detected along the boundary may not be continuous. To remove these errors the leaf is masked, i.e. part of the region is covered with the selected object before further processing. The Sobel operator is then combined with Gaussian filtering for edge detection and denoising. Gaussian filtering takes a weighted average of the image pixels in a sliding window, with the weight coefficients given by a Gaussian function of the distance between the window's center pixel and the other pixels in the window: the smaller the distance, the larger the weight. The formula is:
where σ² is the variance of the Gaussian function, p and q are the coordinates, and h(p, q) is the Gaussian filter function. The commonly used templates are 3×3 and 5×5, as shown below:
Comparison of the results shows that the 3×3 template is best. The rice in the grayscale image is then changed to white, giving the binary image we need.
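A minimal sketch of this edge-detection and denoising step with OpenCV, under stated assumptions: a 3×3 Gaussian blur (whose kernel follows the standard h(p, q) ∝ exp(−(p² + q²)/(2σ²)) form), the Sobel operator in both directions, and an Otsu threshold to produce the binary image; the file name and the thresholding choice are hypothetical, not taken from the patent.

```python
import cv2

# Grayscale input, 3x3 Gaussian smoothing, Sobel gradients, then binarization.
gray = cv2.imread("rice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
blur = cv2.GaussianBlur(gray, (3, 3), 0)              # 3x3 Gaussian template
gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0, ksize=3)       # horizontal gradient
gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3)       # vertical gradient
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))    # gradient magnitude as uint8
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```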
d. For the grayscale and binary rice images, two fractal dimensions based on the whole plant and on the leaf edges are computed with the random walk method, specifically:
S501: randomly select a pixel A from the grayscale image and a pixel A' from the binary image, with coordinates (i, j) and (i', j') respectively;
S502: in the grayscale image and the binary image respectively, use a random walk to determine another pixel B with coordinates (u, v) and another pixel B' with coordinates (u', v'), such that R = ||(i, j) − (u, v)|| = ||(i', j') − (u', v')||, where R is a randomly set value, R = 1, 2, ..., n;
S503: compute the difference G between the gray values of pixels A and B and the difference G' between the gray values of pixels A' and B':
G = I(i, j) − I(u, v)
G' = I(i', j') − I(u', v')
where:
I(i, j) and I(u, v) are the gray values of pixels A and B respectively;
I(i', j') and I(u', v') are the gray values of pixels A' and B' respectively;
S504: repeat steps S501 to S503 to obtain many differences G and G' for each grayscale image and binary image, and compute from them the average E(G) of each grayscale image and the average E(G') of each binary image;
S505: compute the two fractal dimensions of each grayscale image and binary image according to the following formula:
where:
D1 is the fractal dimension based on the grayscale image;
RFD is the fractal dimension based on the binary image.
e. For the grayscale and binary rice images, two fractal dimensions based on the whole plant and its bounding rectangle are computed with the box-counting method, specifically:
S701: divide each grayscale image and binary image of the rice with an M × M grid, where M is the number of grid boundaries;
S702: randomly select the (a, b)-th cell (a ∈ [1, M], b ∈ [1, M]) in each grayscale image and the (a', b')-th cell (a' ∈ [1, M], b' ∈ [1, M]) in each binary image; the number of boxes required to cover each cell is calculated as follows:
where:
n(a, b) is the number of boxes required to cover the (a, b)-th cell;
n(a', b') is the number of boxes required to cover the (a', b')-th cell;
Pmax is the maximum pixel value in the (a, b)-th cell;
Pmin is the minimum pixel value in the (a, b)-th cell;
P'max is the maximum pixel value in the (a', b')-th cell;
P'min is the minimum pixel value in the (a', b')-th cell;
S703: compute N, the sum of the box counts n(a, b) (respectively n(a', b')) over all cells;
S704: compute the two fractal dimensions from the grayscale image and the binary image respectively according to the following formulas:
where:
D2 is the fractal dimension based on the grayscale image;
Sandbox is the fractal dimension based on the binary image.
f. For the binary rice image, the correlation of adjacent pixels and the degree of gray-level variation along the diagonal of the co-occurrence matrix are computed, and the statistics constructed from the gray-level co-occurrence matrix are used as texture features for rice classification. The specific calculation is as follows:
A gray-level co-occurrence matrix p(x, y) is obtained from the gray levels of each binary rice image. The specific method is as follows:
The image is divided into L gray levels, the gray value of each pixel corresponding to one level. Starting from any pixel with gray level x, the probability of reaching a pixel of gray level y after a fixed displacement d = (dx, dy) along a line in direction θ is estimated; all of these estimates can be arranged as a matrix, the gray-level co-occurrence matrix, written p(x, y) (x, y = 0, 1, 2, ..., L−1), where L is the number of gray levels of the image, x and y are the gray levels of the two pixels, and d is their spatial displacement. In this embodiment d is set to 1 and the direction θ is taken from [0°, 45°, 90°, 135°]; the matrix is computed for each of the four angles and the four results are averaged. Six texture features are then extracted from the gray-level co-occurrence matrix: contrast, dissimilarity, inverse difference moment, entropy, correlation and angular second moment (energy), denoted Con, DISL, IDM, ENT, Corr and ASM respectively. The specific formulas are as follows:
where μx and μy are the means of the gray levels x and y of the two pixels, and σx and σy are their standard deviations.
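A minimal sketch of this texture step with scikit-image (naming as in skimage ≥ 0.19): a GLCM at distance 1 for the four angles, with the properties averaged over the angles as described above; entropy is not provided by graycoprops and is computed directly from the angle-averaged matrix. `quantized` is a placeholder for the L-level (here 16-level) uint8 version of the rice image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]          # 0°, 45°, 90°, 135°
glcm = graycomatrix(quantized, distances=[1], angles=angles,
                    levels=16, symmetric=True, normed=True)

features = {}
for prop in ("contrast", "dissimilarity", "homogeneity", "correlation", "ASM"):
    features[prop] = graycoprops(glcm, prop).mean()         # average over the 4 angles

p = glcm.mean(axis=3)[:, :, 0]                              # angle-averaged GLCM
features["entropy"] = -np.sum(p * np.log(p + 1e-12))        # ENT computed manually
```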
g. A K-S test is applied separately to the four fractal dimensions D1, RFD, D2 and Sandbox and to the six gray-level co-occurrence matrix features. The tests show that, at the different growth stages, the values over the various angles follow a uniform distribution and their autocorrelations are very similar, i.e. there are no locally specific cases in which growth is affected by the plant itself; this indicates that the fractal dimensions and the gray-level co-occurrence matrix features can be added to the data set as important features.
(2) Feature selection
a. The 28 phenotypic traits of the 1,094 rice samples measured by the automatic phenotyping platform are combined with the four fractal dimensions and six gray-level co-occurrence matrix features extracted from the grayscale and binary images, giving the initial rice data set.
b. A correlation heat map of all features of the initial data set is drawn, the relationship of each feature to the growth stage is inspected, and features whose relationship to the growth stage is weak are removed; in this embodiment the features f1-f12 and LD1-LD6 shown in Table 1 were removed.
c. On the basis of recursive feature elimination, a random forest model is used for combined cross-validation of the remaining features; by summing the decision coefficients, the importance of different feature counts for the accuracy of growth-stage judgment is obtained, and the best number of features is retained to form the modeling data set. The required importance is judged according to actual needs; in this embodiment the best combination consists of 31 features.
This step uses recursive feature elimination, whose main idea is to train a random forest model repeatedly on the initial rice data set. Given the best combination size of 31, after each round of training the features with the lowest weights are removed according to the weight coefficients, the model is rebuilt and the best features are selected by their coefficients; the selected features are extracted and the process is repeated on the remaining features until all have been traversed, and the best feature combination yields the modeling data set. The rice data set obtained after feature selection by this recursive feature elimination is the modeling data set.
(3) Machine learning modeling
a. The modeling data set is preprocessed: missing values and outliers are handled and the data are checked for class balance.
Checking for balance means the following: the proportions of samples carrying different labels are likely to be uneven, and training a classifier directly on such data may give poor results; the data are therefore checked so that the labeled classes have comparable sample proportions.
b. The modeling data set is normalized in order to remove the influence of the different scales of the rice features; the formula is as follows:
where xi denotes each feature, x̄i the mean of that feature, and si its standard deviation.
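A minimal sketch of this normalization, assuming scikit-learn's z-score scaler, which matches the (x − mean)/std form described above; `X_model` is a placeholder name for the modeling feature matrix.

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                      # per-feature (x - mean) / std
X_scaled = scaler.fit_transform(X_model)
```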
c. The modeling data set is split into training and test sets by random sampling at a ratio of 8:2: each time 80% of the samples are used for model training and the remaining 20% as the test set to estimate performance, and the same random seed is set for the same model to ensure consistency.
d. The models are trained and validated with ten-fold cross-validation, which lets each model be trained on multiple training/validation splits: the training set is randomly divided into ten roughly equal subsets, and in turn one subset is used as the validation set to verify the model's accuracy while the other nine are used to train it.
e. The hyperparameters of each model are tuned with Bayesian optimization, which uses a Gaussian process, takes the previous parameter information into account and continually updates the prior, searching for the hyperparameter combination that gives each classifier its best recognition performance.
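A hedged sketch of steps c-e, assuming scikit-learn for the split and cross-validation and scikit-optimize's BayesSearchCV as the Bayesian optimizer (the patent does not name a library); the search space values and the random forest used as an example estimator are illustrative only.

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from skopt import BayesSearchCV                 # assumed Bayesian-optimization library

# 8:2 split with a fixed random seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, stratify=y)

# Bayesian hyperparameter search with 10-fold cross-validation.
search = BayesSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": (100, 800), "max_depth": (3, 30)},   # illustrative ranges
    n_iter=30, cv=10, scoring="accuracy", random_state=42)
search.fit(X_train, y_train)

cv_scores = cross_val_score(search.best_estimator_, X_train, y_train, cv=10)
print("10-fold CV accuracy:", cv_scores.mean())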
f. The initial rice data set and the feature-selected rice modeling data set are used to train machine learning models with the classification labels "tillering stage", "jointing stage" and "heading stage", i.e. multi-class models. The classifiers used in this embodiment are: support vector machine (SVM), decision tree, random forest, AdaBoost, a stacking ensemble and an optimized weighted ensemble learning classifier.
The stacking ensemble model is computed as follows:
First-layer models: the base algorithms, support vector machine, decision tree, random forest and AdaBoost, are used for modeling, fitting and prediction;
Second-layer model: the predictions of the first-layer models are used as features and the labels of the test set as labels, with the XGBClassifier algorithm as the base (meta) classifier for modeling, fitting and prediction.
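A minimal sketch of such a two-layer stack, assuming scikit-learn's StackingClassifier with xgboost's XGBClassifier as the meta-learner; hyperparameters are defaults rather than the patent's, and the growth-stage labels are assumed to be encoded as 0/1/2 (XGBoost requires non-negative integer classes, unlike the -1/0/1 coding used later in the embodiments).

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=42)),
        ("tree", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("ada", AdaBoostClassifier(random_state=42)),
    ],
    final_estimator=XGBClassifier(eval_metric="mlogloss", random_state=42),
    cv=10,                                   # out-of-fold base predictions feed the meta-learner
)
stack.fit(X_train, y_train)                  # y_train assumed label-encoded to 0/1/2
print("stacking accuracy:", stack.score(X_test, y_test))
```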
The optimized weighted ensemble model is computed with the following formula:
where wj is the weight of base model j (j = 1, ..., k), n is the total number of samples, yi is the true value of observation i, and ŷij is the prediction of base model j for observation i.
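A hedged sketch of one way to fit such weights: the squared-error objective and the simplex constraint (non-negative weights summing to one) are assumptions, since the patent only states the weighted form; the function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def fit_ensemble_weights(base_preds, y_true):
    """base_preds: array (n_samples, k) of base-model predictions;
    y_true: array (n_samples,). Returns weights w_j >= 0 with sum(w) = 1
    minimizing the mean squared error of the weighted prediction."""
    k = base_preds.shape[1]

    def loss(w):
        return np.mean((y_true - base_preds @ w) ** 2)

    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * k
    res = minimize(loss, x0=np.full(k, 1.0 / k), bounds=bounds, constraints=constraints)
    return res.x

# weights = fit_ensemble_weights(
#     np.column_stack([m.predict(X_test) for m in base_models]), y_test)
```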
g. The performance of the models is evaluated with the confusion matrix between the predicted and true results, using the confusion_matrix function from sklearn.metrics in Python; the AUC is computed and ROC curves are drawn to assess the classification performance of the models and to visualize the results.
h. The precision, recall, accuracy, F1-score and kappa coefficient of the models are obtained with the precision_score, recall_score, accuracy_score, f1_score and cohen_kappa_score functions from sklearn.metrics in Python; the formulas are as follows:
where TPi are the true positives, i.e. rice correctly classified into the i-th growth stage; FPi are the false positives, i.e. rice wrongly classified into the i-th growth stage; FNi are the false negatives of class i, i.e. rice of the i-th growth stage wrongly classified into other stages. Pi and Ri are the precision and recall of class i, n is the number of classes (n = 3 in this study), and Pw and Rw are the precision and recall used for the weighted F1-score. p0 is the sum of the correctly classified samples over all classes divided by the total number of samples, i.e. the overall classification accuracy, and pe is the sum over all classes of the products of the actual and predicted counts.
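A minimal sketch of steps g-h with the sklearn.metrics functions named above; `clf` stands for any fitted classifier from step f, X_test/y_test for the 20% hold-out split, and weighted averaging for the multi-class scores is an assumption consistent with the weighted F1 described above.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, confusion_matrix,
                             roc_auc_score)

y_pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="weighted"))
print("recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1-score :", f1_score(y_test, y_pred, average="weighted"))
print("kappa    :", cohen_kappa_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print("AUC (OvR):", roc_auc_score(y_test, clf.predict_proba(X_test),
                                  multi_class="ovr", average="weighted"))
```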
i. The rice modeling data set obtained after feature selection by correlation analysis and recursive feature elimination is modeled with the various machine learning classifiers, and the classification results before and after feature selection are compared with the above evaluation metrics.
j. The four fractal dimensions D1, D2, RFD and Sandbox computed from the grayscale and binary images and the six texture features Con, DISL, IDM, ENT, Corr and ASM obtained from the gray-level co-occurrence matrix are each added separately to the feature-selected rice modeling data set to form new data sets, which are again modeled with the various classifiers; the classification results, assessed with the above metrics, of the models with the four fractal dimensions added are compared with those of the models with the six gray-level co-occurrence matrix features added. The specific implementation is as follows:
Embodiment 1:
(1) The full rice data set and the feature-selected rice modeling data set are read in, generating two different data sets: data set 1 is the initial rice data set and data set 2 is the rice modeling data set.
(2) Classification labels are assigned according to the growth-stage categories; the labels of the multi-class models are tillering, jointing and heading. In each multi-class model there are 1,094 rice samples in total: samples at the tillering stage are labeled "-1", samples at the jointing stage "0" and samples at the heading stage "1".
(3) The two data sets are modeled separately with the support vector machine (SVM), decision tree, random forest, AdaBoost, stacking ensemble and optimized weighted ensemble classifiers, split 8:2 into training and test sets, with the same random seed set for the same model to ensure consistency. Training and validation use ten-fold cross-validation, and Bayesian optimization is used to find the hyperparameter combination that gives each classifier its best recognition performance.
(4) The accuracy metrics of the six machine learning models on the full rice data set are shown in Table 2 and those on the feature-selected rice data set in Table 3. All six classifiers improve on every metric after feature selection. The best single machine learning model is AdaBoost, with an accuracy of 93.15%, an F1-score of 0.93 and a kappa coefficient of 0.91, about 0.5% more accurate than the model without feature selection. Compared with the base models, the ensemble models perform better; the optimized weighted ensemble is the most accurate and outperforms the stacking model, reaching 94.06% accuracy, an F1-score of 0.94 and a kappa coefficient of 0.92, about 0.6% more accurate than without feature selection. Overall, the machine learning classifiers perform better after feature selection.
Table 2. Accuracy metrics of the six models on data set 1
Table 3. Accuracy metrics of the six models on data set 2
(5) The ROC curves of the six machine learning models are shown in Fig. 2, with the false positive rate on the horizontal axis and the true positive rate on the vertical axis. For each sample of a classification task the ROC curve requires the probability that the sample belongs to the correct class; to turn probabilities into classes a threshold must be chosen, and the ROC curve is obtained by varying this threshold and displays the performance of each classification model. Owing to the high accuracy of the models, all curves lie close to the value representing the true positive rate. Compared with the base models, the ensemble models have higher AUC values, both reaching 0.98, while among the single machine learning models the random forest and AdaBoost models also reach about 0.97, showing that all the classification models perform well.
(6) The ROC curve and confusion matrix of the optimized weighted ensemble model are shown in Figs. 3 and 4 respectively. Across the three growth stages in all models, the tillering stage is recognized most accurately and is least easily confused, followed by the jointing stage and finally the heading stage. A likely reason is that during jointing the internodes of the stem elongate rapidly upward, whereas at heading the top leaves emerge as the stem elongates, so the overall morphology of the plant differs little between these stages.
Embodiment 2:
(1) The feature-selected rice modeling data set is read in and three different data sets are generated: in data set 3 the rice features include neither the fractal dimensions nor the gray-level co-occurrence matrix features; data set 4 includes the fractal dimensions but not the gray-level co-occurrence matrix features; data set 5 includes the gray-level co-occurrence matrix features but not the fractal dimensions.
(2) Classification labels are assigned according to the growth-stage categories; the labels of the multi-class models are tillering, jointing and heading. In each multi-class model of this embodiment there are 1,094 rice samples in total: samples at the tillering stage are labeled "-1", samples at the jointing stage "0" and samples at the heading stage "1".
(3) The three data sets are modeled separately with the six classifiers: support vector machine (SVM), decision tree, random forest, AdaBoost, stacking ensemble and optimized weighted ensemble, split 8:2 into training and test sets, with the same random seed set for the same model to ensure consistency. Training and validation use ten-fold cross-validation, and Bayesian optimization is used to find the hyperparameter combination that gives each classifier its best recognition performance.
(4) The accuracy, F1-score and kappa coefficient of the six models on data sets 3, 4 and 5 are shown in Tables 4, 5 and 6. On average across the different models, introducing either the fractal dimension or the gray-level co-occurrence matrix features improves the classifiers' discrimination. Overall, introducing the gray-level co-occurrence matrix features improves the evaluation metrics of the models more, raising accuracy by about 2-7%. With the fractal dimension features, the decision tree model and the optimized weighted ensemble model improve more than with the gray-level co-occurrence matrix features, their accuracy rising by about 3% and 8% respectively. In general the single machine learning models improve the most, and introducing the fractal dimensions and the gray-level co-occurrence matrix is beneficial for discriminating the growth stages.
Table 4. Accuracy of the six models on data sets 3, 4 and 5
Table 5. F1-scores of the six models on data sets 3, 4 and 5
Table 6. Kappa coefficients of the six models on data sets 3, 4 and 5
Embodiment 3:
(1) The feature-selected rice modeling data set is read in and a data set 6 is generated, identical to data set 2 of Embodiment 1.
(2) Classification labels are assigned according to the growth-stage categories; the labels of the multi-class models are tillering, jointing and heading. In each multi-class model there are 1,094 rice samples in total: samples at the tillering stage are labeled "-1", samples at the jointing stage "0" and samples at the heading stage "1".
(3) Data set 6 is modeled with the six classifiers: support vector machine (SVM), decision tree, random forest, AdaBoost, stacking ensemble and optimized weighted ensemble. Feature importance is evaluated with the random forest method of the tree-based feature-selection models: the importance of each feature for the growth-stage class is computed and the feature contributions are ranked, so as to find the more influential feature variables and optimize the models.
(4) The feature importance results of the top 10 input variables are shown in Table 7. The input feature importances were computed for the six models to find the most influential independent variables for optimizing the models; Table 7 shows the top 10 feature importance results obtained by weighting all six models. The parameter SA, with a weight of 0.1359, is the most important feature. The top 10 input variables include two texture feature variables extracted from the images and two fractal dimension variables, so further image-derived auxiliary data should be added, as they may lead to higher estimation accuracy. In addition, parameters reflecting plant compactness, such as the rice relative frequency and the structural parameters, appear to be less important.
Table 7. Feature importance coefficients and rankings of the six models
(5) The feature importance results for the input variables of the support vector machine, decision tree, AdaBoost and optimized weighted ensemble models are shown in Fig. 5. With the proposed feature importance method, the morphological parameters account for a larger share of the importance in the decision tree model, while the texture parameters account for a larger share in the other models; it can therefore be inferred that texture parameters such as the fractal dimensions and the gray-level co-occurrence matrix features are very important for detecting the rice growth stage. Although the feature importances differ between models, RFD, D2, Sandbox, ENT, ASM and G_g account for a relatively large share in all of them.
k. For each of the machine-learning classifier models, evaluate feature importance with the random-forest method of the feature-selection tree model, compute the importance of every feature for the rice growth stage category, and rank the feature contributions to identify the most influential variables for optimizing the model.
The embodiments show that, with the method of the present invention for detecting rice growth stages, the best single machine-learning model is the AdaBoost model, with an accuracy of 93.15%, an F1 score of 0.93 and a Kappa coefficient of 0.91. With the same method, the ensemble models, and in particular the optimized weighted ensemble after feature selection, give the best classification: compared with the best single machine-learning model, the accuracy improves by about 1.5%, reaching 94.06%, with an F1 score of 0.94 and a Kappa coefficient of 0.92.
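The accuracy, F1 score and Kappa coefficient quoted above are standard classification metrics; a minimal sketch of computing them with scikit-learn is given below. The weighted F1 averaging is an assumption, since the averaging mode is not stated in the embodiment.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

def evaluate_stage_classifier(y_true, y_pred) -> dict:
    """Accuracy, F1 score and Cohen's kappa for the three-stage labels (-1, 0, 1)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="weighted"),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }

# Example: evaluate_stage_classifier([-1, 0, 1, 1], [-1, 0, 1, 0])
```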
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
This embodiment also provides a computer-readable storage medium storing a computer program. The computer program comprises program instructions which, when executed by a processor, implement the steps of the method of the present invention; these are not repeated here.
The computer-readable storage medium may be the data transmission apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, such as the hard disk or memory of the computer device. It may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card fitted to the computer device.
Further, the computer-readable storage medium may include both an internal storage unit of the computer device and an external storage device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the computer device, and may also be used to temporarily store data that is to be output or has already been output.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Matters not described in detail in this specification belong to the prior art known to those skilled in the field.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310209296.7A CN116416523A (en) | 2023-03-07 | 2023-03-07 | Machine learning-based rice growth stage identification system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310209296.7A CN116416523A (en) | 2023-03-07 | 2023-03-07 | Machine learning-based rice growth stage identification system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116416523A true CN116416523A (en) | 2023-07-11 |
Family
ID=87055666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310209296.7A Pending CN116416523A (en) | 2023-03-07 | 2023-03-07 | Machine learning-based rice growth stage identification system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116416523A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117152620A (en) * | 2023-10-30 | 2023-12-01 | 江西立盾光电科技有限公司 | Plant growth control method and system following plant state change |
CN117152620B (en) * | 2023-10-30 | 2024-02-13 | 江西立盾光电科技有限公司 | Plant growth control method and system following plant state change |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | A computer vision system for early stage grape yield estimation based on shoot detection | |
CN105930815B (en) | A kind of underwater biological detection method and system | |
CN110414577A (en) | A Deep Learning-Based Lidar Point Cloud Multi-target Object Recognition Method | |
US9064151B2 (en) | Device and method for detecting plantation rows | |
CN109977780A (en) | A kind of detection and recognition methods of the diatom based on deep learning algorithm | |
CN101751666A (en) | Semi-supervised multi-spectral remote sensing image segmentation method based on spectral clustering | |
CN108596038B (en) | Method for identifying red blood cells in excrement by combining morphological segmentation and neural network | |
CN109886146B (en) | Flood information remote sensing intelligent acquisition method and device based on machine vision detection | |
CN106340016A (en) | DNA quantitative analysis method based on cell microscope image | |
CN116206208B (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
Quispe et al. | Automatic building change detection on aerial images using convolutional neural networks and handcrafted features | |
CN115170542B (en) | Potato early-late blight classification model construction method based on GLCM feature extraction | |
CN116416523A (en) | Machine learning-based rice growth stage identification system and method | |
CN119863727A (en) | Forest pest intelligent identification method based on unmanned aerial vehicle remote sensing | |
CN109299295B (en) | Blue printing layout database searching method | |
CN118072295B (en) | Tobacco leaf identification method, system, storage medium, equipment and program product | |
CN119418117A (en) | Tobacco seedling growth state identification and emergence rate detection method and related device | |
Liu et al. | Automatic grape bunch detection in vineyards for precise yield estimation | |
CN116778205A (en) | Methods, equipment, storage media and devices for identifying citrus disease levels | |
CN112418318B (en) | Intelligent rice health state distinguishing method based on Fourier descriptor | |
CN109726641B (en) | A circular classification method for remote sensing images based on automatic optimization of training samples | |
CN109948421B (en) | Hyperspectral image classification method based on PCA and attribute configuration file | |
CN115082789B (en) | Forestry seedling detection method and device based on artificial intelligence | |
CN118485915B (en) | An intelligent wolfberry pest image recognition method | |
Ning et al. | Extraction of soybean pod features based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||