CN111627059A - Method for positioning center point position of cotton blade - Google Patents
Method for positioning center point position of cotton blade
- Publication number
- CN111627059A (application CN202010465318.2A)
- Authority
- CN
- China
- Prior art keywords
- scanning
- image
- center point
- axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Treatment Of Fiber Materials (AREA)
Abstract
The invention discloses a method for locating the center point of a cotton leaf, comprising the following steps: S1: acquire a cotton image; S2: extract the leaf contour of each cotton leaf in the image; S3: extract four coordinate points x1, x2, x3 and x4 of the leaf contour of a given cotton leaf; S4: calculate the coordinates of the initial center point of the leaf contour of that cotton leaf; S5: obtain the abscissa xc of the center point; S6: obtain the ordinate yc of the center point; S7: the coordinates (xc, yc) are the center point position of the cotton leaf. The method requires little computation and therefore offers good real-time performance, and its centering accuracy meets practical application requirements: in the real-time test, the real-time processing speed indicator FPS reached 20.107; for positioning accuracy, the test finally obtained a mean absolute error (MAE) of 26.861, which meets the positioning-accuracy requirements of practical application scenarios.
Description
Technical Field
The invention relates to the technical field of automatic crop identification and positioning, and in particular to a method for locating the center point of a cotton leaf.
Background Art
An image-processing-based method for locating the center point of a cotton leaf can quickly obtain the coordinates of the center point of a target crop, providing a reference for precise spraying and automated management of target crops in the field; such a method improves the efficiency of fine-grained crop management, reduces dependence on extra labor, and helps optimize and control agricultural costs. Acquiring images of the target crop with a dedicated camera has the advantages of low cost, high flexibility and high efficiency. Laser sensors, infrared sensors, depth cameras and similar approaches can also be used to locate the center point coordinates of an (irregular geometric) image, but these methods generally suffer from heavy computation, high production cost and poor real-time performance. In actual production, therefore, an image-processing-based cotton leaf center point locating method is comparatively more practical.
An image-processing-based cotton leaf center point locating method uses a camera to acquire real image information of the target crop and, through a series of image analysis and computation steps, finally locates the center point of the target crop's leaf. Current mainstream leaf-center locating algorithms mainly extract the crop's contour and connected region and obtain the crop's center point coordinates by computing the centroid of the connected region. This approach, however, places high demands on shape regularity, and its real-time performance and adaptability are poor for images with irregular geometry, complex backgrounds or overlapping target objects.
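For context, the mainstream centroid-of-connected-region approach described above can be sketched in a few lines of OpenCV; this is an illustrative baseline, not part of the claimed method, and the function name is made up for this example.

```python
import cv2

def centroid_of_largest_region(mask):
    """Prior-art style baseline: centroid of the largest connected region
    of a binary leaf mask (uint8, values 0/255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return m["m10"] / m["m00"], m["m01"] / m["m00"]  # (x, y) centroid of the region
```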
To compensate for the positioning errors caused by irregular crop shapes and complex backgrounds, the prior art first determines the crop to be measured through image-histogram statistics over target crops, and then performs histogram statistics over the target crop rows to determine the center point of the target crop (Wang Yongkang et al. An indoor positioning method based on image grayscale histogram similarity calculation [J]. Bulletin of Surveying and Mapping, 2018(4): 63-67). However, because the leaves of actual target crops commonly overlap, this method produces a large center point positioning error in concrete application scenarios.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a method for locating the center point of a cotton leaf that offers good real-time performance and whose center point positioning accuracy meets the application requirements of real-world scenarios.
To solve the above technical problem, the present invention adopts the following technical solution:
A method for locating the center point of a cotton leaf, comprising the following steps:
S1: acquire a cotton image;
S2: extract the leaf contour of each cotton leaf in the image;
S3: extract four coordinate points of the leaf contour of a given cotton leaf, namely the upper-left point x1(x_ltop, y_ltop), the upper-right point x2(x_rtop, y_rtop), the lower-left point x3(x_llower, y_llower) and the lower-right point x4(x_rlower, y_rlower);
S4: calculate the coordinates (x0, y0) of the initial center point of the leaf contour of the cotton leaf from these four coordinate points (a plausible reconstruction of this computation is sketched after this step list);
S5: starting from the initial center point, set a horizontal step Δx with x_llower ≤ Δx ≤ x_rlower; using the vertical axis through the initial center point as the principal axis, scan horizontally to the left and to the right to obtain the abscissa xc of the center point;
S6: starting from the initial center point, set a vertical step Δy with y_llower ≤ Δy ≤ y_ltop; using the horizontal axis through the initial center point as the principal axis, scan vertically upward and downward to obtain the ordinate yc of the center point;
S7: the coordinates (xc, yc) are the center point position of the cotton leaf.
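The S4 formula is not reproduced in this text, so the following is a minimal sketch under the natural reading that the initial center point is the average of the four contour corner points (equivalently, the midpoint of the rectangular contour); this reading, the function name and the variable names are assumptions, not taken from the patent.

```python
def initial_center(x1, x2, x3, x4):
    """Plausible reconstruction of step S4: average the four corner points.
    x1..x4 are (x, y) tuples for the upper-left, upper-right, lower-left and
    lower-right corners of the (rectangular) leaf contour."""
    xs = [p[0] for p in (x1, x2, x3, x4)]
    ys = [p[1] for p in (x1, x2, x3, x4)]
    return sum(xs) / 4.0, sum(ys) / 4.0  # (x0, y0): midpoint of the contour
```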
In step S1 of the above method, the cotton image is acquired and uploaded using existing techniques. In step S2, existing techniques are used to extract the leaf contour of each cotton leaf in the image (i.e., to segment the leaf regions of the cotton leaves), for example cotton leaf segmentation of the cotton image with a Mask R-CNN object detection model, target leaf segmentation based on convolutional neural networks, or image segmentation based on transfer learning; the extracted leaf contour of a cotton leaf is usually rectangular or square.
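As one possible realization of steps S2-S3 (the patent only requires some existing segmentation technique, so this is an illustrative sketch rather than the prescribed implementation), the four corner points of a leaf's rectangular contour can be read off the bounding box of its segmentation mask with OpenCV; the image-coordinate convention (y growing downward) and all names below are assumptions.

```python
import cv2

def contour_corners(mask):
    """Illustrative steps S2-S3: given a binary mask of one cotton leaf
    (uint8, 0/255), return the four corner points of its axis-aligned
    bounding rectangle."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    x1 = (x, y)          # upper-left  point (x_ltop,  y_ltop)
    x2 = (x + w, y)      # upper-right point (x_rtop,  y_rtop)
    x3 = (x, y + h)      # lower-left  point (x_llower, y_llower)
    x4 = (x + w, y + h)  # lower-right point (x_rlower, y_rlower)
    return x1, x2, x3, x4
```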
In step S5 of the above method, the horizontal step Δx can be set as needed; in general, the smaller the step Δx, the more accurately the center point is located. Because the scan uses the vertical axis through the initial center point as the principal axis and moves both to the left (i.e., from the initial center point as the origin toward the negative x direction) and to the right (toward the positive x direction), the horizontal step Δx takes both negative and positive values: when scanning to the left, Δx is negative; when scanning to the right, Δx is positive. In a preferred embodiment of the present application, the horizontal step Δx is set to ±5 px. Step S5 further comprises:
S501: initialize the scan counter i = 1; the vertical axis passes through the coordinates (x0* = x0 + Δx, y0 = 0);
S502: during a scan, the leaf contour is divided by the vertical axis through the coordinates (x0*, y0) into two image regions, image region ABCD and image region A*B*C*D*, where:
the four vertices of image region ABCD are A(x_ltop, y_ltop), B(x0 + Δx, y_ltop), C(x0 + Δx, y_llower), D(x_llower, y_llower);
the four vertices of image region A*B*C*D* are A*(x0 + Δx, y_rtop), B*(x_rtop, y_rtop), C*(x_rlower, y_rlower), D*(x0 + Δx, y_llower);
S503: scale image regions ABCD and A*B*C*D* to pictures of the same size and further convert them to grayscale images;
In step S503, image regions ABCD and A*B*C*D* are scaled to the same size so that basic information such as structure and brightness is retained while the comparison differences caused by images of different sizes and aspect ratios are discarded. The scaling target for image regions ABCD and A*B*C*D* is set as needed, usually a size of (20 to 30) × (20 to 30), and preferably 20 px × 20 px, i.e., 400 pixels per image. Converting the scaled images to grayscale further reduces the amount of redundant image information and improves the real-time processing efficiency of the algorithm.
S504: for each of the two grayscale images, compute the mean and the median of the pixel values of every row, and record the mean and median of each row;
S505: for each of the two images obtained in step S504, compute the standard deviation of all the row means and the standard deviation of all the row medians, and take the resulting mean standard deviation and median standard deviation as that image's numerical features (for image region ABCD, after scaling and conversion to grayscale, these are denoted σ_avg and σ_median; for image region A*B*C*D* they are denoted σ_avg* and σ_median*), namely:
numerical features of image region ABCD: ABCD = (σ_avg, σ_median);
numerical features of image region A*B*C*D*: A*B*C*D* = (σ_avg*, σ_median*);
S506: using the numerical features of image regions ABCD and A*B*C*D*, compute their similarity imageSimilarity with the cosine function, i.e., the cosine similarity of the two feature vectors:
imageSimilarity = (σ_avg·σ_avg* + σ_median·σ_median*) / (sqrt(σ_avg^2 + σ_median^2) · sqrt((σ_avg*)^2 + (σ_median*)^2));
S507: save the computed similarity together with the abscissa x0* through which the vertical axis passes in the current scan, as a tuple, into the result list imageSimilaritys, i.e., imageSimilaritys.append((x0*, imageSimilarity)), and end the current scan;
S508: update the x0* coordinate, i.e., redefine the sum of x0* in the i-th scan and the horizontal step Δx as x0* for the (i+1)-th scan, and compare the redefined x0* with x_rlower and x_llower: when x0* > x_rlower or x0* < x_llower, end the rightward or leftward horizontal scan along the vertical principal axis and go to step S509; otherwise go to step S502 and perform the (i+1)-th scan, until the updated x0* satisfies the condition for ending the rightward or leftward horizontal scan along the vertical principal axis;
S509: after the rightward or leftward horizontal scan along the vertical principal axis is finished, start the horizontal scan in the opposite direction (leftward or rightward) along the vertical principal axis; the implementation steps are the same as S501-S508, yielding a single result list imageSimilaritys that contains both the leftward and the rightward horizontal scans along the vertical axis through the initial center point;
S510: traverse the result list imageSimilaritys, find the tuple with the smallest similarity value, and take x0* from that tuple as the abscissa xc of the center point of the cotton leaf.
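A minimal Python sketch of steps S501-S510 follows. It assumes OpenCV-style arrays with x indexing columns of the cropped leaf image, the 20 px × 20 px rescaling of the preferred embodiment, and the standard cosine similarity of the two (σ_avg, σ_median) feature vectors; all function names are illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def region_features(region):
    """Steps S503-S505: resize to 20x20 px, convert to grayscale, then take the
    standard deviations of the row means and of the row medians as features."""
    small = cv2.resize(region, (20, 20))
    gray = small if small.ndim == 2 else cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    return np.array([gray.mean(axis=1).std(), np.median(gray, axis=1).std()])

def cosine_similarity(u, v):
    """Step S506: cosine similarity of the two 2-D feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def horizontal_scan(leaf, x0, x_left, x_right, step=5):
    """Steps S501-S510: slide the vertical splitting axis rightward and then
    leftward from x0, compare the two halves of the leaf crop at each position,
    and return the abscissa where the halves are least similar (x_c)."""
    results = []                                    # the list imageSimilaritys
    for dx in (step, -step):                        # rightward scan, then leftward scan
        x = x0 + dx
        while x_left < x < x_right:                 # S508 stopping condition
            left, right = leaf[:, :x], leaf[:, x:]  # regions ABCD and A*B*C*D*
            sim = cosine_similarity(region_features(left), region_features(right))
            results.append((x, sim))                # S507: record (x0*, similarity)
            x += dx                                 # S508: step the axis
    return min(results, key=lambda t: t[1])[0] if results else x0  # S510
```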
In step S6 of the above method, the vertical step Δy can be set as needed; its value may be the same as or different from the horizontal step Δx, and is preferably the same. Because the scan uses the horizontal axis through the initial center point as the principal axis and moves both upward (i.e., from the initial center point as the origin toward the positive y direction) and downward (toward the negative y direction), the vertical step Δy likewise takes both negative and positive values: when scanning upward, Δy is negative; when scanning downward, Δy is positive. Step S6 further comprises:
S601: initialize the scan counter i' = 1; the horizontal axis passes through the coordinates (x0 = 0, y0* = y0 + Δy);
S602: during a scan, the leaf contour is divided by the horizontal axis through the coordinates (x0, y0*) into two image regions, image region A'B'C'D' and image region A'*B'*C'*D'*, where:
the four vertices of image region A'B'C'D' are A'(x_ltop, y_ltop), B'(x_rtop, y_rtop), C'(x_rtop, y0 + Δy), D'(x_ltop, y0 + Δy);
the four vertices of image region A'*B'*C'*D'* are A'*(x_llower, y0 + Δy), B'*(x_rlower, y0 + Δy), C'*(x_rlower, y_rlower), D'*(x_llower, y_llower);
S603: scale image regions A'B'C'D' and A'*B'*C'*D'* to the same size as used for image regions ABCD and A*B*C*D* in the leftward and rightward horizontal scans along the vertical axis through the initial center point, and further convert them to grayscale images;
In step S603, the purpose of scaling image regions A'B'C'D' and A'*B'*C'*D'* is the same as in the aforementioned step S503.
S604: for each of the two grayscale images, compute the mean and the median of the pixel values of every row, and record the mean and median of each row;
S605: for each of the two images obtained in step S604, compute the standard deviation of all the row means and the standard deviation of all the row medians (for image region A'B'C'D', after scaling and conversion to grayscale, these are denoted σ_avg' and σ_median'; for image region A'*B'*C'*D'* they are denoted σ_avg'* and σ_median'*), and take the resulting mean standard deviation and median standard deviation as their respective numerical features, namely:
numerical features of image region A'B'C'D': A'B'C'D' = (σ_avg', σ_median');
numerical features of image region A'*B'*C'*D'*: A'*B'*C'*D'* = (σ_avg'*, σ_median'*);
S606: using the numerical features of image regions A'B'C'D' and A'*B'*C'*D'*, compute their similarity imageSimilarity' with the cosine function, i.e., the cosine similarity of the two feature vectors:
imageSimilarity' = (σ_avg'·σ_avg'* + σ_median'·σ_median'*) / (sqrt((σ_avg')^2 + (σ_median')^2) · sqrt((σ_avg'*)^2 + (σ_median'*)^2));
S607: save the computed similarity together with the ordinate y0* through which the horizontal axis passes in the current scan, as a tuple, into the result list imageSimilaritys', i.e., imageSimilaritys'.append((y0*, imageSimilarity')), and end the current scan;
S608: update the y0* coordinate, i.e., redefine the sum of y0* in the i'-th scan and the vertical step Δy as y0* for the (i'+1)-th scan, and compare the redefined y0* with y_ltop and y_llower: when y0* > y_ltop or y0* < y_llower, end the upward or downward vertical scan along the horizontal principal axis and go to step S609; otherwise go to step S602 and perform the (i'+1)-th scan, until the updated y0* satisfies the condition for ending the upward or downward vertical scan along the horizontal principal axis;
S609: after the upward or downward vertical scan along the horizontal principal axis is finished, start the vertical scan in the opposite direction (downward or upward) along the horizontal principal axis; the implementation steps are the same as S601-S608, yielding a single result list imageSimilaritys' that contains both the upward and the downward vertical scans along the horizontal axis through the initial center point;
S610: traverse the result list imageSimilaritys', find the tuple with the smallest similarity value, and take y0* from that tuple as the ordinate yc of the center point of the cotton leaf.
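Because steps S601-S610 mirror S501-S510 with the roles of rows and columns exchanged, the vertical scan can be sketched, under the same assumptions as above, by reusing the illustrative horizontal-scan helper on the transposed leaf image:

```python
def vertical_scan(leaf, y0, y_low, y_high, step=5):
    """Steps S601-S610: same procedure as horizontal_scan, applied to the
    transposed image so that the sliding splitting axis is horizontal."""
    # Swapping the first two axes turns image rows into columns, so sliding a
    # vertical axis on the transposed array slides a horizontal axis on the original.
    transposed = np.transpose(leaf, (1, 0, 2)) if leaf.ndim == 3 else leaf.T
    return horizontal_scan(transposed, y0, y_low, y_high, step)
```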
In the present application, the transverse direction is the horizontal direction, the longitudinal direction is the direction perpendicular to the horizontal direction, the transverse axis is the horizontal axis parallel to the horizon, and the longitudinal axis is the vertical axis perpendicular to the horizon.
Compared with the prior art, the method of the present invention requires little computation and therefore offers good real-time performance, and its centering accuracy meets practical application requirements. In the real-time test, the applicant ran the algorithm on 1800 images and finally obtained a real-time processing speed indicator FPS of 20.107. For positioning accuracy, the test compared manually annotated reference images with the positioning produced by the algorithm (taking the interference of the real environment into account, a positioning coordinate error of 0-5 px is allowed, so the acceptable mean absolute error MAE for the actual scene ranges from 0 to 25); the test finally obtained an MAE of 26.861, close to the upper error bound of 25, which meets the positioning-accuracy requirements of practical application scenarios.
Brief Description of the Drawings
Fig. 1 is a flow chart of the cotton leaf center point locating method of the present invention;
Fig. 2 is an acquired cotton image (i.e., the cotton image to be uploaded);
Fig. 3 shows the result of segmenting the cotton leaf regions of the image shown in Fig. 2 (i.e., the result of extracting the leaf contour of each cotton leaf in the image shown in Fig. 2);
Fig. 4 is the leaf-contour picture of a single cotton leaf cropped from Fig. 3;
Fig. 5 is a schematic diagram of the initial center point position (x0, y0) of the cotton leaf (hereinafter also referred to as the target crop) in the picture shown in Fig. 4;
Fig. 6 is a schematic diagram of the leftward and rightward horizontal scans, starting from the initial center point (x0, y0) of the target crop, with the vertical axis as the principal axis;
Fig. 7 shows the region-splitting effect after moving the vertical principal axis +5 px to the right (i.e., in the positive x direction), starting from the initial center point (x0, y0) of the target crop;
Fig. 8 is a schematic diagram of the upward and downward vertical scans, starting from the initial center point (x0, y0) of the target crop, with the horizontal axis as the principal axis;
Fig. 9 shows the region-splitting effect after moving the horizontal principal axis +5 px upward along the y axis (i.e., in the positive y direction), starting from the initial center point (x0, y0) of the target crop;
Fig. 10 is a schematic diagram of locating the center point of the target crop in a theoretical environment;
Fig. 11 is an overview of the center point locating results for target crops of different sizes in a real environment.
Detailed Description of Embodiments
The present invention is described in further detail below with reference to the accompanying drawings for a better understanding of its content, but the present invention is not limited to the following examples.
Referring to Fig. 1, the method for locating the center point of a cotton leaf according to the present invention comprises the following steps:
S1: acquire a cotton image;
S2: extract the leaf contour of each cotton leaf in the image;
S3: extract four coordinate points of the leaf contour of a given cotton leaf, namely the upper-left point x1(x_ltop, y_ltop), the upper-right point x2(x_rtop, y_rtop), the lower-left point x3(x_llower, y_llower) and the lower-right point x4(x_rlower, y_rlower);
S4: calculate the coordinates (x0, y0) of the initial center point of the leaf contour of the cotton leaf from these four coordinate points (as in step S4 above);
S5: starting from the initial center point, set a horizontal step Δx with x_llower ≤ Δx ≤ x_rlower; using the vertical axis through the initial center point as the principal axis, scan horizontally to the left and to the right to obtain the abscissa xc of the center point;
S6: starting from the initial center point, set a vertical step Δy with y_llower ≤ Δy ≤ y_ltop; using the horizontal axis through the initial center point as the principal axis, scan vertically upward and downward to obtain the ordinate yc of the center point;
S7: the coordinates (xc, yc) are the center point position of the cotton leaf.
In a specific embodiment, the method of the present invention is described in detail taking the image shown in Fig. 2 as the acquired cotton image.
S1: take the image shown in Fig. 2 as the acquired cotton image and upload it.
S2: perform cotton leaf segmentation on the cotton image shown in Fig. 2 using an existing Mask R-CNN object detection model; the result is shown in Fig. 3. The extracted leaf contours of the cotton leaves are rectangular or square.
S3: crop the leaf-contour picture of one cotton leaf from Fig. 3 (Fig. 4), and extract the four coordinate points of that leaf contour, recorded as the upper-left point x1(x_ltop, y_ltop), the upper-right point x2(x_rtop, y_rtop), the lower-left point x3(x_llower, y_llower) and the lower-right point x4(x_rlower, y_rlower).
S4: calculate the coordinates (x0, y0) of the initial center point of the leaf contour of the cotton leaf from the four coordinate points (as shown in Fig. 5).
S5: starting from the initial center point, set the horizontal step Δx = ±5 px and, using the vertical axis through the initial center point as the principal axis, scan horizontally to the left and to the right to obtain the abscissa xc of the center point (as shown in Fig. 6). The rightward horizontal scan along the vertical principal axis is presented in detail, comprising:
S501: initialize the scan counter i = 1; the vertical axis passes through the coordinates (x0* = x0 + 5 px, y0 = 0);
S502: during a scan, the leaf contour is divided by the vertical axis through the coordinates (x0*, y0) into two image regions, image region ABCD and image region A*B*C*D* (as shown in Fig. 7), where:
the four vertices of image region ABCD are A(x_ltop, y_ltop), B(x0 + 5 px, y_ltop), C(x0 + 5 px, y_llower), D(x_llower, y_llower);
the four vertices of image region A*B*C*D* are A*(x0 + 5 px, y_rtop), B*(x_rtop, y_rtop), C*(x_rlower, y_rlower), D*(x0 + 5 px, y_llower);
S503: scale image regions ABCD and A*B*C*D* each to a 20 px × 20 px picture (i.e., 400 pixels per image) and convert them to grayscale images using existing techniques;
S504: for each of the two grayscale images, compute the mean and the median of the pixel values of every row, and record the mean and median of each row;
S505: for each of the two images obtained in step S504, compute the standard deviation of all the row means and the standard deviation of all the row medians, and take the resulting mean standard deviation and median standard deviation as their numerical features (for image region ABCD, after scaling and conversion to grayscale, these are denoted σ_avg and σ_median; for image region A*B*C*D* they are denoted σ_avg* and σ_median*), namely:
numerical features of image region ABCD: ABCD = (σ_avg, σ_median);
numerical features of image region A*B*C*D*: A*B*C*D* = (σ_avg*, σ_median*);
S506: using the numerical features of image regions ABCD and A*B*C*D*, compute their similarity imageSimilarity with the cosine function (the cosine similarity of the two feature vectors, as in step S506 above);
S507: save the computed similarity together with the abscissa x0* through which the vertical axis passes in the current scan, as a tuple, into the result list imageSimilaritys, i.e., imageSimilaritys.append((x0*, imageSimilarity)), and end the current scan;
S508: update the x0* coordinate, i.e., redefine the sum of x0* from the first scan (x0* = x0 + 5 px) and +5 px as x0* for the second scan, and compare the redefined x0* with x_rlower and x_llower: when x0* > x_rlower or x0* < x_llower, end the rightward horizontal scan along the vertical principal axis and go to step S509; otherwise go to step S502 and perform the second scan, until the updated x0* satisfies the condition for ending the rightward horizontal scan along the vertical principal axis;
S509: since not only a rightward but also a leftward horizontal scan along the vertical axis through the initial center point is required, after the rightward horizontal scan along the vertical principal axis is finished, the leftward horizontal scan along the vertical principal axis is started; its implementation steps are the same as S501-S508 except that the horizontal step takes the negative value (-5 px), and it continues until the leftward horizontal scan along the vertical principal axis is completed, yielding a result list imageSimilaritys that contains both the leftward and the rightward horizontal scans along the vertical principal axis.
S510: traverse the result list imageSimilaritys, find the tuple with the smallest similarity value, and take x0* from that tuple as the abscissa xc of the center point of the cotton leaf.
S6: starting from the initial center point, set the vertical step Δy = ±5 px and, using the horizontal axis through the initial center point as the principal axis, scan vertically upward and downward to obtain the ordinate yc of the center point (as shown in Fig. 8). The upward vertical scan along the horizontal principal axis is presented in detail, comprising:
S601: initialize the scan counter i' = 1; the horizontal axis passes through the coordinates (x0 = 0, y0* = y0 + 5 px);
S602: during a scan, the leaf contour is divided by the horizontal axis through the coordinates (x0, y0*) into two image regions, image region A'B'C'D' and image region A'*B'*C'*D'* (as shown in Fig. 9), where:
the four vertices of image region A'B'C'D' are A'(x_ltop, y_ltop), B'(x_rtop, y_rtop), C'(x_rtop, y0 + 5 px), D'(x_ltop, y0 + 5 px);
the four vertices of image region A'*B'*C'*D'* are A'*(x_llower, y0 + 5 px), B'*(x_rlower, y0 + 5 px), C'*(x_rlower, y_rlower), D'*(x_llower, y_llower);
S603: scale image regions A'B'C'D' and A'*B'*C'*D'* each to a 20 px × 20 px picture and convert them to grayscale images using existing techniques;
S604: for each of the two grayscale images, compute the mean and the median of the pixel values of every row, and record the mean and median of each row;
S605: for each of the two images obtained in step S604, compute the standard deviation of all the row means and the standard deviation of all the row medians (for image region A'B'C'D', after scaling and conversion to grayscale, these are denoted σ_avg' and σ_median'; for image region A'*B'*C'*D'* they are denoted σ_avg'* and σ_median'*), and take the resulting mean standard deviation and median standard deviation as their numerical features, namely:
numerical features of image region A'B'C'D': A'B'C'D' = (σ_avg', σ_median');
numerical features of image region A'*B'*C'*D'*: A'*B'*C'*D'* = (σ_avg'*, σ_median'*);
S606: using the numerical features of image regions A'B'C'D' and A'*B'*C'*D'*, compute their similarity imageSimilarity' with the cosine function (the cosine similarity of the two feature vectors, as in step S606 above);
S607: save the computed similarity together with the ordinate y0* through which the horizontal axis passes in the current scan, as a tuple, into the result list imageSimilaritys', i.e., imageSimilaritys'.append((y0*, imageSimilarity')), and end the current scan;
S608: update the y0* coordinate, i.e., redefine the sum of y0* from the first scan (y0* = y0 + 5 px) and the vertical step Δy as y0* for the second scan, and compare the redefined y0* with y_ltop and y_llower: when y0* > y_ltop, end the upward vertical scan along the horizontal principal axis and go to step S609; otherwise go to step S602 and perform the second scan, until the updated y0* satisfies the condition for ending the upward vertical scan along the horizontal principal axis;
S609: since not only an upward but also a downward vertical scan along the horizontal axis through the initial center point is required, after the upward vertical scan along the horizontal principal axis is finished, the downward vertical scan along the horizontal principal axis is started; its implementation steps are the same as S601-S608 except that the vertical step takes the negative value (-5 px), and it continues until the downward vertical scan along the horizontal principal axis is completed, yielding a result list imageSimilaritys' that contains both the upward and the downward vertical scans along the horizontal principal axis.
S610: traverse the result list imageSimilaritys', find the tuple with the smallest similarity value, and take y0* from that tuple as the ordinate yc of the center point of the cotton leaf.
S7: the coordinates (xc, yc) are the center point position of the cotton leaf (as shown in Fig. 10).
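Putting the illustrative sketches above together, an end-to-end routine for one cropped leaf image might look as follows; the coordinates are taken relative to the cropped leaf, the ±5 px step of the preferred embodiment is assumed, and all names remain illustrative.

```python
def locate_center(leaf):
    """End-to-end sketch of steps S3-S7 for one cropped leaf image (NumPy array)."""
    h, w = leaf.shape[:2]
    x1, x2, x3, x4 = (0, 0), (w, 0), (0, h), (w, h)    # S3: corners of the crop
    x0, y0 = initial_center(x1, x2, x3, x4)            # S4: initial center point
    xc = horizontal_scan(leaf, int(x0), 0, w, step=5)  # S5: refine the abscissa
    yc = vertical_scan(leaf, int(y0), 0, h, step=5)    # S6: refine the ordinate
    return xc, yc                                      # S7: center point position
```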
Fig. 11 gives an overview of the effect of locating the center points of target crops of different sizes in a real environment using the method of the present invention.
The frames-per-second (FPS) processing rate of the above specific embodiment was tested; the test method and results are as follows:
The FPS test method is as follows:
Step 1: use the file_read function to read all target-crop image paths under the system directory and save them as a list in seeds;
Step 2: from the seeds of step 1, extract seed subsets of sizes 100, 500, 900, 1300 and 1800 by slicing;
Step 3: loop over the seeds100, seeds500, seeds900, seeds1300 and seeds1800 seed files, call the algorithm on each image one by one, and record the time the algorithm takes to process each single image; accumulating the single-image times gives the total time at that seed scale, and dividing the total time by the total number of images gives the average time per image; because all single-image processing times are recorded, the minimum and maximum single-image times can also be obtained at the end;
Step 4: accumulate the average single-image times at the different scales and divide by the number of scales to obtain the final "aggregate/average single-image time", i.e., the FPS figure.
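A minimal sketch of this timing procedure is given below; the seeds list is assumed to be the path list produced by the file_read step above, and everything else (function names, the FPS conversion) is illustrative.

```python
import time
import cv2

def fps_test(seeds, scales=(100, 500, 900, 1300, 1800)):
    """Steps 1-4 of the FPS test: time per-image processing at several dataset
    scales and report an FPS figure from the averaged single-image time."""
    per_scale_avg_ms = []
    for n in scales:
        subset = seeds[:n]                    # step 2: slice the seed list
        times_ms = []
        for path in subset:                   # step 3: process images one by one
            img = cv2.imread(path)
            t0 = time.perf_counter()
            locate_center(img)                # the algorithm under test (sketch above)
            times_ms.append((time.perf_counter() - t0) * 1000.0)
        per_scale_avg_ms.append(sum(times_ms) / len(times_ms))
    avg_ms = sum(per_scale_avg_ms) / len(per_scale_avg_ms)  # step 4: aggregate average
    return 1000.0 / avg_ms                                  # frames per second
```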
To verify the real-time processing performance of the algorithm, a test simulation was carried out on a self-built image database (1820 target-crop images in total). Table 1 lists the test results for the real-time processing speed of the algorithm of the present invention on data sets of different sizes; times are in milliseconds (ms).
Table 1:
The center point positioning accuracy of the above specific embodiment was tested:
The verification of center point positioning accuracy mainly uses 100 images, manually annotated in advance, as the reference; the algorithm then produces a second positioning annotation for the same images. The Euclidean distance between the manually annotated coordinate point dot_origin(x_origin, y_origin) and the recognized coordinate point dot_predict(x_predict, y_predict) expresses the error between the two, and finally the accumulated error is divided by 100 to obtain the average positioning error.
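A short sketch of this error measure, assuming each annotation is a plain (x, y) point and with illustrative names:

```python
import math

def mean_positioning_error(manual_points, predicted_points):
    """Euclidean distance between each manually annotated center (dot_origin) and
    the algorithm's prediction (dot_predict), averaged over the test images."""
    errors = [math.dist(o, p) for o, p in zip(manual_points, predicted_points)]
    return sum(errors) / len(errors)  # the reported MAE
```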
Positioning accuracy test method:
Step 1: obtain 100 target-crop images and manually annotate the center point coordinates of each image;
Step 2: the algorithm performs a second round of positioning annotation on the 100 manually annotated target-crop images;
Step 3: compute the Euclidean distance between the manually annotated coordinate point dot_origin(x_origin, y_origin) and the recognized coordinate point dot_predict(x_predict, y_predict);
Step 4: accumulate the errors of step 3 to obtain the cumulative error over all images, and divide the cumulative error by 100 to obtain the average positioning error per image, i.e., the MAE; the results are shown in Table 2 below.
Table 2:
Claims (4)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010465318.2A CN111627059B (en) | 2020-05-28 | 2020-05-28 | A method for locating the center point of cotton leaves |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010465318.2A CN111627059B (en) | 2020-05-28 | 2020-05-28 | A method for locating the center point of cotton leaves |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111627059A true CN111627059A (en) | 2020-09-04 |
| CN111627059B (en) | 2023-05-30 |
Family
ID=72259334
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010465318.2A Expired - Fee Related CN111627059B (en) | 2020-05-28 | 2020-05-28 | A method for locating the center point of cotton leaves |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111627059B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112470735A (en) * | 2020-11-11 | 2021-03-12 | 江苏大学 | Regular-shape nursery stock automatic trimming device and method based on three-dimensional positioning |
| CN113298768A (en) * | 2021-05-20 | 2021-08-24 | 山东大学 | Cotton detection, segmentation and counting method and system |
Citations (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0997342A (en) * | 1995-08-03 | 1997-04-08 | Sumitomo Electric Ind Ltd | Tree separation distance measurement system |
| JPH11125506A (en) * | 1997-10-22 | 1999-05-11 | Bando Chem Ind Ltd | Length measuring method and device for blades etc. |
| JP2002203242A (en) * | 2000-12-28 | 2002-07-19 | Japan Science & Technology Corp | Plant recognition system |
| CA2391670A1 (en) * | 2001-06-29 | 2002-12-29 | Samsung Electronics Co., Ltd. | Hierarchical image-based representation of still and animated three-dimensional object, method and apparatus for using this representation for the object rendering |
| JP2005182164A (en) * | 2003-12-16 | 2005-07-07 | Sanyo Electric Co Ltd | Image area determination method, and image area determination apparatus, image correction apparatus, and digital watermark extraction apparatus capable of using method |
| US20060013482A1 (en) * | 2004-06-23 | 2006-01-19 | Vanderbilt University | System and methods of organ segmentation and applications of same |
| US20060072849A1 (en) * | 2004-09-27 | 2006-04-06 | Siemens Medical Solutions Usa, Inc. | Multi-leaf collimator position sensing |
| WO2010063252A1 (en) * | 2008-12-03 | 2010-06-10 | Forschungszentrum Jülich GmbH | Method for measuring the growth of leaf disks of plants and apparatus suited therefor |
| DE102010028382A1 (en) * | 2010-04-29 | 2011-11-03 | Siemens Aktiengesellschaft | Method for processing tomographic image data from X-ray computed tomography investigation of liver for recognition of liver tumor, involves performing iterative classification, and calculating image mask from last probability image |
| WO2013109625A1 (en) * | 2012-01-17 | 2013-07-25 | Alibaba Group Holding Limited | Image index generation based on similarities of image features |
| JP2015095115A (en) * | 2013-11-12 | 2015-05-18 | 国立大学法人富山大学 | Area division method, area division program and image processing system |
| WO2016014040A1 (en) * | 2014-07-22 | 2016-01-28 | Hewlett-Packard Development Company, L.P. | Recovering planar projections |
| CN107220647A (en) * | 2017-06-05 | 2017-09-29 | 中国农业大学 | Crop location of the core method and system under a kind of blade crossing condition |
| JP2018161058A (en) * | 2017-03-24 | 2018-10-18 | キッセイコムテック株式会社 | Plant growth state evaluation method, plant growth state evaluation program, plant growth state evaluation apparatus, and plant monitoring system |
| WO2019000455A1 (en) * | 2017-06-30 | 2019-01-03 | 上海联影医疗科技有限公司 | Method and system for segmenting image |
| WO2019113998A1 (en) * | 2017-12-11 | 2019-06-20 | 江苏大学 | Method and device for monitoring comprehensive growth of potted lettuce |
| CN110119793A (en) * | 2019-03-27 | 2019-08-13 | 中国电建集团华东勘测设计研究院有限公司 | Forest transplants location determining method, system, storage equipment and electronic equipment |
| CN111126316A (en) * | 2019-12-27 | 2020-05-08 | 湖南省农业信息与工程研究所 | Four-leaf grass positioning and identifying method based on image processing |
| CN111194636A (en) * | 2020-02-21 | 2020-05-26 | 桂林市思奇通信设备有限公司 | Intelligent cotton bud topping system |
- 2020
- 2020-05-28 CN CN202010465318.2A patent/CN111627059B/en not_active Expired - Fee Related
Patent Citations (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0997342A (en) * | 1995-08-03 | 1997-04-08 | Sumitomo Electric Ind Ltd | Tree separation distance measurement system |
| JPH11125506A (en) * | 1997-10-22 | 1999-05-11 | Bando Chem Ind Ltd | Length measuring method and device for blades etc. |
| JP2002203242A (en) * | 2000-12-28 | 2002-07-19 | Japan Science & Technology Corp | Plant recognition system |
| CA2391670A1 (en) * | 2001-06-29 | 2002-12-29 | Samsung Electronics Co., Ltd. | Hierarchical image-based representation of still and animated three-dimensional object, method and apparatus for using this representation for the object rendering |
| US20030052878A1 (en) * | 2001-06-29 | 2003-03-20 | Samsung Electronics Co., Ltd. | Hierarchical image-based representation of still and animated three-dimensional object, method and apparatus for using this representation for the object rendering |
| JP2005182164A (en) * | 2003-12-16 | 2005-07-07 | Sanyo Electric Co Ltd | Image area determination method, and image area determination apparatus, image correction apparatus, and digital watermark extraction apparatus capable of using method |
| US20060013482A1 (en) * | 2004-06-23 | 2006-01-19 | Vanderbilt University | System and methods of organ segmentation and applications of same |
| US20060072849A1 (en) * | 2004-09-27 | 2006-04-06 | Siemens Medical Solutions Usa, Inc. | Multi-leaf collimator position sensing |
| WO2010063252A1 (en) * | 2008-12-03 | 2010-06-10 | Forschungszentrum Jülich GmbH | Method for measuring the growth of leaf disks of plants and apparatus suited therefor |
| DE102010028382A1 (en) * | 2010-04-29 | 2011-11-03 | Siemens Aktiengesellschaft | Method for processing tomographic image data from X-ray computed tomography investigation of liver for recognition of liver tumor, involves performing iterative classification, and calculating image mask from last probability image |
| WO2013109625A1 (en) * | 2012-01-17 | 2013-07-25 | Alibaba Group Holding Limited | Image index generation based on similarities of image features |
| JP2015095115A (en) * | 2013-11-12 | 2015-05-18 | 国立大学法人富山大学 | Area division method, area division program and image processing system |
| WO2016014040A1 (en) * | 2014-07-22 | 2016-01-28 | Hewlett-Packard Development Company, L.P. | Recovering planar projections |
| JP2018161058A (en) * | 2017-03-24 | 2018-10-18 | キッセイコムテック株式会社 | Plant growth state evaluation method, plant growth state evaluation program, plant growth state evaluation apparatus, and plant monitoring system |
| CN107220647A (en) * | 2017-06-05 | 2017-09-29 | 中国农业大学 | Crop location of the core method and system under a kind of blade crossing condition |
| WO2019000455A1 (en) * | 2017-06-30 | 2019-01-03 | 上海联影医疗科技有限公司 | Method and system for segmenting image |
| WO2019113998A1 (en) * | 2017-12-11 | 2019-06-20 | 江苏大学 | Method and device for monitoring comprehensive growth of potted lettuce |
| CN110119793A (en) * | 2019-03-27 | 2019-08-13 | 中国电建集团华东勘测设计研究院有限公司 | Forest transplants location determining method, system, storage equipment and electronic equipment |
| CN111126316A (en) * | 2019-12-27 | 2020-05-08 | 湖南省农业信息与工程研究所 | Four-leaf grass positioning and identifying method based on image processing |
| CN111194636A (en) * | 2020-02-21 | 2020-05-26 | 桂林市思奇通信设备有限公司 | Intelligent cotton bud topping system |
Non-Patent Citations (1)
| Title |
|---|
| Huang Zhuqin: "Radius triangle description method of target shape and its application in leaf image classification" * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112470735A (en) * | 2020-11-11 | 2021-03-12 | 江苏大学 | Regular-shape nursery stock automatic trimming device and method based on three-dimensional positioning |
| CN112470735B (en) * | 2020-11-11 | 2022-07-22 | 江苏大学 | Regular-shape nursery stock automatic trimming device and method based on three-dimensional positioning |
| CN113298768A (en) * | 2021-05-20 | 2021-08-24 | 山东大学 | Cotton detection, segmentation and counting method and system |
| CN113298768B (en) * | 2021-05-20 | 2022-11-08 | 山东大学 | Cotton detection, segmentation and counting method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111627059B (en) | 2023-05-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110807355B (en) | A pointer meter detection and reading recognition method based on mobile robot | |
| CN111223133B (en) | Registration method of heterogeneous images | |
| CN110223355B (en) | Feature mark point matching method based on dual epipolar constraint | |
| CN109685078B (en) | Infrared image identification method based on automatic annotation | |
| CN111624203B (en) | A non-contact measurement method for relay contact uniformity based on machine vision | |
| CN109859101B (en) | Method and system for thermal infrared image recognition of crop canopy | |
| CN108470338B (en) | A water level monitoring method | |
| CN113469178A (en) | Electric power meter identification method based on deep learning | |
| CN107507174A (en) | Power plant's instrument equipment drawing based on hand-held intelligent inspection is as recognition methods and system | |
| CN113705564B (en) | A method for identifying and reading pointer instruments | |
| CN115294317A (en) | Pointer type instrument reading intelligent detection method for industrial production factory | |
| CN110838145A (en) | Visual positioning and mapping method for indoor dynamic scene | |
| CN114331995A (en) | A real-time localization method based on multi-template matching based on improved 2D-ICP | |
| CN114708208A (en) | Famous tea tender shoot identification and picking point positioning method based on machine vision | |
| CN110991360A (en) | Robot inspection point location intelligent configuration method based on visual algorithm | |
| CN115187612A (en) | Plane area measuring method, device and system based on machine vision | |
| CN108763575A (en) | Photo control point automatically selecting method based on photo control point database | |
| CN112488244A (en) | Method for automatically counting densely distributed small target pests in point labeling mode by utilizing thermodynamic diagram | |
| CN111627059A (en) | Method for positioning center point position of cotton blade | |
| CN116385477A (en) | A Method of Tower Image Registration Based on Image Segmentation | |
| CN110197113B (en) | A face detection method with high precision anchor point matching strategy | |
| CN111724354A (en) | Image processing-based method for measuring spike length and small spike number of multiple wheat | |
| CN114972948A (en) | Neural detection network-based identification and positioning method and system | |
| CN118196669A (en) | High-precision field crop seedling detection method based on unmanned aerial vehicle and deep learning | |
| CN111598177A (en) | An Adaptive Maximum Sliding Window Matching Method for Low Overlap Image Matching |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20230530 |
| CF01 | Termination of patent right due to non-payment of annual fee | |