
CN102096927A - Target tracking method of independent forestry robot - Google Patents


Info

Publication number
CN102096927A
CN102096927A (application CN201110028665A, filed as CN 201110028665)
Authority
CN
China
Prior art keywords
target
image
center
cloud terrace
autonomous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110028665
Other languages
Chinese (zh)
Inventor
阚江明
罗琴娟
李文彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forestry University filed Critical Beijing Forestry University
Priority to CN 201110028665 priority Critical patent/CN102096927A/en
Publication of CN102096927A publication Critical patent/CN102096927A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking method for an autonomous forestry robot. The method applies to the target tracking of an autonomous forestry robot comprising computer vision, a digitally controlled pan-tilt, and a central control computer, and consists of a moving-target detection part and a real-time pan-tilt motion-control part. The moving-target detection part comprises an image preprocessing module, a motion-information acquisition module based on multi-frame differencing, a moving-target discrimination module, and a target-center coordinate calculation module; it detects moving-target information and records the target's bounding rectangle and center coordinates. The real-time pan-tilt motion-control part keeps the target's center within the region near the image center, so that during operation the working target of the autonomous forestry robot always lies at or near the center of the image captured by its vision system.


Description

Target Tracking Method for Autonomous Forestry Robot

Technical Field

The invention relates to robot target tracking technology, and in particular to a target tracking method for an autonomous forestry robot.

Background Art

Autonomous forestry robots are a class of special-purpose robots whose study is attracting increasing attention, largely because of the particular environment of forest regions. During operation, an autonomous forestry robot should keep its working target at or near the center of the image captured by its vision system. Because both the target and the robot may be moving, the working target is in motion relative to the robot, and the target's shape and size are uncertain. To keep the working target at or near the image center, the system must therefore detect the moving target in real time, compute its center coordinates, and adjust the direction and angle of the pan-tilt in real time according to the difference between the target's center coordinates and the image center position.

In the prior art, because the target to be detected is uncertain, and especially when the moving target is complex and its color and shape are hard to describe, real-time detection of the target in a video sequence is difficult; meanwhile, real-time adaptive control algorithms for the pan-tilt are rarely reported.

Summary of the Invention

The object of the present invention is to provide a target tracking method that enables an autonomous forestry robot to keep its working target at or near the center of the image of its vision system throughout operation.

The object of the present invention is achieved through the following technical solution:

The target tracking method of the present invention applies to the target tracking of an autonomous forestry robot comprising computer vision, a digitally controlled pan-tilt, and a central control computer, and comprises a moving-target detection part and a real-time pan-tilt motion-control part.

The moving-target detection part comprises an image preprocessing module, a motion-information acquisition module based on multi-frame differencing, a moving-target discrimination module, and a target-center coordinate calculation module.

The image preprocessing module performs image filtering, grayscale conversion, and piecewise-linear gray-level transformation on the multi-frame images.

The motion-information acquisition module first computes pairwise differences over all images, segments each difference image, and then accumulates (sums) the segmented difference images to obtain moving-target information.

The moving-target discrimination module judges whether a moving target exists according to a set threshold.

If a moving target exists, the target-center coordinate calculation module records the target's bounding rectangle and center coordinates.

The real-time pan-tilt motion-control part keeps the target's center within the region near the image center.

As can be seen from the technical solution above, the method comprises a moving-target detection part and a real-time pan-tilt motion-control part: the detection part detects moving-target information and records the target's bounding rectangle and center coordinates, and the control part keeps the target's center near the image center, so that during operation the working target always lies at or near the center of the image of the forestry robot's vision system.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the target tracking task of an autonomous forestry robot according to an embodiment of the present invention;

Fig. 2 is a structural diagram of the computer-vision-based real-time pan-tilt control system in an embodiment of the present invention;

Fig. 3 is the overall flowchart of target tracking for the autonomous forestry robot in an embodiment of the present invention;

Fig. 4 is the flowchart of real-time moving-target detection in an embodiment of the present invention;

Fig. 5 is the flowchart of real-time pan-tilt motion control in an embodiment of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

Embodiments of the present invention are described in further detail below with reference to the drawings.

The target tracking method of the present invention applies to the target tracking of an autonomous forestry robot comprising computer vision, a digitally controlled pan-tilt, and a central control computer. A preferred embodiment is as follows:

The method comprises a moving-target detection part and a real-time pan-tilt motion-control part.

The moving-target detection part comprises an image preprocessing module, a motion-information acquisition module based on multi-frame differencing, a moving-target discrimination module, and a target-center coordinate calculation module.

The image preprocessing module performs image filtering, grayscale conversion, and piecewise-linear gray-level transformation on the multi-frame images.
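As an illustration, the piecewise-linear gray-level transformation mentioned here can be sketched as follows. The breakpoints (r1, s1) and (r2, s2) are illustrative assumptions — the patent does not specify their values:

```python
def piecewise_linear(gray, r1=70, s1=30, r2=180, s2=220):
    """Piecewise-linear gray-level transform: map [0, r1] -> [0, s1],
    [r1, r2] -> [s1, s2], and [r2, 255] -> [s2, 255], stretching the
    contrast of the mid-range gray levels. Breakpoint values are
    illustrative, not taken from the patent."""
    if gray <= r1:
        return s1 * gray / r1
    if gray <= r2:
        return s1 + (s2 - s1) * (gray - r1) / (r2 - r1)
    return s2 + (255 - s2) * (gray - r2) / (255 - r2)
```

Applied per pixel after filtering and grayscale conversion, this compresses the dark and bright extremes and expands the mid-range where scene detail typically lies.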

The motion-information acquisition module first computes pairwise differences over all images, segments each difference image, and then accumulates (sums) the segmented difference images to obtain moving-target information.

The moving-target discrimination module judges whether a moving target exists according to a set threshold.

If a moving target exists, the target-center coordinate calculation module records the target's bounding rectangle and center coordinates.

The real-time pan-tilt motion-control part keeps the target's center within the region near the image center.

The motion-information acquisition module obtains moving-target information through the following steps:

A. Image difference calculation: the video image sequence and sampling rate are configurable, so that targets moving at different speeds can be extracted accurately;

B. Threshold segmentation of the difference images: each difference image is threshold-segmented to obtain a binary image;

C. All binary images are summed to locate regions of persistent motion, yielding the boundary information of the moving target.

In step A, the default setting is n = 3 grayscale images per group with a sampling interval of 0.2 s; that is, one frame is captured every 200 ms and 3 consecutive frames form a group. Pairwise differencing of the 3 images yields n_Δf = 3×(3−1)/2 = 3 difference images;

In step B, the segmentation threshold is computed by the maximum between-class variance (Otsu) method: a candidate threshold divides the pixel gray levels into two classes, and the threshold that maximizes the distance between the two class means is taken as the optimal segmentation threshold;

In step C, noise and interference points are removed from the summed image by the morphological operation of erosion followed by dilation (opening), yielding the boundary information of the moving target.
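Steps A–C can be sketched as follows. The rule that a pixel must be active in every pairwise difference is one reasonable reading of "locating regions of persistent motion", and the morphological opening step is omitted for brevity:

```python
import numpy as np
from itertools import combinations

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold: pick the cut
    that maximizes w0*w1*(m0 - m1)^2, i.e. best separates the means
    of the two gray-level classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def motion_mask(frames):
    """Steps A-C: pairwise absolute differences (n*(n-1)/2 of them),
    Otsu-threshold each into a binary map, then sum the maps; a pixel
    active in every pair is treated as persistent motion."""
    acc = np.zeros(frames[0].shape, dtype=int)
    pairs = list(combinations(frames, 2))
    for a, b in pairs:
        d = np.abs(a.astype(int) - b.astype(int)).astype(np.uint8)
        acc += (d > otsu_threshold(d)).astype(int)
    return acc == len(pairs)
```

With the default n = 3, `motion_mask` consumes one group of three consecutive frames and returns a boolean mask of the persistently moving region.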

The set threshold is an area threshold for the moving target;

The target center coordinates are the ratios of the first-order moments to the zero-order moment of the target image.
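In code, the center as the ratio of first-order to zero-order moments of a binary target mask looks like this; the bounding-rectangle helper is an added convenience for the W and H used later, not text from the patent:

```python
import numpy as np

def target_center(mask):
    """Target center as the ratio of first-order to zero-order image
    moments over a binary mask: (m10/m00, m01/m00)."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size                 # zero-order moment = target area
    if m00 == 0:
        return None               # no target pixels
    return xs.sum() / m00, ys.sum() / m00   # (x, y) centroid

def bounding_rectangle(mask):
    """Axis-aligned bounding rectangle (x, y, W, H) of the target."""
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```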

The real-time pan-tilt motion-control part computes the horizontal and vertical angle adjustments of the digitally controlled pan-tilt from the target center coordinates, forming a closed-loop control system.

The control algorithm of the real-time pan-tilt motion-control part is as follows:

Let the horizontal length of the target's bounding rectangle be W and its vertical length H, the target center coordinates (x, y), and the image center coordinates (x0, y0). The adjustment steps of the digitally controlled pan-tilt are as follows:

D. Compute the difference between the target center and the image center, (Δx, Δy), where Δx = x0 − x and Δy = y0 − y;

E. If |Δx| ≤ δ and |Δy| ≤ δ (the condition appears only as a formula image in the source), the target is already at or near the image center, the pan-tilt needs no adjustment, and the procedure returns to step D; otherwise the pan-tilt must be adjusted and the procedure goes to step F;

F. If |Δx| > δ, the pan-tilt's horizontal angle is adjusted by Δθx = kxΔx, where kx is a gain whose formula appears only as an image in the source. When Δθx > 0 the pan-tilt should turn counter-clockwise in the horizontal direction; when Δθx < 0 it should turn clockwise. If |Δy| > δ, the vertical angle is adjusted by Δθy = kyΔy, with ky likewise a gain given only as an image. When Δθy > 0 the pan-tilt should tilt downward; when Δθy < 0 it should tilt upward. After the horizontal and vertical adjustments are complete, the procedure returns to step D;

In the above, δ is the set threshold (tolerance margin).
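Steps D–F above can be sketched as a single control iteration. The gains kx and ky are left as parameters because their formulas appear only as images in the source:

```python
def pan_tilt_step(x, y, x0, y0, kx, ky, delta):
    """One iteration of the closed-loop adjustment (steps D-F).
    kx, ky are pixel-to-angle gains, treated here as parameters.
    Returns (d_theta_x, d_theta_y); per the text, positive d_theta_x
    means a counter-clockwise pan and positive d_theta_y a downward
    tilt. Zero means the target is within the tolerance region."""
    dx, dy = x0 - x, y0 - y                           # step D: offset from center
    d_theta_x = kx * dx if abs(dx) > delta else 0.0   # step F, horizontal
    d_theta_y = ky * dy if abs(dy) > delta else 0.0   # step F, vertical
    return d_theta_x, d_theta_y
```

The tolerance δ makes the loop idle whenever the target center is already near the image center, matching the advantage described later in the text.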

The invention is mainly used by autonomous forestry robots operating in forest regions to detect targets automatically and adjust the pan-tilt angle in real time, keeping the center coordinates of the working target at or near the image center; it belongs to the field of intelligent forestry equipment.

The autonomous forestry robot of the present invention relies on the hardware of a computer-vision-based real-time pan-tilt control system consisting of three parts: a camera, a digitally controlled pan-tilt, and a control computer. The camera is mounted on the pan-tilt and connected to the monitoring computer through an IEEE 1394 interface; the monitoring computer reads the images captured by the camera in real time and processes and analyzes them with the algorithms proposed by the present invention. The pan-tilt is connected to the computer through an RS-232 interface, over which the computer sends real-time attitude control commands; the pan-tilt changes its attitude according to these commands to realize target tracking. The hardware types and interconnecting interfaces of the system are not limited to these.

The present invention has the following advantages:

The target tracking method does not depend on a specific hardware system; it applies to any computer-vision-based real-time pan-tilt control system comprising a monitoring computer, a digitally controlled pan-tilt, and a camera. The specific models and interface types are not limited to the IEEE 1394 and RS-232 interfaces described above; different interfaces merely require different drivers.

The automatically tracked target is obtained by automatic moving-target detection and requires no prior knowledge of the target, such as its size, shape, or type. The invention can therefore be applied to autonomous forestry robots performing different kinds of operations.

The adaptive real-time control of the pan-tilt is a closed loop: the control algorithm adaptively scales the horizontal and vertical angle adjustments with the distance between the target position and the image center, so a large offset produces a large adjustment and a small offset a small one. A tolerance margin δ is also introduced: the target center need not coincide exactly with the image center but only lie in the region near it, which prevents the pan-tilt from continually adjusting when the target is already very close to the center.

A specific embodiment is shown in Figs. 1 to 5:

Fig. 1, the schematic diagram of the forestry robot's target tracking task, shows the tracking objective and process; Fig. 2 shows the structure of the computer-vision-based real-time pan-tilt control system, i.e., the hardware that implements the proposed method; Figs. 3, 4, and 5 are the software flowcharts of the method.

As shown in Fig. 1, target tracking for the autonomous forestry robot means automatically controlling the horizontal and vertical angles of the pan-tilt, whether the robot is moving or stationary, so that the target always lies at or near the center of the image;

Fig. 2 shows the computer-vision-based real-time pan-tilt control system; no special requirements are placed on the camera, the pan-tilt, the monitoring computer, or the interfaces between them;

Referring again to Fig. 1, after the camera captures an image and the target is detected, the difference (Δx, Δy) between the target center coordinates and the image center position is computed, and the monitoring computer adjusts the attitude of the pan-tilt according to the proposed real-time motion-control method so that the target always lies at or near the center of the image.

Fig. 3 is the overall flowchart of target tracking. The camera and pan-tilt are first initialized using the relevant functions of the API libraries supplied by their manufacturers, after which the camera API functions perform image acquisition. The captured images are then processed and analyzed: the system first checks whether a target is present; if so, it computes the target's center position and checks whether that center lies at or near the image center; if not, it adjusts the pan-tilt attitude according to the target center coordinates and the real-time control algorithm, keeping the target center at or near the image center.
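The overall loop just described can be sketched with the hardware and detection stages abstracted as injected callables; `capture`, `detect`, and `adjust` are hypothetical names for illustration, not the patent's or any vendor's API:

```python
def track(capture, detect, adjust, center, delta, steps):
    """Overall tracking loop (after Fig. 3): acquire a frame group,
    detect a target center (or None), and command the pan-tilt only
    when the target lies outside the tolerance region delta around
    the image center. All callable names are illustrative."""
    x0, y0 = center
    for _ in range(steps):
        target = detect(capture())    # None when no moving target is found
        if target is None:
            continue
        x, y = target
        dx, dy = x0 - x, y0 - y
        if abs(dx) > delta or abs(dy) > delta:
            adjust(dx, dy)            # pan-tilt attitude command
```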

Fig. 4 is the flowchart of the moving-target detection program. The algorithm proceeds as follows. (1) Image difference calculation: the video image sequence and sampling rate are configurable for accurate extraction of targets moving at different speeds. The default is n = 3 grayscale images per group at a sampling interval of 0.2 s, i.e., one frame every 200 ms, with 3 consecutive frames forming a group; pairwise differencing yields n_Δf = 3×(3−1)/2 = 3 difference images. (2) Threshold segmentation: each difference image is thresholded into a binary image. The threshold is computed by the maximum between-class variance criterion: a candidate threshold Z′ divides the pixel gray levels into two classes, and the value Z that maximizes the distance between the two class means is the optimal segmentation threshold. (3) All thresholded difference images are summed to locate regions of persistent motion and obtain the boundary of the moving target. Noise and interference points are removed from the summed image by morphological erosion followed by dilation, finally yielding a clean boundary of the moving target.

Fig. 5 is the flowchart of real-time pan-tilt motion control and gives the attitude-adjustment procedure. Let the horizontal length of the target's bounding rectangle be W and its vertical length H, the target center coordinates (x, y), and the image center coordinates (x0, y0). The adjustment algorithm proceeds as follows. (1) Compute the difference between the target center and the image center, (Δx, Δy), where Δx = x0 − x and Δy = y0 − y. (2) If |Δx| ≤ δ and |Δy| ≤ δ (the condition appears only as a formula image in the source), the target is already at or near the image center, no adjustment is needed, and the procedure returns to step (1); otherwise the pan-tilt must be adjusted and the procedure goes to step (3). (3) If |Δx| > δ, the horizontal angle is adjusted by Δθx = kxΔx, where kx is a gain whose formula appears only as an image in the source; when Δθx > 0 the pan-tilt turns counter-clockwise in the horizontal direction, and when Δθx < 0 it turns clockwise. If |Δy| > δ, the vertical angle is adjusted by Δθy = kyΔy, with ky likewise a gain given only as an image; when Δθy > 0 the pan-tilt tilts downward, and when Δθy < 0 it tilts upward. After the horizontal and vertical adjustments are complete, the procedure returns to step (1).

The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention, which shall therefore be determined by the claims.

Claims (6)

1. A target tracking method for an autonomous forestry robot, the method being applied to the target tracking of an autonomous forestry robot comprising computer vision, a digitally controlled pan-tilt, and a central control computer, characterized in that it comprises a moving-target detection part and a real-time pan-tilt motion-control part;
the moving-target detection part comprises an image preprocessing module, a motion-information acquisition module based on multi-frame differencing, a moving-target discrimination module, and a target-center coordinate calculation module;
the image preprocessing performs image filtering, grayscale conversion, and piecewise-linear gray-level transformation on the multi-frame images;
the motion-information acquisition module first computes pairwise differences over all images, segments each difference image, and then accumulates the segmented difference images to obtain moving-target information;
the moving-target discrimination module judges whether a moving target exists according to a preset threshold;
if a moving target exists, the target-center coordinate calculation module records the bounding rectangle and center coordinates of the target;
the real-time pan-tilt motion-control part keeps the center of the target within the region near the image center.
2. The method according to claim 1, characterized in that the motion-information acquisition module obtains the moving-target information through the steps of:
A. image difference calculation: setting the video image sequence and the sampling rate so as to accurately extract targets moving at different speeds;
B. threshold segmentation of the difference images: performing threshold segmentation on each difference image to obtain a binary image;
C. summing all binary images to locate regions of persistent motion and obtain the boundary information of the moving target.
3. The method according to claim 2, characterized in that:
in step A, the default setting is n = 3 grayscale images per group with a sampling interval of 0.2 s, i.e., one frame captured every 200 ms and 3 consecutive frames forming a group; pairwise differencing of the 3 images yields n_Δf = 3×(3−1)/2 = 3 difference images;
in step B, the segmentation threshold is computed by the maximum between-class variance threshold method: a candidate threshold divides the pixel gray levels into two classes, and the threshold that maximizes the distance between the two class means is the optimal segmentation threshold;
in step C, noise and interference points are removed from the summed image by the morphological operation of erosion followed by dilation, yielding the boundary information of the moving target.
4. The method according to claim 1, characterized in that the preset threshold is an area threshold of the moving target;
the target center coordinates are the ratios of the first-order moments to the zero-order moment of the target image.
5. The method according to any one of claims 1 to 4, characterized in that the real-time pan-tilt motion-control part computes the horizontal and vertical angle adjustments of the digitally controlled pan-tilt from the target center coordinates, forming a closed-loop control system.
6. The autonomous forestry robot target tracking method according to claim 5, characterized in that the control algorithm of said pan-tilt real-time motion control section comprises:
Let the horizontal length of the target's bounding rectangle be W, its vertical length be H, the target center coordinates be (x, y), and the image center coordinates be (x0, y0). The adjustment procedure of the digitally controlled pan-tilt is as follows:
D. Compute the positional offset (Δx, Δy) between the target center and the image center, where Δx = x0 − x and Δy = y0 − y;
E. If the condition of Figure FDA0000045437890000021 holds, the target is at or near the image center, the pan-tilt needs no adjustment, and the procedure returns to step D; if the condition of Figure FDA0000045437890000022 holds, the pan-tilt must be adjusted and the procedure goes to step F;
F. If |Δx| > δ, the pan-tilt horizontal adjustment angle is Δθx = kx·Δx, where kx is given by the formula of Figure FDA0000045437890000023.
When Δθx > 0, the pan-tilt should rotate counter-clockwise in the horizontal direction; when Δθx < 0, it should rotate clockwise. If |Δy| > δ, the pan-tilt vertical adjustment angle is Δθy = ky·Δy, where ky is given by the formula of Figure FDA0000045437890000024.
When Δθy > 0, the pan-tilt should adjust downward in the vertical direction; when Δθy < 0, it should adjust upward. Once the horizontal and vertical adjustments are complete, the procedure returns to step D;
In the above formulas, δ is a preset threshold.
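One iteration of steps D–F can be sketched as below. The gain formulas for kx, ky and the step-E conditions are lost figure references (Figure FDA…), so they are modeled here as assumed proportional constants with a per-axis deadband δ; only the sign convention is taken from the claim (Δθx > 0 means counter-clockwise pan, Δθy > 0 means downward tilt):

```python
def pan_tilt_step(x, y, x0, y0, kx, ky, delta):
    """One closed-loop iteration of steps D-F.

    Returns (dtheta_x, dtheta_y). Positive dtheta_x -> pan counter-clockwise,
    positive dtheta_y -> tilt downward, matching the claim's sign convention.
    kx, ky, delta are assumed constants: the original formulas are lost figures.
    """
    dx, dy = x0 - x, y0 - y                          # step D: offset from center
    dtheta_x = kx * dx if abs(dx) > delta else 0.0   # steps E/F, horizontal axis
    dtheta_y = ky * dy if abs(dy) > delta else 0.0   # steps E/F, vertical axis
    return dtheta_x, dtheta_y

# Target centroid at (300, 240) in a 640x480 image centered at (320, 240):
dtx, dty = pan_tilt_step(300, 240, 320, 240, kx=0.1, ky=0.1, delta=10)
```

The deadband around the image center keeps the pan-tilt still while the target is already near the center, so the closed loop does not chatter on small detection jitter.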
CN 201110028665 2011-01-26 2011-01-26 Target tracking method of independent forestry robot Pending CN102096927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110028665 CN102096927A (en) 2011-01-26 2011-01-26 Target tracking method of independent forestry robot

Publications (1)

Publication Number Publication Date
CN102096927A true CN102096927A (en) 2011-06-15

Family

ID=44130004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110028665 Pending CN102096927A (en) 2011-01-26 2011-01-26 Target tracking method of independent forestry robot

Country Status (1)

Country Link
CN (1) CN102096927A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1721144A (en) * 2004-07-13 2006-01-18 中国科学院自动化研究所 A fast tracking method and device based on target surface color

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
《兵工学报》 (Acta Armamentarii), 30 Nov. 2001, Cheng Xiangquan et al., "A comprehensive image tracking algorithm for fast targets", entire document, claims 1-6, vol. 22, no. 4 *
《机器人》 (Robot), 28 May 2001, Liu Ya et al., "A field visual surveillance system based on dominant motion analysis: moving target detection, tracking, and panorama generation", entire document, claims 1-6, vol. 23, no. 3 *
《计算机工程与应用》 (Computer Engineering and Applications), 21 May 2005, Wang Lu et al., "Real-time detection and tracking of moving targets by a mobile robot", entire document, claims 1-6, vol. 41, no. 15 *
《计算机技术与发展》 (Computer Technology and Development), 10 Apr. 2009, Liu Wei et al., "A method for a mobile robot to detect and track moving targets", pages 106-107, sections 1-2, claims 1-6, vol. 19, no. 4 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881161B (en) * 2012-09-28 2014-05-14 武汉烽火众智数字技术有限责任公司 Method and device for detecting moving vehicles on basis of multi-frame differences and cast shadow removal
CN102881161A (en) * 2012-09-28 2013-01-16 武汉烽火众智数字技术有限责任公司 Method and device for detecting moving vehicles on basis of multi-frame differences and cast shadow removal
CN103617631A (en) * 2013-11-11 2014-03-05 山东神戎电子股份有限公司 Tracking method based on center detection
CN103617631B (en) * 2013-11-11 2017-07-04 山东神戎电子股份有限公司 A tracking method based on center detection
CN106688228B (en) * 2014-09-10 2019-06-14 富士胶片株式会社 Camera control device, camera control method, camera, and camera system
CN106688228A (en) * 2014-09-10 2017-05-17 富士胶片株式会社 Imaging control device, imaging control method, camera, camera system and program
CN105611380A (en) * 2015-12-23 2016-05-25 小米科技有限责任公司 Video image processing method and device
CN105611386A (en) * 2015-12-23 2016-05-25 小米科技有限责任公司 Video picture processing method and device
US11158064B2 (en) 2016-02-23 2021-10-26 Yotou Technology (Hangzhou) Co., Ltd. Robot monitoring system based on human body information
WO2017143949A1 (en) * 2016-02-23 2017-08-31 芋头科技(杭州)有限公司 Robot monitoring system based on human body information
CN107273850A (en) * 2017-06-15 2017-10-20 上海工程技术大学 An autonomous following method based on a mobile robot
CN107357318A (en) * 2017-06-16 2017-11-17 中国科学院长春光学精密机械与物理研究所 Control method and control system for stabilized pan-tilt rotation
CN107357318B (en) * 2017-06-16 2019-12-17 中国科学院长春光学精密机械与物理研究所 Control method and control system for stabilizing pan-tilt rotation and stabilizing pan-tilt
CN107657639A (en) * 2017-08-09 2018-02-02 武汉高德智感科技有限公司 A method and apparatus for quickly locating a target
CN107433573A (en) * 2017-09-04 2017-12-05 上海理工大学 Intelligent binocular automatic grasping robotic arm
CN107433573B (en) * 2017-09-04 2021-04-16 上海理工大学 Intelligent binocular automatic grasping robotic arm
CN109981972A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Target tracking method of robot, robot and storage medium
CN109981972B (en) * 2017-12-27 2021-01-08 深圳市优必选科技有限公司 Target tracking method of robot, robot and storage medium
CN108230366A (en) * 2017-12-28 2018-06-29 厦门市美亚柏科信息股份有限公司 An object tracking method
CN108198199A (en) * 2017-12-29 2018-06-22 北京地平线信息技术有限公司 Moving body track method, moving body track device and electronic equipment
CN108733083A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Method and device for controlling robot rotation, robot, and storage medium
CN109460047A (en) * 2018-10-23 2019-03-12 昆山优尼电能运动科技有限公司 Vision-navigation-based autonomous hierarchical landing method and system for unmanned aerial vehicles
CN109460047B (en) * 2018-10-23 2022-04-12 昆山优尼电能运动科技有限公司 Unmanned aerial vehicle autonomous graded landing method and system based on visual navigation
CN109765939A (en) * 2018-12-21 2019-05-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Pan-tilt control method and device for an unmanned aerial vehicle, and storage medium
CN110068827A (en) * 2019-04-29 2019-07-30 西北工业大学 A method for autonomous target ranging by an unmanned aerial vehicle
CN110738689A (en) * 2019-10-22 2020-01-31 武汉工程大学 A method, system and device for a car to automatically follow and avoid a target
CN110738689B (en) * 2019-10-22 2024-01-26 武汉工程大学 A method, system and device for a car to automatically follow and avoid targets
CN111163261B (en) * 2019-12-25 2022-03-01 上海肇观电子科技有限公司 Target detection method, circuit, visual impairment assistance device, electronic device, and medium
CN111163261A (en) * 2019-12-25 2020-05-15 上海肇观电子科技有限公司 Target detection method, circuit, visual impairment assistance device, electronic device, and medium
CN112233141A (en) * 2020-09-28 2021-01-15 国网浙江省电力有限公司杭州供电公司 A moving target tracking method and system based on UAV vision in power scene
CN112233141B (en) * 2020-09-28 2022-10-14 国网浙江省电力有限公司杭州供电公司 Moving target tracking method and system based on unmanned aerial vehicle vision in electric power scene
CN114147704A (en) * 2021-11-18 2022-03-08 南京师范大学 Mechanical arm accurate positioning and grabbing method based on depth vision and increment closed loop
CN114147704B (en) * 2021-11-18 2023-09-22 南京师范大学 Mechanical arm accurate positioning and grabbing method based on depth vision and incremental closed loop
CN114993179A (en) * 2022-05-31 2022-09-02 福建信息职业技术学院 Non-contact object form and size measuring system based on linear regression
CN115114466A (en) * 2022-08-30 2022-09-27 成都实时技术股份有限公司 Method, system, medium and electronic device for searching target information image
CN115114466B (en) * 2022-08-30 2022-12-13 成都实时技术股份有限公司 Method, system, medium and electronic device for searching target practice information image
CN115580713A (en) * 2022-11-07 2023-01-06 广东电网有限责任公司 Intelligent safety monitoring control auxiliary device and method
CN116563335A (en) * 2022-12-16 2023-08-08 福建信息职业技术学院 Object tracking device and method based on an improved Siamese network algorithm

Similar Documents

Publication Publication Date Title
CN102096927A (en) Target tracking method of independent forestry robot
US10970859B2 (en) Monitoring method and device for mobile target, monitoring system and mobile robot
CN103149939B A vision-based dynamic target tracking and localization method for unmanned aerial vehicles
CN112486171B (en) Robot obstacle avoidance method based on vision
KR101486308B1 Device and method for controlling a mobile robot to track a moving object, and robot therefor
CN113674299A (en) A 3D printing method and device
KR100738522B1 (en) Camera / object movement classification and object extraction apparatus and method in video surveillance system
KR101257207B1 (en) Method, apparatus and computer-readable recording medium for head tracking
CN109434251B (en) Welding seam image tracking method based on particle filtering
CN105447853A (en) Flight device, flight control system and flight control method
JP2009198445A (en) Device and method for object detection
WO2003098922A1 (en) An imaging system and method for tracking the motion of an object
CN120928842B (en) Cleaning mode switching method and device for multifunctional cleaning robot for swimming pool
CN111260709B (en) Ground-assisted visual odometer method for dynamic environment
CN112862865A (en) Detection and identification method and device for underwater robot and computer storage medium
TWI641265B (en) Mobile target position tracking system
JP2010286963A (en) Moving object detection apparatus and moving object detection method
CN116310297B (en) Real-time human body fall detection system and method based on laser sensor and monocular camera
CN118605551A (en) A control method and system for a UAV landing mobile platform based on vision
TWI453698B Automatic tracking method for a dome camera
CN120841384B (en) Suspended object tracking and falling detection method and device based on multi-sensor fusion
Vergés-Llahí et al. Object tracking system using colour histograms
Varcheie et al. Active people tracking by a PTZ camera in IP surveillance system
CN115410231A (en) Multi-sensor-based personnel following method in map-free scene
CN106303453A An active tracking method based on a high-speed dome camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110615