CN101394546B - Video target contour tracking method and device - Google Patents

Info

Publication number
CN101394546B
Authority
CN
China
Prior art keywords
target
particle
contour
centroid
random particles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2007101541207A
Other languages
Chinese (zh)
Other versions
CN101394546A
Inventor
于纪征
曾贵华
赵光耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN2007101541207A
Publication of CN101394546A
Application granted
Publication of CN101394546B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

本发明实施例公开了一种视频目标轮廓跟踪方法,通过跟踪目标的质心得到的目标质心的位置来调整轮廓跟踪过程中对各个随机粒子进行状态转移的参数,使对随机粒子进行状态转移的参数能够随着目标质心位置的变化进行相应的变化,从而使目标轮廓的跟踪更加准确。本发明实施例公开了另一种视频目标轮廓跟踪方法,通过跟踪目标的质心得到目标质心的位置来对各个随机粒子进行评估,按照各粒子轮廓的质心与跟踪得到的目标质心的远近程度来调整粒子的权重值,使各个粒子加权累加得到的目标轮廓更加接近真实的目标轮廓,从而使目标轮廓的跟踪更加准确。本发明实施例还公开了两种视频目标轮廓跟踪的装置。

Figure 200710154120

An embodiment of the present invention discloses a video target contour tracking method in which the position of the target centroid, obtained by tracking the target's centroid, is used to adjust the state transition parameters of each random particle during contour tracking, so that these parameters change as the position of the target centroid changes, making contour tracking more accurate. Another embodiment discloses a second video target contour tracking method, which uses the tracked target centroid position to evaluate each random particle: the weight of each particle is adjusted according to how close its contour centroid is to the tracked target centroid, so that the target contour obtained by the weighted accumulation of the particles is closer to the true contour, again making tracking more accurate. Embodiments of the invention also disclose two corresponding video target contour tracking devices.

Description

视频目标轮廓跟踪方法及装置 Video target contour tracking method and device

技术领域 Technical Field

本发明实施例涉及计算机视觉技术和图像处理技术领域,特别涉及一种视频目标轮廓跟踪方法和装置。Embodiments of the present invention relate to the fields of computer vision technology and image processing technology, and in particular, to a video object contour tracking method and device.

背景技术 Background Art

目前，人类的视觉系统是获取外界信息的主要途径，而运动目标的检测和跟踪则是视觉领域的一个重要课题，实时的目标跟踪更是计算机视觉中的关键技术。目前视频安全监控系统在银行、交通等各部门得到了越来越多的应用，实时地进行视频目标物体的跟踪更是能够起到预警的作用，所以对目标物体进行实时跟踪得到了人们越来越多的关注。At present, the human visual system is the primary means of acquiring information about the outside world, and the detection and tracking of moving objects is an important topic in the field of vision; real-time target tracking in particular is a key technology in computer vision. Video security surveillance systems are now increasingly deployed in banking, transportation, and other sectors, and tracking video targets in real time can serve as an early warning, so real-time tracking of target objects has attracted growing attention.

视频目标跟踪方法有多种，根据是否在帧间进行模式匹配，可以分为基于检测的方法和基于识别的方法。基于检测的方法是根据目标的特征直接在每一帧图像中提取目标，不需要在帧间传递目标的运动参数并进行匹配，比如差分检测的方法；基于识别的方法通常首先提取目标的某种特征，然后在每一帧图像中搜寻出与此特征最为匹配的区域即为所跟踪的目标。根据跟踪所得的结果可分为跟踪轮廓的方法和跟踪目标局部点的方法。常见的跟踪目标轮廓的方法主要是粒子滤波跟踪方法；跟踪目标局部点的方法主要有均值漂移跟踪方法等。There are many video target tracking methods; according to whether pattern matching is performed between frames, they can be divided into detection-based methods and recognition-based methods. A detection-based method extracts the target directly from each frame according to the target's features, without passing the target's motion parameters between frames for matching, e.g., differential detection. A recognition-based method usually first extracts some feature of the target and then searches each frame for the region that best matches this feature; that region is the tracked target. According to the tracking result, methods can further be divided into contour tracking methods and methods that track local points of the target. The most common contour tracking method is particle filter tracking; methods for tracking local points include mean shift tracking, among others.

跟踪轮廓的方法中，粒子滤波跟踪方法最为常用。粒子滤波又称为序列蒙特卡罗（SMC，Sequential Monte Carlo）方法，是以蒙特卡罗方法实现贝叶斯递推滤波的一种方法。根据贝叶斯滤波理论，给定当前时刻观察序列z_{1:k}，状态x_k的后验概率可利用(k-1)时刻的后验概率p(x_{k-1}|z_{k-1})以递归的方式估计得到，即 Among contour tracking methods, particle filter tracking is the most commonly used. The particle filter, also known as the Sequential Monte Carlo (SMC) method, implements Bayesian recursive filtering by Monte Carlo sampling. According to Bayesian filtering theory, given the observation sequence z_{1:k} up to the current time, the posterior probability of the state x_k can be estimated recursively from the posterior probability p(x_{k-1}|z_{k-1}) at time k-1, i.e.,

p(x_k | z_k) ∝ p(z_k | x_k) ∫_{x_{k-1}} p(x_k | x_{k-1}) p(x_{k-1} | z_{k-1})    (1)

其中p(z_k|x_k)为似然概率。where p(z_k | x_k) is the likelihood probability.

粒子滤波不需要获得概率函数的具体形式，而是利用N_s个带有权重的随机样本（粒子）{x_{k-1}^i, w_{k-1}^i} (i = 1, …, N_s)表示后验概率函数p(x_{k-1}|z_{k-1})，这样，式(1)中的积分就能用样本集的加权求和来估计，即 Particle filtering does not require the explicit form of the probability function; instead it represents the posterior probability function p(x_{k-1}|z_{k-1}) with N_s weighted random samples (particles) {x_{k-1}^i, w_{k-1}^i} (i = 1, …, N_s), so that the integral in equation (1) can be estimated by a weighted sum over the sample set, namely

p(x_k | z_k) ≈ p(z_k | x_k) Σ_i w_{k-1}^i p(x_k | x_{k-1}^i)    (2)

当样本数量足够多时,这种概率估算等同于后验概率密度函数。When the sample size is large enough, this probability estimate is equivalent to the posterior probability density function.
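The weighted-sample approximation of equation (2) can be illustrated with a minimal, self-contained Python sketch (illustrative only; the function and variable names are not from the patent): once a weighted sample set stands in for a probability density, expectations reduce to weighted sums.

```python
import random

def weighted_estimate(samples, weights):
    """Approximate an expectation by a weighted sum over samples (cf. eq. (2))."""
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, samples)) / total

# Estimate the mean of a Gaussian density from equally weighted samples;
# with enough samples the estimate converges to the true mean (here 5.0).
random.seed(0)
samples = [random.gauss(5.0, 1.0) for _ in range(10000)]
weights = [1.0] * len(samples)
estimate = weighted_estimate(samples, weights)
```

As the sample count grows, the weighted sum approaches the quantity the integral in equation (1) defines, which is the sense in which the estimate "is equivalent to the posterior probability density function".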

下面,以条件概率密度传播跟踪方法为例介绍利用粒子滤波进行视频目标跟踪的方法。Next, take the conditional probability density propagation tracking method as an example to introduce the method of video target tracking using particle filter.

条件概率密度传播跟踪方法是基于条件概率密度传播（Condensation，Conditional Density Propagation）算法的。Condensation算法是粒子滤波方法中的一种。利用Condensation算法进行轮廓跟踪时可以采用一种基于活动轮廓模型和形状空间的轮廓表征方法，例如，可以用B样条（B-Snake）的控制点来表征轮廓曲线，用形状空间来表征轮廓曲线可能的变化。目标轮廓的运动状态为T=(TX, TY, θ, SX, SY)，TX和TY分别是x方向和y方向目标轮廓质心的位置，θ为目标轮廓旋转的角度，SX和SY分别为目标在x方向和y方向的尺度。目标的形状空间参数S表示为：The conditional density propagation tracking method is based on the Condensation (Conditional Density Propagation) algorithm, one of the particle filter methods. When the Condensation algorithm is used for contour tracking, a contour representation based on the active contour model and a shape space can be adopted: for example, the control points of a B-spline (B-Snake) represent the contour curve, and the shape space represents the contour's possible variations. The motion state of the target contour is T = (TX, TY, θ, SX, SY), where TX and TY are the x- and y-positions of the contour centroid, θ is the rotation angle of the contour, and SX and SY are the scales of the target in the x and y directions. The shape space parameter S of the target is expressed as:

S = (TX, TY, SX cosθ − 1, SY cosθ − 1, −SY sinθ, SX sinθ)    (3)

这样,就可以表示出目标的轮廓曲线及其变化了。In this way, the contour curve of the target and its changes can be expressed.
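The mapping of equation (3) can be written as a small Python sketch, assuming exactly the state layout T = (TX, TY, θ, SX, SY) defined above (the function name is illustrative, not from the patent):

```python
import math

def shape_space(T):
    """Map a motion state T = (TX, TY, theta, SX, SY) to the shape space
    parameter S of eq. (3)."""
    TX, TY, theta, SX, SY = T
    return (TX, TY,
            SX * math.cos(theta) - 1.0,
            SY * math.cos(theta) - 1.0,
            -SY * math.sin(theta),
            SX * math.sin(theta))

# With no rotation and unit scale, the four shape terms vanish and S encodes
# a pure translation.
S = shape_space((10.0, 20.0, 0.0, 1.0, 1.0))
```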

图1为现有技术中采用Condensation算法实现视频目标轮廓跟踪的流程图。如图1所示，采用Condensation算法跟踪目标轮廓的过程主要包含以下的步骤。FIG. 1 is a flow chart of video target contour tracking with the Condensation algorithm in the prior art. As shown in FIG. 1, tracking the target contour with the Condensation algorithm mainly comprises the following steps.

步骤101,判断输入的图像数据及跟踪目标信息是否是新对象,即是否需要建立新的跟踪目标,如果是,则执行步骤102,否则执行步骤104。Step 101 , judging whether the input image data and tracking target information are new objects, that is, whether a new tracking target needs to be established, if so, go to step 102 , otherwise go to step 104 .

输入的图像数据可以是经过背景分割的图像数据,可以在初始帧图像中选定某个运动物体作为跟踪目标。The input image data may be image data after background segmentation, and a certain moving object may be selected as a tracking target in the initial frame image.

步骤102，利用现有的轮廓提取技术得到目标的轮廓向量，计算轮廓的质心位置，根据B样条技术求得B样条控制点QX_0和QY_0，根据目标的轮廓向量得到目标的运动状态初始值：Step 102: use an existing contour extraction technique to obtain the target's contour vector, compute the centroid of the contour, obtain the B-spline control points QX_0 and QY_0 by the B-spline technique, and derive the initial motion state of the target from the contour vector:

T_0 = (TX_0, TY_0, θ_0, SX_0, SY_0)    (4)

其中，TX_0和TY_0分别是x方向和y方向目标轮廓质心的位置，θ_0为目标轮廓旋转角度的初始值0，SX_0和SY_0分别为目标轮廓在x方向和y方向的尺度。Here TX_0 and TY_0 are the x- and y-positions of the target contour's centroid, θ_0 is the initial rotation angle of the contour (0), and SX_0 and SY_0 are the scales of the contour in the x and y directions.

步骤103，初始化N_s个粒子，各个粒子的初始权重w_0^i均为1/N_s，运动状态和形状空间参数分别为T_0^i，S_0^i (i = 1, 2, …, N_s)：Step 103: initialize N_s particles; the initial weight w_0^i of each particle is 1/N_s, and the motion state and shape space parameters are T_0^i, S_0^i (i = 1, 2, …, N_s):

TX_0^i = TX_0 + B_1 × ξ    (5)

TY_0^i = TY_0 + B_2 × ξ    (6)

θ_0^i = θ_0 + B_3 × ξ    (7)

SX_0^i = SX_0 + B_4 × ξ    (8)

SY_0^i = SY_0 + B_5 × ξ    (9)

其中，B_1、B_2、B_3、B_4、B_5为常数，ξ为[-1, +1]内的随机数；Here B_1, B_2, B_3, B_4, B_5 are constants and ξ is a random number in [-1, +1];

S_0^i = (TX_0^i, TY_0^i, SX_0^i cosθ_0^i − 1, SY_0^i cosθ_0^i − 1, −SY_0^i sinθ_0^i, SX_0^i sinθ_0^i)    (10)
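The initialization of equations (5)–(9) can be sketched as follows. This is an illustration, not the patent's code: the numeric values chosen for B_1…B_5 are assumptions for demonstration, since the patent only says they are constants.

```python
import random

B = (2.0, 2.0, 0.05, 0.02, 0.02)  # illustrative values for B_1..B_5 (assumed)

def init_particles(T0, n_particles, rng):
    """Spread particles around the initial motion state T0 (eqs. (5)-(9)):
    each component is perturbed by B_j * xi with xi uniform in [-1, +1]."""
    particles = []
    for _ in range(n_particles):
        particles.append(tuple(t + b * rng.uniform(-1.0, 1.0)
                               for t, b in zip(T0, B)))
    return particles

rng = random.Random(42)
T0 = (100.0, 50.0, 0.0, 1.0, 1.0)   # (TX0, TY0, theta0, SX0, SY0)
particles = init_particles(T0, 200, rng)
```

Every particle stays within ±B_j of the initial state in each component, which is exactly the spread the equations prescribe.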

步骤104，输入第k帧图像数据时，对各粒子状态进行状态转移，系统状态转移方程为：Step 104: when the image data of frame k is input, apply a state transition to each particle; the system state transition equations are:

TX_k^i = TX_{k-1}^i + B_1 × ξ_{1,k}^i    (11)

TY_k^i = TY_{k-1}^i + B_2 × ξ_{2,k}^i    (12)

θ_k^i = θ_{k-1}^i + B_3 × ξ_{3,k}^i    (13)

SX_k^i = SX_{k-1}^i + B_4 × ξ_{4,k}^i    (14)

SY_k^i = SY_{k-1}^i + B_5 × ξ_{5,k}^i    (15)

其中，B_1、B_2、B_3、B_4、B_5为常数，ξ为[-1, +1]内的随机数。Here B_1, B_2, B_3, B_4, B_5 are constants and ξ is a random number in [-1, +1].
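A one-frame state transition per equations (11)–(15) can be sketched as below (illustrative only; the B values are assumed, and each component draws its own independent ξ as the per-component indices ξ_{1,k}…ξ_{5,k} indicate):

```python
import random

def propagate(particles, B, rng):
    """One state-transition step (eqs. (11)-(15)): every component of every
    particle receives an independent perturbation B_j * xi, xi ~ U[-1, +1]."""
    return [tuple(t + b * rng.uniform(-1.0, 1.0) for t, b in zip(p, B))
            for p in particles]

rng = random.Random(7)
B = (3.0, 3.0, 0.1, 0.05, 0.05)              # illustrative constants (assumed)
particles = [(10.0, 10.0, 0.0, 1.0, 1.0)] * 100
moved = propagate(particles, B, rng)
```

Because B_1…B_5 are fixed constants here, the particle spread cannot follow changes in the target's speed; this is precisely the limitation the invention's first method addresses later.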

步骤105,利用步骤104得到的各个粒子的运动状态计算各个粒子的形状空间参数:Step 105, using the state of motion of each particle obtained in step 104 to calculate the shape space parameters of each particle:

S_k^i = (TX_k^i, TY_k^i, SX_k^i cosθ_k^i − 1, SY_k^i cosθ_k^i − 1, −SY_k^i sinθ_k^i, SX_k^i sinθ_k^i)    (16)

步骤106,计算各个粒子的B样条控制点向量。Step 106, calculating the B-spline control point vector of each particle.

对于粒子N_i，可以由其运动参数T^i和形状空间参数S^i求得其B样条的控制点向量：For particle N_i, the control point vector of its B-spline can be obtained from its motion parameters T^i and shape space parameters S^i:

(QX_k^i QY_k^i)^T = W^i S_k^i + (QX_0^i QY_0^i)^T    (17)

其中，
W^i = | 1  0  QX_0^i  0       0       QY_0^i |
      | 0  1  0       QY_0^i  QX_0^i  0      |
每个元素都是Nc×1的矩阵，Nc为控制点的数目，初始控制点QX_0^i和QY_0^i由各个粒子的形状空间参数得到。Here each element of W^i is an Nc×1 matrix, Nc is the number of control points, and the initial control points QX_0^i and QY_0^i are obtained from the shape space parameters of each particle.
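Expanding the matrix product of equation (17) row by row with the W^i above, each control point undergoes a scaled rotation plus translation. The closed form below is our reading of the extracted matrix and should be checked against the original figures; the function name is illustrative.

```python
import math

def transform_control_points(QX0, QY0, T):
    """Per-point expansion of eq. (17): with S from eq. (16), the product
    W S + (QX0 QY0)^T reduces to
        QX_k = TX + SX*cos(th)*QX0 + SX*sin(th)*QY0
        QY_k = TY - SY*sin(th)*QX0 + SY*cos(th)*QY0
    (assumed reduction, derived from the row layout of W)."""
    TX, TY, theta, SX, SY = T
    c, s = math.cos(theta), math.sin(theta)
    QX = [TX + SX * c * x + SX * s * y for x, y in zip(QX0, QY0)]
    QY = [TY + SY * c * y - SY * s * x for x, y in zip(QX0, QY0)]
    return QX, QY

# Identity motion (theta = 0, unit scale, zero translation) leaves the
# control points unchanged, as eq. (17) requires.
QX, QY = transform_control_points([1.0, 2.0], [3.0, 4.0],
                                  (0.0, 0.0, 0.0, 1.0, 1.0))
```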

步骤107，得到各个粒子的控制点向量后，就可以用B样条的方法拟合出各个粒子对应的轮廓曲线，拟合公式如下：Step 107: once the control point vector of each particle is known, the contour curve of each particle can be fitted by the B-spline method; the fitting formula is:

y_k^i(x) = Σ_{k=0}^{Nc−1} P_k B_{k,m}(x)    (18)

其中P_k (k = 0, 1, …, Nc−1)为第k个控制点的坐标，B_{k,m} (k = 0, 1, …, Nc−1)为m次规范B样条基函数。Here P_k (k = 0, 1, …, Nc−1) is the coordinate of the k-th control point and B_{k,m} (k = 0, 1, …, Nc−1) is the normalized B-spline basis function of degree m.
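The basis functions B_{k,m} in equation (18) can be evaluated with the standard Cox–de Boor recursion. The sketch below is a generic B-spline evaluator with a uniform knot vector, not the patent's implementation:

```python
def bspline_basis(k, m, t, knots):
    """Cox-de Boor recursion for the degree-m B-spline basis B_{k,m}(t)."""
    if m == 0:
        return 1.0 if knots[k] <= t < knots[k + 1] else 0.0
    left = right = 0.0
    if knots[k + m] != knots[k]:
        left = ((t - knots[k]) / (knots[k + m] - knots[k])
                * bspline_basis(k, m - 1, t, knots))
    if knots[k + m + 1] != knots[k + 1]:
        right = ((knots[k + m + 1] - t) / (knots[k + m + 1] - knots[k + 1])
                 * bspline_basis(k + 1, m - 1, t, knots))
    return left + right

def spline_point(controls, m, t, knots):
    """Evaluate eq. (18): y(t) = sum_k P_k * B_{k,m}(t)."""
    return sum(p * bspline_basis(k, m, t, knots) for k, p in enumerate(controls))

# On the valid range the bases form a partition of unity, so equal control
# point coordinates reproduce a constant curve.
knots = [0, 1, 2, 3, 4, 5, 6]
value = spline_point([2.0, 2.0, 2.0, 2.0], 2, 3.5, knots)
```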

步骤108，在轮廓曲线上随机抽取N个样点，计算当前帧的图像数据中各样点法线方向上灰度的梯度值最大的象素点，该点就是真实轮廓点的一个观测值，是根据当前帧图像数据计算获得的最接近目标真实轮廓的象素点。Step 108: randomly sample N points on the contour curve and, for each sample point, find the pixel in the current frame's image data with the largest grayscale gradient along the point's normal direction. That pixel is an observation of the true contour point, i.e., the pixel computed from the current frame's image data that is closest to the target's true contour.

对粒子N_i，可由其运动参数T^i和形状空间参数S^i求得轮廓曲线，在轮廓曲线上取样N个点，在各样点法线两边按法线方向每隔一定距离抽取一个象素点来计算其在当前帧图像数据中的灰度的梯度值。选择的象素点的数目可以根据需要在样点周围一定范围内选取，因为真实轮廓点不会偏离样点太远。象素点取的越多，得到的真实轮廓点的观测值就越接近真实轮廓点，但是对设备的计算能力的要求就更高。For particle N_i, the contour curve is obtained from its motion parameters T^i and shape space parameters S^i, and N points are sampled on it. On both sides of each sample point, pixels are taken at fixed intervals along the normal direction, and the grayscale gradient of each pixel is computed from the current frame's image data. The number of pixels examined can be chosen within a certain range around the sample point as needed, because the true contour point does not deviate far from the sample point. The more pixels are taken, the closer the observation of the true contour point comes to the true contour point, but the higher the demands on the device's computing power.

再求得各样点与该点处的真实轮廓点的观测值之间的距离DIS_i(n) (n = 1, 2, …, N)。由于目标真实轮廓点处的灰度的梯度值较大，因此求得的粒子轮廓点与该点处的真实轮廓点的观测值之间的距离可以作为衡量各个粒子权重的标准，距离大表示该粒子的轮廓与真实轮廓差距较大，距离小则表示该粒子的轮廓与真实轮廓较接近。Then the distance DIS_i(n) (n = 1, 2, …, N) between each sample point and the observed true contour point at that point is computed. Since the grayscale gradient is large at the target's true contour points, this distance can serve as a measure of each particle's weight: a large distance means the particle's contour differs greatly from the true contour, while a small distance means the particle's contour is close to it.

步骤109，通过所求得的当前帧的图像数据中各样点与该点处的真实轮廓点的观测值之间的距离DIS_i(n) (n = 1, 2, …, N)可以得到各个粒子的观测概率密度函数p_k^i：Step 109: from the distances DIS_i(n) (n = 1, 2, …, N) between each sample point in the current frame's image data and the observed true contour point at that point, the observation probability density function p_k^i of each particle is obtained:

p_k^i = exp{ −(1/2)(1/σ²)Φ }    (19)

其中，Φ = (1/N) Σ_{n=1}^{N} DIS_i(n)。where Φ = (1/N) Σ_{n=1}^{N} DIS_i(n).
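Equation (19) with this Φ can be computed directly; a minimal sketch (names illustrative, σ chosen arbitrarily for the demonstration):

```python
import math

def observation_likelihood(distances, sigma):
    """Eq. (19): p = exp(-Phi / (2 sigma^2)), where Phi is the mean distance
    between sampled contour points and their observed true-contour points."""
    phi = sum(distances) / len(distances)
    return math.exp(-phi / (2.0 * sigma * sigma))

near = observation_likelihood([1.0, 2.0, 1.0], 2.0)   # contour close to edges
far = observation_likelihood([10.0, 12.0, 8.0], 2.0)  # contour far from edges
```

A particle whose contour lies close to the strong-gradient pixels gets a likelihood near 1; one far away decays exponentially toward 0, which is what makes the distances usable as weights in the next step.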

步骤110,对前一帧中各粒子的权重值进行权值更新,得到当前帧中各粒子的权重值:Step 110, update the weight value of each particle in the previous frame to obtain the weight value of each particle in the current frame:

w_k^i = w_{k−1}^i p_k^i    (20)

其中，p_k^i为第k帧第i个粒子的观测概率密度函数，w_k^i为第k帧第i个粒子的权重值。Here p_k^i is the observation probability density function of the i-th particle in frame k, and w_k^i is the weight of the i-th particle in frame k.
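The weight update of equation (20) can be sketched as below. Note one assumption: equation (20) as written does not normalize, but the weighted sums of equations (21)–(25) implicitly require weights that sum to 1, so the sketch renormalizes after the multiplication.

```python
def update_weights(weights, likelihoods):
    """Eq. (20): w_k^i = w_{k-1}^i * p_k^i, followed by renormalization
    (normalization step assumed; eq. (20) itself omits it)."""
    new = [w * p for w, p in zip(weights, likelihoods)]
    s = sum(new)
    return [w / s for w in new]

# Four equally weighted particles; the one with the highest observation
# likelihood ends up dominating.
weights = update_weights([0.25] * 4, [0.9, 0.1, 0.5, 0.5])
```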

步骤111，由各个粒子的运动状态参数和权值进行加权求和得到期望的运动状态参数：Step 111: a weighted sum of the particles' motion state parameters and their weights gives the expected motion state parameters:

TX_k = Σ_{i=1}^{N_s} w_k^i TX_k^i    (21)

TY_k = Σ_{i=1}^{N_s} w_k^i TY_k^i    (22)

θ_k = Σ_{i=1}^{N_s} w_k^i θ_k^i    (23)

SX_k = Σ_{i=1}^{N_s} w_k^i SX_k^i    (24)

SY_k = Σ_{i=1}^{N_s} w_k^i SY_k^i    (25)

其中，w_k^i为第k帧第i个粒子的权重值，T_k = (TX_k, TY_k, θ_k, SX_k, SY_k)为第k帧目标轮廓的运动状态参数，T_k^i = (TX_k^i, TY_k^i, θ_k^i, SX_k^i, SY_k^i)为第k帧第i个粒子的运动状态参数，N_s为粒子总数。Here w_k^i is the weight of the i-th particle in frame k, T_k = (TX_k, TY_k, θ_k, SX_k, SY_k) is the motion state parameter of the target contour in frame k, T_k^i = (TX_k^i, TY_k^i, θ_k^i, SX_k^i, SY_k^i) is the motion state parameter of the i-th particle in frame k, and N_s is the total number of particles.
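Equations (21)–(25) are one component-wise weighted sum over the particle set; a minimal sketch (names illustrative), assuming the weights already sum to 1:

```python
def expected_state(particles, weights):
    """Eqs. (21)-(25): the tracked motion state is the weighted sum of the
    particle states, computed component by component."""
    return tuple(sum(w * p[j] for w, p in zip(weights, particles))
                 for j in range(5))

# Two particles differing only in TX; the estimate lands between them,
# pulled toward the heavier one.
particles = [(10.0, 0.0, 0.0, 1.0, 1.0), (20.0, 0.0, 0.0, 1.0, 1.0)]
state = expected_state(particles, [0.75, 0.25])
```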

步骤112,由运动状态参数就可以得到第k帧目标轮廓的形状空间参数:Step 112, the shape space parameters of the kth frame target contour can be obtained from the motion state parameters:

S_k = (TX_k, TY_k, SX_k cosθ_k − 1, SY_k cosθ_k − 1, −SY_k sinθ_k, SX_k sinθ_k)    (26)

步骤113，由S_k算得轮廓的控制点向量QX_k和QY_k：Step 113: compute the contour's control point vectors QX_k and QY_k from S_k:

(QX_k QY_k)^T = W S_k + (QX_0 QY_0)^T    (27)

其中，
W = | 1  0  QX_0  0     0     QY_0 |
    | 0  1  0     QY_0  QX_0  0    |
每个元素都是Nc×1的矩阵，Nc为控制点的数目，S_k为第k帧目标轮廓的形状空间参数，(QX_k QY_k)^T为第k帧目标轮廓的B样条的控制点向量。Here each element of W is an Nc×1 matrix, Nc is the number of control points, S_k is the shape space parameter of the target contour in frame k, and (QX_k QY_k)^T is the B-spline control point vector of the target contour in frame k.

步骤114，拟合目标的轮廓曲线y_k(x)：Step 114: fit the target's contour curve y_k(x):

y_k(x) = Σ_{k=0}^{Nc−1} P_k B_{k,m}(x)    (28)

其中P_k (k = 0, 1, …, Nc−1)为第k个控制点的坐标，B_{k,m} (k = 0, 1, …, Nc−1)为m次规范B样条基函数。Here P_k (k = 0, 1, …, Nc−1) is the coordinate of the k-th control point and B_{k,m} (k = 0, 1, …, Nc−1) is the normalized B-spline basis function of degree m.

这样就完成了一次对目标物体轮廓的跟踪过程。In this way, a process of tracking the outline of the target object is completed.

常见的跟踪目标轮廓的粒子滤波方法还有序列重要性重采样（SIR，Sequential Importance Resampling）、辅助采样重要性重采样滤波器（ASIR，Auxiliary Sampling Importance Resampling）和正则化粒子滤波器（RPF，Regularized Particle Filter）。这些算法在粒子状态转移（粒子传播）时具有相同形式的状态转移方程，因此跟踪目标轮廓的过程是类似的。Other common particle filter methods for tracking a target contour include Sequential Importance Resampling (SIR), the Auxiliary Sampling Importance Resampling filter (ASIR), and the Regularized Particle Filter (RPF). These algorithms share the same form of state transition equation during particle state transition (particle propagation), so the process of tracking the target contour is similar.

上述跟踪目标轮廓的方法能够实现对目标物体轮廓的跟踪，但是由于只从视频数据中提取目标物体的轮廓信息来进行分析和计算，因此当视频目标的运动速度频繁变化时跟踪得到的轮廓会出现"超前"或者"滞后"的抖动现象，跟踪不准确；另外，当视频目标的周围出现类似轮廓信息时，不能够进行准确的跟踪。The contour tracking methods above can track the contour of a target object, but because only the target's contour information is extracted from the video data for analysis and computation, the tracked contour jitters, running "ahead of" or "behind" the target, when the target's speed changes frequently, so tracking is inaccurate; moreover, when similar contour information appears around the video target, accurate tracking is not possible.

发明内容 Summary of the Invention

有鉴于此，本发明实施例提供了两种视频目标轮廓跟踪方法，该方法对视频目标的轮廓跟踪更加准确。In view of this, embodiments of the present invention provide two video target contour tracking methods that track the contour of a video target more accurately.

本发明实施例还提供了两种视频目标轮廓跟踪装置，该装置跟踪视频目标的轮廓的结果更加准确。Embodiments of the present invention also provide two video target contour tracking devices whose contour tracking results are more accurate.

一方面，本发明的实施例提供了一种视频目标轮廓跟踪方法，包含下列步骤：In one aspect, an embodiment of the present invention provides a video target contour tracking method comprising the following steps:

从初始帧图像数据提取目标轮廓,利用所提取的目标轮廓产生多个随机粒子;extracting the target contour from the initial frame image data, and generating a plurality of random particles by using the extracted target contour;

对当前帧图像数据中各个随机粒子进行状态转移，得到各个随机粒子的轮廓；performing a state transition on each random particle in the current frame image data to obtain the contour of each random particle;

在各个随机粒子的轮廓上随机抽取多个样点,对各样点计算其真实轮廓点的观测值;Randomly select a plurality of sample points on the contour of each random particle, and calculate the observation value of the real contour point for each sample point;

计算各样点与其真实轮廓点的观测值之间的距离;Calculate the distance between the observations of each sample point and its true contour point;

根据得到的各个随机粒子轮廓上各样点与其真实轮廓点的观测值之间的距离确定各个随机粒子的权重值;和Determine the weight value of each random particle according to the distance between each sample point on the obtained contour of each random particle and the observed value of its true contour point; and

将所有随机粒子加权累加获得所跟踪的目标在当前帧图像中的轮廓；accumulating all random particles with their weights to obtain the contour of the tracked target in the current frame image;

在对当前帧图像数据中各个随机粒子进行状态转移之前,进一步包括:Before performing state transfer for each random particle in the current frame image data, it further includes:

跟踪目标的质心获得目标质心的位置;Track the center of mass of the target to obtain the position of the center of mass of the target;

利用所获得的目标质心的位置对随机粒子状态转移的参数进行调整。The parameters of the random particle state transition are adjusted by using the obtained position of the target center of mass.
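The steps above state the first method's principle, adjusting the state transition parameters with the independently tracked centroid position, without fixing a formula here. The Python sketch below is one plausible realization under that reading, not the patent's implementation: the function name, the `base_step` parameter, and the rule of growing the translational constants B_1, B_2 with the centroid's frame-to-frame displacement are all illustrative assumptions.

```python
def adapt_transition_params(B, prev_centroid, curr_centroid, base_step=1.0):
    """Illustrative sketch (not the patent's exact formula): enlarge the
    translational spread constants B1, B2 when the tracked centroid moves
    faster, so the particles spread far enough to follow the target."""
    dx = abs(curr_centroid[0] - prev_centroid[0])
    dy = abs(curr_centroid[1] - prev_centroid[1])
    B1, B2, B3, B4, B5 = B
    return (max(B1, dx + base_step), max(B2, dy + base_step), B3, B4, B5)

B0 = (2.0, 2.0, 0.1, 0.05, 0.05)  # illustrative constants (assumed)
slow = adapt_transition_params(B0, (100, 100), (101, 100))
fast = adapt_transition_params(B0, (100, 100), (112, 108))
```

The point of the method is visible in the sketch: a fast-moving centroid yields larger B_1, B_2 and hence a wider particle spread, while the rotation and scale constants are left untouched.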

本发明的实施例提供了另一种视频目标轮廓跟踪方法,包含下列步骤:Embodiments of the present invention provide another method for tracking the outline of a video object, comprising the following steps:

从初始帧图像数据提取目标轮廓,利用所提取的目标轮廓产生多个随机粒子;extracting the target contour from the initial frame image data, and generating a plurality of random particles by using the extracted target contour;

对当前帧图像数据中各个随机粒子进行状态转移，得到各个随机粒子的轮廓；performing a state transition on each random particle in the current frame image data to obtain the contour of each random particle;

在各个随机粒子的轮廓上随机抽取多个样点,对各样点计算其真实轮廓点的观测值;Randomly select a plurality of sample points on the contour of each random particle, and calculate the observation value of the real contour point for each sample point;

计算各样点与其真实轮廓点的观测值之间的距离;Calculate the distance between the observations of each sample point and its true contour point;

根据得到的各个随机粒子轮廓上各样点与其真实轮廓点的观测值之间的距离确定各个随机粒子的权重值;和Determine the weight value of each random particle according to the distance between each sample point on the obtained contour of each random particle and the observed value of its true contour point; and

将所有随机粒子加权累加获得所跟踪的目标在当前帧图像中的轮廓；accumulating all random particles with their weights to obtain the contour of the tracked target in the current frame image;

所述根据得到的各个随机粒子轮廓上各样点与其真实轮廓点的观测值之间的距离确定各个随机粒子的权重值进一步包括:Determining the weight value of each random particle according to the distance between each sample point on the obtained contour of each random particle and the observed value of its real contour point further includes:

跟踪目标的质心获得目标质心的位置;Track the center of mass of the target to obtain the position of the center of mass of the target;

根据得到的各个随机粒子的轮廓获得各个粒子轮廓的质心;Obtain the centroid of each particle profile according to the obtained profile of each random particle;

利用得到的目标质心的位置计算所述目标质心与各个粒子轮廓的质心的距离;Using the obtained position of the target center of mass to calculate the distance between the target center of mass and the center of mass of each particle profile;

根据获得的目标质心与各个粒子的轮廓质心之间的距离调整所述各个粒子的权重值。The weight value of each particle is adjusted according to the obtained distance between the target centroid and the contour centroid of each particle.
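As with the first method, the second method's steps state a principle, down-weighting particles whose contour centroid lies far from the tracked target centroid, without fixing a formula here. The sketch below is one illustrative realization under that assumption (the `1/(1+d)` attenuation and all names are ours, not the patent's):

```python
import math

def reweight_by_centroid(weights, particle_centroids, target_centroid):
    """Illustrative sketch of the second method's principle: shrink the
    weight of particles whose contour centroid is far from the tracked
    target centroid, then renormalize."""
    def dist(c):
        return math.hypot(c[0] - target_centroid[0], c[1] - target_centroid[1])
    adjusted = [w / (1.0 + dist(c)) for w, c in zip(weights, particle_centroids)]
    s = sum(adjusted)
    return [w / s for w in adjusted]

# Two equally weighted particles; the one whose contour centroid sits near
# the tracked target centroid keeps most of the weight.
weights = reweight_by_centroid([0.5, 0.5], [(100, 100), (130, 120)], (101, 99))
```

This is what lets the weighted accumulation of the particles resist distractors: a particle locked onto similar contour information elsewhere in the frame has a distant centroid and so contributes little to the fused contour.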

另一方面,本发明实施例提供了一种视频目标轮廓跟踪装置,包括:On the other hand, an embodiment of the present invention provides a video target contour tracking device, including:

轮廓提取模块，用于从初始帧图像数据提取目标轮廓；a contour extraction module, configured to extract the target contour from the initial frame image data;

随机粒子产生模块,用于利用所述轮廓提取模块所提取的目标轮廓产生多个随机粒子;A random particle generation module, configured to generate a plurality of random particles using the target contour extracted by the contour extraction module;

粒子状态转移模块，用于对当前帧图像数据中各个随机粒子进行状态转移，得到各个随机粒子的轮廓；a particle state transition module, configured to perform a state transition on each random particle in the current frame image data to obtain the contour of each random particle;

粒子权重计算模块,用于在各个随机粒子的轮廓上随机抽取多个样点,对各样点计算其真实轮廓点的观测值,计算各样点与其真实轮廓点的观测值之间的距离,并根据得到的各个随机粒子轮廓上各样点与其真实轮廓点的观测值之间的距离确定各个随机粒子的权重值;The particle weight calculation module is used to randomly select a plurality of sample points on the contour of each random particle, calculate the observed value of its real contour point for each sample point, and calculate the distance between each sample point and the observed value of its real contour point, And determine the weight value of each random particle according to the distance between each sample point on the contour of each random particle obtained and the observed value of its real contour point;

轮廓拟合模块,用于将所有随机粒子加权累加获得所跟踪的目标在当前帧图像中的轮廓;The contour fitting module is used to weight and accumulate all random particles to obtain the contour of the tracked target in the current frame image;

该装置进一步包括:The device further includes:

质心计算模块,用于跟踪目标的质心获得目标质心的位置;The center of mass calculation module is used to track the center of mass of the target to obtain the position of the center of mass of the target;

所述粒子状态转移模块进一步用于在对当前帧图像数据中各个随机粒子进行状态转移之前,利用所述质心计算模块所获得的目标质心的位置对随机粒子状态转移的参数进行调整。The particle state transfer module is further configured to use the position of the target centroid obtained by the centroid calculation module to adjust the parameters of the random particle state transfer before performing the state transfer on each random particle in the current frame of image data.

本发明实施例还提供了另一种视频目标轮廓跟踪装置,包括:The embodiment of the present invention also provides another video target contour tracking device, including:

轮廓提取模块，用于从初始帧图像数据提取目标轮廓；a contour extraction module, configured to extract the target contour from the initial frame image data;

随机粒子产生模块,用于利用所述轮廓提取模块所提取的目标轮廓产生多个随机粒子;A random particle generation module, configured to generate a plurality of random particles using the target contour extracted by the contour extraction module;

粒子状态转移模块，用于对当前帧图像数据中各个随机粒子进行状态转移，得到各个随机粒子的轮廓；a particle state transition module, configured to perform a state transition on each random particle in the current frame image data to obtain the contour of each random particle;

粒子权重计算模块,用于在各个随机粒子的轮廓上随机抽取多个样点,对各样点计算其真实轮廓点的观测值,计算各样点与其真实轮廓点的观测值之间的距离,并根据得到的各个随机粒子轮廓上各样点与其真实轮廓点的观测值之间的距离确定各个随机粒子的权重值;The particle weight calculation module is used to randomly select a plurality of sample points on the contour of each random particle, calculate the observed value of its real contour point for each sample point, and calculate the distance between each sample point and the observed value of its real contour point, And determine the weight value of each random particle according to the distance between each sample point on the contour of each random particle obtained and the observed value of its real contour point;

轮廓拟合模块,用于将所有随机粒子加权累加获得所跟踪的目标在当前帧图像中的轮廓;The contour fitting module is used to weight and accumulate all random particles to obtain the contour of the tracked target in the current frame image;

该装置进一步包括质心计算模块,用于跟踪目标的质心获得目标质心的位置;The device further includes a center of mass calculation module for tracking the center of mass of the target to obtain the position of the center of mass of the target;

粒子权重计算模块进一步用于根据所述粒子状态转移模块得到的各个随机粒子的轮廓获得各个粒子轮廓的质心，利用质心计算模块得到的目标质心的位置计算所述目标质心与各个粒子轮廓的质心的距离，并根据获得的目标质心与各个粒子的轮廓质心之间的距离调整所述各个粒子的权重值。The particle weight calculation module is further configured to obtain the centroid of each particle's contour from the contours of the random particles produced by the particle state transition module, to use the target centroid position obtained by the centroid calculation module to calculate the distance between the target centroid and the centroid of each particle's contour, and to adjust the weight of each particle according to that distance.

由上述的技术方案可见，本发明实施例提供的一种视频目标轮廓跟踪方法，通过跟踪目标的质心得到的目标质心的位置来调整轮廓跟踪过程中各个随机粒子的状态转移参数，使随机粒子的状态转移参数能够随着目标质心位置的变化进行相应的变化，目标的质心位置变化较大时随机粒子的位置变化随之变大，从而使对目标轮廓的跟踪更准确。As the above technical solution shows, in the video target contour tracking method provided by an embodiment of the present invention, the position of the target centroid, obtained by tracking the target's centroid, is used to adjust the state transition parameters of each random particle during contour tracking, so that these parameters change with the position of the target centroid: when the centroid position changes greatly, the positional spread of the random particles grows accordingly, making contour tracking more accurate.

本发明的实施例提供的另一种视频目标轮廓跟踪方法，通过跟踪目标的质心得到跟踪目标质心的位置来对各个随机粒子进行评估，按照各粒子的轮廓的质心与跟踪目标质心的远近程度来调整粒子的权重值，使得接近真实轮廓的粒子的权重值增大，偏离真实轮廓的粒子的权重值减小，各个粒子加权累加得到的目标轮廓更加接近真实的目标轮廓，使得对目标轮廓的跟踪更准确。Another video target contour tracking method provided by an embodiment of the present invention obtains the position of the tracked target's centroid by tracking the target's centroid and uses it to evaluate each random particle: the weight of each particle is adjusted according to how close its contour centroid is to the tracked target centroid, so that the weight of particles close to the true contour increases and the weight of particles deviating from it decreases. The target contour obtained by the weighted accumulation of the particles is then closer to the true target contour, making contour tracking more accurate.

本发明实施例提供的一种视频目标轮廓跟踪装置，用于跟踪目标的质心得到目标质心的位置，并利用该质心位置调整轮廓跟踪所得的各个随机粒子的状态转移参数，使随机粒子的状态转移参数随着目标质心位置的变化进行相应的变化，目标的质心位置变化较大时随机粒子的位置变化随之变大，得到的目标轮廓的跟踪结果更准确。An embodiment of the present invention provides a video target contour tracking device that tracks the target's centroid to obtain its position and uses that position to adjust the state transition parameters of each random particle during contour tracking, so that these parameters change with the position of the target centroid: when the centroid position changes greatly, the positional spread of the random particles grows accordingly, and the resulting contour tracking is more accurate.

An embodiment of the present invention provides another video object contour tracking device, which tracks the centroid of the target to obtain the position of the tracked target centroid and uses that centroid position to evaluate each random particle, adjusting the weight of each particle according to the distance between the centroid of that particle's contour and the tracked target centroid, so that particles close to the real contour gain weight and particles deviating from the real contour lose weight. The target contour obtained by the weighted accumulation of the particles is closer to the real target contour, and the resulting contour tracking result is more accurate.

Description of Drawings

FIG. 1 is a flow chart of video object contour tracking using the Condensation algorithm in the prior art.

FIG. 2 is a flow chart of the video object contour tracking method in Embodiment One of the present invention.

FIG. 3 is a structural diagram of the video object contour tracking device in Embodiment One of the present invention.

FIG. 4 is a flow chart of the video object contour tracking method in Embodiment Two of the present invention.

FIG. 5 is a structural diagram of the video object contour tracking device in Embodiment Two of the present invention.

FIG. 6 is a flow chart of the video object contour tracking method in Embodiment Three of the present invention.

FIG. 7 shows the tracking results of video object contour tracking using the prior-art Condensation algorithm.

FIG. 8 shows the tracking results of the video object contour tracking method of Embodiment Three of the present invention.

Detailed Description

To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the accompanying drawings.

According to the embodiments of the present invention, while the target contour is tracked, the centroid of the target is tracked as well. The position of the target centroid obtained by centroid tracking, together with its change over time, is used either to adjust the position transfer parameters of the contour centroid during contour tracking, so that the resulting target contour shows less "jitter" caused by changes in the moving speed of the target and the contour tracking becomes more accurate; or to adjust the weights of the random particles (random samples) during contour tracking, so that the target contour obtained by the weighted accumulation of the random samples is more accurate.

Common algorithms for tracking a target contour include the SIR algorithm, the Condensation algorithm, the ASIR algorithm and the RPF algorithm.

In the following embodiments, the Condensation algorithm is used to track the contour of the target and the Mean Shift tracking algorithm is used to track the centroid of the target, as an example to illustrate the specific implementation of the embodiments of the present invention.

The above contour tracking algorithms all use state transition equations of the same form for particle state transition (particle propagation), so those skilled in the art can readily replace the Condensation algorithm with any of the other algorithms to implement the embodiments of the present invention.

The mean shift tracking algorithm can track the target object in a video fairly accurately and obtain an accurate target centroid; its principle is briefly introduced first.

Assume $\{x_i^*\}_{i=1,\dots,n}$ are the normalized pixel positions of the tracking target model, whose centroid coordinate is O. The color gray values are quantized into m levels, and b(x) is the function that quantizes the color gray value of the pixel at position x. The probability of color u is then:

$$\bar{q}_u = C \sum_{i=1}^{n} k\left(\|x_i^*\|^2\right) \delta\left[b(x_i^*) - u\right] \qquad (29)$$

where k(x) is any kernel function, chosen so that pixels farther from the centroid receive smaller weights;

C is a normalization constant, given by:

$$C = \frac{1}{\sum_{i=1}^{n} k\left(\|x_i^*\|^2\right)} \qquad (30)$$

The tracking target model is then expressed as:

$$\bar{q} = \{\bar{q}_u\}_{u=1,\dots,m}, \qquad \sum_{u=1}^{m} \bar{q}_u = 1 \qquad (31)$$

Assume $\{x_i\}_{i=1,\dots,n_h}$ are the pixel positions of the candidate target in the current frame, whose centroid position is y. Applying the same kernel function k(x) over the range h, the probability of color u in the candidate target can be expressed as:

$$\bar{p}_u(y) = C_h \sum_{i=1}^{n_h} k\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta\left[b(x_i) - u\right] \qquad (32)$$

where $C_h$ is a normalization constant, given by:

$$C_h = \frac{1}{\sum_{i=1}^{n_h} k\left(\left\|\frac{y - x_i}{h}\right\|^2\right)} \qquad (33)$$

The candidate target model is then expressed as:

$$\bar{p}(y) = \{\bar{p}_u(y)\}_{u=1,\dots,m}, \qquad \sum_{u=1}^{m} \bar{p}_u = 1 \qquad (34)$$

With the tracking target model and the candidate target model defined above, the distance between them is:

$$d(y) = \sqrt{1 - \rho\left[\bar{p}(y), \bar{q}\right]} \qquad (35)$$

where $\bar{\rho}(y) \equiv \rho[\bar{p}(y), \bar{q}] = \sum_{u=1}^{m} \sqrt{\bar{p}_u(y)\,\bar{q}_u}$, $\bar{p}_u(y)$ is the probability of color u in the candidate target, and $\bar{q}_u$ is the probability of color u in the tracking target.

The best candidate target is the candidate closest to the tracking target model, i.e., the candidate region that minimizes d(y); the task is therefore to find the target centroid y that minimizes d(y). The following iterative formula can be used:

$$\bar{y}_1 = \frac{\sum_{i=1}^{n_h} x_i w_i\, g\left(\left\|\frac{\bar{y}_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\left(\left\|\frac{\bar{y}_0 - x_i}{h}\right\|^2\right)} \qquad (36)$$

where $\bar{y}_0$ is the current position, $\bar{y}_1$ is the new position at the next step, $\{x_i\}_{i=1,\dots,n_h}$ are the pixel positions of the candidate target in the current frame, h is the range of the target, the function g(x) is the derivative of the kernel function k(x), and $w_i$ is given by:

$$w_i = \sum_{u=1}^{m} \sqrt{\frac{\bar{q}_u}{\bar{p}_u(y_0)}}\; \delta\left[b(x_i) - u\right] \qquad (37)$$

This iterative formula can then be applied to each frame to find the candidate target, and its centroid position, that minimizes d(y); this candidate is the best candidate for the tracking target, which realizes the tracking of the centroid of the target object in the video.
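As a concrete illustration, the mean-shift iteration of formulas (36) and (37) can be sketched in a few lines of Python. The sketch assumes a grayscale image given as a list of (x, y, gray) pixels, an Epanechnikov kernel profile k(r) = 1 - r on r ≤ 1 (so that g, the derivative magnitude of k, is constant on the kernel support), and m = 8 gray bins; all function names are illustrative, not part of the patented method.

```python
import math

def quantize(gray, m=8, levels=256):
    # b(x): map a gray value in [0, levels) to one of m bins
    return gray * m // levels

def model_hist(pixels, center, h, m=8):
    # kernel-weighted gray histogram, as in formulas (29)-(34);
    # Epanechnikov profile k(r) = 1 - r on r <= 1
    hist = [0.0] * m
    for x, y, gray in pixels:
        r = ((x - center[0]) ** 2 + (y - center[1]) ** 2) / h ** 2
        if r <= 1.0:
            hist[quantize(gray, m)] += 1.0 - r
    total = sum(hist) or 1.0          # normalization constants C, C_h
    return [v / total for v in hist]

def mean_shift_step(pixels, q, y0, h, m=8):
    # one application of formula (36), with w_i from formula (37);
    # with the Epanechnikov profile, g is constant on the support,
    # so the step reduces to a w_i-weighted average of pixel positions
    p = model_hist(pixels, y0, h, m)
    num_x = num_y = den = 0.0
    for x, y, gray in pixels:
        r = ((x - y0[0]) ** 2 + (y - y0[1]) ** 2) / h ** 2
        if r > 1.0:
            continue
        u = quantize(gray, m)
        w = math.sqrt(q[u] / p[u]) if p[u] > 0.0 else 0.0
        num_x += w * x
        num_y += w * y
        den += w
    return y0 if den == 0.0 else (num_x / den, num_y / den)

def track_centroid(pixels, q, y0, h, iters=20, eps=1e-3):
    # iterate formula (36) until the centroid stops moving
    y = y0
    for _ in range(iters):
        y1 = mean_shift_step(pixels, q, y, h)
        if math.hypot(y1[0] - y[0], y1[1] - y[1]) < eps:
            return y1
        y = y1
    return y
```

Starting the iteration at the previous frame's centroid and stopping once the shift falls below a small threshold mirrors the per-frame use described above.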

Embodiment One

In this embodiment, the position of the target centroid obtained by the mean shift algorithm is used to adjust the particle state transition parameters, so that these parameters can be adapted as the position of the target centroid changes, making the tracking of the target contour more accurate.

FIG. 2 is a flow chart of video object contour tracking in this embodiment, which mainly includes the following steps.

Step 201: determine from the input image data whether there is a new tracking object, i.e., whether a new tracking target needs to be established. If so, go to step 202; if not, go to step 215.

The input image data may be image data obtained by background segmentation together with the index of the selected tracking target, or an image together with the region of the tracking target entered manually by the user.

Step 202: according to the input index or region of the tracking target, use an existing contour extraction technique to obtain the contour vector of the target and the B-spline control points $QX_0$ and $QY_0$, compute the contour centroid $TX_0$ and $TY_0$ from the obtained contour vector, and compute the initial state vector $T_0$ of the target using formula (4). This step is the same as step 102 in the background art.

Step 203: initialize random particles from the initial motion state vector $T_0$. $N_s$ particles are initialized; the initial weight $w_0^i$ of each particle is $1/N_s$, and the motion states are $T_0^i\ (i = 1, 2, \dots, N_s)$, computed using formulas (5) to (9).

$B_3$, $B_4$ and $B_5$ are constants; $B_4$ and $B_5$ are taken in [0.15, 0.5], the initial values of $B_1$ and $B_2$ are taken in [3, 15], and ξ is a random number in [-1, +1].

This step is the same as step 103 in the background art.
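Step 203 can be sketched as follows, assuming (consistently with the transition equations below) that each component of $T_0 = (TX_0, TY_0, \theta_0, SX_0, SY_0)$ is perturbed by a constant $B_j$ times a uniform random number ξ in [-1, +1]; formulas (5)-(9) are not reproduced in this excerpt, so the exact perturbation form and the particle data layout are assumptions of the sketch.

```python
import random

def init_particles(T0, Ns, B=(8.0, 8.0, 0.05, 0.3, 0.3), seed=None):
    """Initialize Ns random particles around the initial state
    T0 = (TX0, TY0, theta0, SX0, SY0): each component is perturbed
    by B_j * xi with xi uniform in [-1, +1], and every particle
    starts with weight 1/Ns. The B values here are illustrative."""
    rng = random.Random(seed)
    particles = []
    for _ in range(Ns):
        state = tuple(t + b * rng.uniform(-1.0, 1.0)
                      for t, b in zip(T0, B))
        particles.append({"state": state, "w": 1.0 / Ns})
    return particles
```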

Step 215: use the mean shift algorithm to compute the position coordinates $\{(CX_t, CY_t)\}_{t=1,2,\dots,M}$ of the target centroid in the previous M frames.

Step 216: from the target centroid coordinates obtained in step 215, the movement speed of the target centroid in the x and y directions from frame t to frame t+1 can be obtained:

$$V_t^x = \mathrm{fabs}(CX_{t+1} - CX_t) \qquad (38)$$

$$V_t^y = \mathrm{fabs}(CY_{t+1} - CY_t) \qquad (39)$$

Step 217: adjust the parameters $B_1$ and $B_2$.

The state transition equations $TX_k^i = TX_{k-1}^i + B_1 \times \xi_{1-k}^i$ and $TY_k^i = TY_{k-1}^i + B_2 \times \xi_{2-k}^i$, i.e., formulas (11) and (12), transfer (or predict) the target motion state (the centroid position of the target contour) of the current frame to that of the next frame, so the contour-centroid position transfer parameters in the state transition equations should change with the movement speed of the contour centroid. If the speed of the contour centroid in the x and y directions increases, the parameters $B_1$ and $B_2$ should increase accordingly; otherwise they should decrease. Only then is the tracking process stable, without the "leading" or "lagging" jitter.

Since the mean shift tracking algorithm can track the target centroid quite accurately, its tracking result is introduced here to adjust the above parameters $B_1$ and $B_2$, so as to make the tracking of the target contour more stable.

Predicting from the set of target movement speeds $\{(V_t^x, V_t^y)\}_{t=0,1,\dots,M-1}$ (t = 0 denotes the initial frame) yields the parameters $B_1^k$ and $B_2^k$ of the k-th frame; the prediction formula is:

$$(B_1^k, B_2^k) = f\left(\{(V_t^x, V_t^y)\}_{t=0,1,\dots,M-1},\; B_1^{ini}, B_2^{ini}\right) \qquad (40)$$

where f(·) is the prediction function, i.e., $B_1^k$ and $B_2^k$ depend on the initial $B_1^{ini}$ and $B_2^{ini}$ and on the set of movement speeds of the contour centroid in the preceding frames; $V_t^x$ and $V_t^y$ are the movement speeds of the target in the x and y directions from frame t to frame t+1, and $B_1^{ini}$ and $B_2^{ini}$ are the initially set transition equation parameters.
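The text leaves the prediction function f(·) in formula (40) open. One plausible instance, sketched below, scales the initial parameters by the average centroid speed of the last few frames, clamped so that $B_1^k$, $B_2^k$ grow when the target speeds up and shrink when it slows down; the reference speed, window length and clamp bounds are illustrative assumptions. The first helper implements the speed computation of formulas (38) and (39).

```python
def centroid_velocities(centroids):
    # formulas (38)-(39): per-frame centroid speeds from the
    # mean-shift centroid positions {(CX_t, CY_t)}
    return [(abs(x1 - x0), abs(y1 - y0))
            for (x0, y0), (x1, y1) in zip(centroids, centroids[1:])]

def predict_transition_params(velocities, B1_ini, B2_ini,
                              v_ref=5.0, lo=0.5, hi=3.0, window=5):
    """One plausible f(.) for formula (40): scale the initial transfer
    parameters by the average centroid speed of the last `window`
    frames relative to a reference speed v_ref, clamped to [lo, hi]
    so the parameters follow speed changes without diverging."""
    recent = velocities[-window:] or [(v_ref, v_ref)]
    n = len(recent)
    vx = sum(v[0] for v in recent) / n
    vy = sum(v[1] for v in recent) / n
    s1 = min(max(vx / v_ref, lo), hi)
    s2 = min(max(vy / v_ref, lo), hi)
    return B1_ini * s1, B2_ini * s2
```

A faster-moving centroid thus yields larger $B_1^k$, $B_2^k$ (a wider particle search), and a slower one yields smaller values, exactly the qualitative behavior step 217 calls for.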

Step 204: using the adjusted parameters $B_1^k$ and $B_2^k$ obtained in step 217, perform a state transition for each particle to obtain its motion state vector parameters:

$$TX_k^i = TX_{k-1}^i + B_1^k \times \xi_{1-k}^i \qquad (41)$$

$$TY_k^i = TY_{k-1}^i + B_2^k \times \xi_{2-k}^i \qquad (42)$$

$$\theta_k^i = \theta_{k-1}^i + B_3 \times \xi_{3-k}^i \qquad (13)$$

$$SX_k^i = SX_{k-1}^i + B_4 \times \xi_{4-k}^i \qquad (14)$$

$$SY_k^i = SY_{k-1}^i + B_5 \times \xi_{5-k}^i \qquad (15)$$

where $B_1^k$ and $B_2^k$ are obtained in step 217, and $\xi_{1-k}^i$, $\xi_{2-k}^i$, $\xi_{3-k}^i$, $\xi_{4-k}^i$, $\xi_{5-k}^i$ are random numbers in [-1, +1].
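The transition of formulas (41), (42) and (13)-(15) can be sketched directly; the particle layout (a dict with a 5-component state tuple) and the default values of $B_3$-$B_5$ are illustrative assumptions.

```python
import random

def propagate(particles, B1_k, B2_k, B3=0.05, B4=0.3, B5=0.3, seed=None):
    """State transition of formulas (41), (42) and (13)-(15): the
    centroid translation uses the frame-adapted parameters B1_k, B2_k,
    while rotation and scale keep the fixed constants B3-B5; each xi
    is an independent uniform random number in [-1, +1]."""
    rng = random.Random(seed)
    out = []
    for p in particles:
        TX, TY, th, SX, SY = p["state"]
        out.append({
            "state": (TX + B1_k * rng.uniform(-1, 1),   # formula (41)
                      TY + B2_k * rng.uniform(-1, 1),   # formula (42)
                      th + B3 * rng.uniform(-1, 1),     # formula (13)
                      SX + B4 * rng.uniform(-1, 1),     # formula (14)
                      SY + B5 * rng.uniform(-1, 1)),    # formula (15)
            "w": p["w"],
        })
    return out
```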

Step 205: from the motion state vector parameters of each particle obtained in step 204, compute the shape space parameter $S_k^i$ of each particle according to formula (16).

Step 206: from the obtained shape space parameter of each particle, compute its B-spline control point vector according to formula (17).

Step 207: fit the contour curve corresponding to each particle using formula (18), where $B_{k,m}$ is the canonical B-spline basis function of degree m, and m may be taken as 3.
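Formula (18) with m = 3 is a cubic B-spline curve. A minimal sketch of evaluating a closed uniform cubic B-spline from its control points follows; the periodic indexing and the sampling density are implementation choices, not prescribed by the text.

```python
def cubic_bspline_point(ctrl, t):
    """Evaluate one point of a closed uniform cubic B-spline (m = 3 in
    formula (18)) from control points ctrl = [(x0, y0), ...];
    t in [0, len(ctrl)) selects the segment and the parameter inside it."""
    n = len(ctrl)
    i = int(t)
    u = t - i
    # the four control points influencing this segment (periodic contour)
    p = [ctrl[(i + k) % n] for k in range(4)]
    # uniform cubic B-spline basis functions (they sum to 1)
    b = [(1 - u) ** 3 / 6.0,
         (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
         u ** 3 / 6.0]
    x = sum(bk * pk[0] for bk, pk in zip(b, p))
    y = sum(bk * pk[1] for bk, pk in zip(b, p))
    return x, y

def fit_contour(ctrl, samples=100):
    # sample the closed contour curve of formula (18)
    n = len(ctrl)
    return [cubic_bspline_point(ctrl, n * s / samples) for s in range(samples)]
```

Because the basis functions sum to 1, every curve point lies in the convex hull of its control points, which is why the fitted contour stays near the particle's control polygon.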

Step 208: compute the distance between each sample point and the observed value of the real contour point at that point. This step is the same as step 108 in the background art.

Step 209: from the distances $DIS_i(n)\ (n = 1, 2, \dots, N)$ between each sample point in the current frame's image data and the observed value of the real contour point at that point, the observation probability density function $p_k^i$ of each particle can be obtained according to formula (19).

Step 210: update the weight of each particle of the previous frame according to formula (20) to obtain the weight of each particle in the current frame.

Step 211: perform a weighted summation of the motion state parameters of the particles with their weights according to formulas (21) to (25) to obtain the expected motion state parameters.
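Steps 210 and 211 can be sketched together, assuming formula (20) is the standard multiplicative Condensation update (previous weight times observation density, then renormalization) and formulas (21)-(25) are component-wise weighted sums; both assumptions follow the usual Condensation formulation since the formulas themselves are not reproduced in this excerpt.

```python
def update_weights(particles, likelihoods):
    """Weight update in the spirit of formula (20): multiply each
    particle's previous weight by its observation probability density
    p_k^i and renormalize so the weights sum to 1."""
    raw = [p["w"] * l for p, l in zip(particles, likelihoods)]
    s = sum(raw) or 1.0
    for p, r in zip(particles, raw):
        p["w"] = r / s
    return particles

def weighted_state(particles):
    # formulas (21)-(25): the tracked state is the weighted sum of the
    # particle states, component by component
    dim = len(particles[0]["state"])
    return tuple(sum(p["w"] * p["state"][d] for p in particles)
                 for d in range(dim))
```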

Step 212: from the motion state values, the shape space parameter $S_k$ can be obtained according to formula (26).

Step 213: compute the contour control point vectors $QX_k$ and $QY_k$ from $S_k$ according to formula (27).

Step 214: fit the contour curve $y_k(x)$ of the target according to formula (28).

This completes one round of tracking of the target contour.

FIG. 3 is a structural diagram of the video object contour tracking device of this embodiment. As shown in FIG. 3, the device includes: a storage module 301, a contour extraction module 302, a random particle generation module 303, a particle state transition module 304, a centroid calculation module 305, a control module 306, a particle weight calculation module 307 and a contour fitting module 308. The method used by the device has already been described in detail, so the functions of the device are only briefly introduced below.

The storage module 301 stores the input image data.

The control module 306 controls the modules to complete the corresponding operations.

Upon receiving a command from the control module 306 to establish a new tracking target, the contour extraction module 302 uses an existing contour extraction technique to extract the contour of the target from the initial image frame data in the storage module 301, computes the contour vector of the target and the B-spline control points $QX_0$ and $QY_0$, computes the contour centroid $TX_0$ and $TY_0$ from the obtained contour vector, and computes the initial state vector $T_0$ of the target.

The random particle generation module 303 initializes $N_s$ particles from the initial motion state vector $T_0$ computed by the contour extraction module 302; the initial weight $w_0^i$ of each particle is $1/N_s$, and the motion states are $T_0^i\ (i = 1, 2, \dots, N_s)$.

Here, each image frame after the initial frame is taken in turn as the current image frame data for the following processing.

The centroid calculation module 305 uses the mean shift algorithm to track the centroid of the target, computing the position of the target centroid in the current image frame data stored in the storage module 301, and/or the movement speed of the target centroid in the x and y directions.

The particle state transition module 304 adjusts the parameters $B_1$ and $B_2$ in the particle state transition equations according to the target centroid position and/or the movement speed of the target centroid in the x and y directions computed by the centroid calculation module 305, and uses the adjusted parameters $B_1^k$ and $B_2^k$ to perform a state transition for each particle, obtaining the contour of each particle.

The control module 306 controls the modules to complete the corresponding operations.

The particle weight calculation module 307 randomly draws a number of sample points on the contour of each random particle, computes for each sample point the observed value of its real contour point in the current image frame data, computes the distance between each sample point and the observed value of its real contour point, and determines the weight of each random particle according to the obtained distances between the sample points on that particle's contour and the observed values of their real contour points.

The contour fitting module 308 fits all particles according to their weights to obtain and output the contour curve of the tracking target in the current image frame data.

As long as new image frame data keeps being input to the device, or unprocessed image frame data remains in its storage module, the device computes the contour curve of the tracking target for that image frame data.
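The per-frame control flow of the modules in Fig. 3 can be sketched as a thin orchestration class; the six callables standing in for modules 302-305, 307 and 308, and their signatures, are assumptions made purely for illustration.

```python
class ContourTracker:
    """Per-frame control flow of the device in Fig. 3: initialization
    on the first frame, then centroid tracking, particle propagation,
    weighting and contour fitting on every subsequent frame."""

    def __init__(self, extract, init_particles, track_centroid,
                 propagate, weigh, fit):
        self.extract = extract                # contour extraction (302)
        self.init_particles = init_particles  # random particle generation (303)
        self.track_centroid = track_centroid  # centroid calculation (305)
        self.propagate = propagate            # particle state transition (304)
        self.weigh = weigh                    # particle weight calculation (307)
        self.fit = fit                        # contour fitting (308)
        self.particles = None

    def process(self, frame):
        if self.particles is None:            # a new tracking target
            T0 = self.extract(frame)
            self.particles = self.init_particles(T0)
            return self.fit(self.particles)
        centroid = self.track_centroid(frame)
        self.particles = self.propagate(self.particles, centroid)
        self.particles = self.weigh(self.particles, frame)
        return self.fit(self.particles)
```

The role of the control module 306 is played here by `process`, which invokes the other modules in the order the embodiment describes.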

In this embodiment, the movement speed of the target centroid obtained by the mean shift algorithm is used to adjust the position transfer parameters of the contour centroid in the state transition equations, so that the contour centroid position changes correspondingly with the movement speed of the target. This reduces, to a certain extent, the jitter in which the tracked contour "leads" or "lags" the real target contour, increases the stability of contour tracking, and makes the contour tracking result more accurate.

Embodiment Two

In this embodiment, the position and movement speed of the tracked target centroid obtained by the mean shift algorithm are used to evaluate the candidate particles, and the weight of each particle is influenced by the distance between the centroid of that particle's contour and the tracked target centroid, so that the target contour obtained by the weighted accumulation of the particles is closer to the real target contour, thereby making the tracking of the target contour more accurate.

FIG. 4 is a flow chart of video object contour tracking in this embodiment, which mainly includes the following steps.

Step 401: determine whether the input image data and tracking target information represent a new object, i.e., whether a new tracking target needs to be established. If so, go to step 402; otherwise, go to step 415.

The input image data may be image data after background segmentation, and a moving object may be selected in the initial frame image as the tracking target.

Step 402: according to the input index or region of the tracking target, use an existing contour extraction technique to obtain the contour vector of the target and the B-spline control points $QX_0$ and $QY_0$, compute the contour centroid $TX_0$ and $TY_0$ from the obtained contour vector, and compute the initial state vector $T_0$ of the target using formula (4). This step is the same as step 102 in the background art.

Step 403: initialize random particles from the initial motion state vector $T_0$. $N_s$ particles are initialized; the initial weight $w_0^i$ of each particle is $1/N_s$, and the motion states are $T_0^i\ (i = 1, 2, \dots, N_s)$, computed using formulas (5) to (9).

$B_1$, $B_2$, $B_3$, $B_4$ and $B_5$ are constants; $B_1$ and $B_2$ are taken in [3, 15], $B_4$ and $B_5$ in [0.15, 0.5], and ξ is a random number in [-1, +1].

This step is the same as step 103 in the background art.

Step 415: use the mean shift algorithm to compute the position coordinates $\{(CX_t, CY_t)\}_{t=1,2,\dots,M}$ of the target centroid in the previous M frames.

Step 404: when the k-th frame of image data is input, perform a state transition for each particle state; the system state transition equations are formulas (11) to (15). This step is the same as step 104 in the background art.

Step 405: using the motion state of each particle obtained in step 404, compute the shape space parameter $S_k^i$ of each particle according to formula (16).

Step 406: compute the B-spline control point vector of each particle according to formula (17).

Step 407: after obtaining the control point vector of each particle, the contour curve corresponding to each particle can be fitted with the B-spline method, using formula (18).

Step 408: evaluate the candidate particles.

The method of this embodiment evaluates each particle using two measures in the current frame: (a) the distance between the particle's contour points and the observed values of the real contour points; (b) the distance between the particle's centroid position and the centroid position of the tracked target. Measure (a) is existing technology.

The distance $DIS_i(n)$ between each contour sample point and the observed value of the real contour point at that point is obtained as in step 108 of the background art; this distance serves as the first measure.

The prior art uses only measure (a) to evaluate the particles. To make contour tracking more accurate, this embodiment additionally uses the more accurate centroid position obtained by the mean shift tracking algorithm to evaluate each candidate particle.

The mean shift tracking algorithm can track the contour centroid position $(CX_t, CY_t)$ of the target in each frame fairly accurately. The distance $DIS_i(C_k)$ between this centroid (the tracked target centroid, obtained in step 415) and the centroid of each particle's contour serves as the second measure: the larger its value, the larger the deviation between the particle's centroid position and the real centroid position of the target; conversely, the smaller the deviation.

Step 409: from the two distance measures obtained in step 408, the observation probability density function of each particle is:

$$p_k^i = \exp\left\{-\frac{1}{2}\left(\frac{1}{\sigma_1^2}\Phi_1 + \frac{1}{\sigma_2^2}\Phi_2\right)\right\} \qquad (43)$$

where $\Phi_1 = \frac{1}{N}\sum_{n=1}^{N} DIS_i(n)$ and $\Phi_2 = DIS_i(C_k)$; $DIS_i(n)$ is the distance between each contour sample point and the observed value of the real contour point at that point; $DIS_i(C_k)$ is the distance between the centroid of the tracked target and the centroid of the contour of the i-th particle; $\sigma_1$ is the dispersion between the particle contour points and the observed values of the real contour points, and $\sigma_2$ is the dispersion between the particle centroid position and the tracked target centroid position.
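Formula (43) translates directly into code; the σ values below are illustrative tuning constants, since the text does not fix them.

```python
import math

def observation_density(contour_dists, centroid_dist,
                        sigma1=4.0, sigma2=6.0):
    """Observation probability density of formula (43): Phi_1 is the
    average of the N contour-point distances DIS_i(n); Phi_2 is the
    distance DIS_i(C_k) between the particle's contour centroid and
    the mean-shift centroid of the tracked target."""
    phi1 = sum(contour_dists) / len(contour_dists)
    phi2 = centroid_dist
    return math.exp(-0.5 * (phi1 / sigma1 ** 2 + phi2 / sigma2 ** 2))
```

A particle whose contour fits the observed edges well and whose centroid lies near the mean-shift centroid thus receives a density close to 1, and its weight grows in the update of step 410.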

Step 410: according to the observation probability density function $p_k^i$ of each particle obtained in step 409, update the weight of each particle according to formula (20).

Step 411: perform a weighted average of the motion state parameters of the particles with their weights according to formulas (21) to (25) to obtain the motion state parameters $T_k = (TX_k, TY_k, \theta_k, SX_k, SY_k)$ of the tracked target contour in the k-th frame.

Step 412: obtain the shape space parameter $S_k$ from the motion state parameters $T_k$.

Step 413: compute the contour control point vectors $QX_k$ and $QY_k$ from $S_k$ according to formula (27).

Step 414: fit the contour curve $y_k(x)$ of the object according to formula (28). This completes one round of contour tracking of the tracked target.

FIG. 5 is a structural diagram of the video object contour tracking device of this embodiment. As shown in FIG. 5, the device includes: a storage module 501, a contour extraction module 502, a random particle generation module 503, a particle state transition module 504, a centroid calculation module 505, a control module 506, a particle weight calculation module 507 and a contour fitting module 508. The method used by the device has already been described in detail, so the functions of the device are only briefly introduced below.

The storage module 501 stores the input image data.

The control module 506 controls the modules to complete the corresponding operations.

Upon receiving a command from the control module 506 to establish a new tracking target, the contour extraction module 502 uses an existing contour extraction technique to compute, from the initial image frame data in the storage module 501, the contour vector of the target and the B-spline control points $QX_0$ and $QY_0$, computes the contour centroid $TX_0$ and $TY_0$ from the obtained contour vector, and computes the initial state vector $T_0$ of the target.

The random particle generation module 503 initializes $N_s$ particles from the initial motion state vector $T_0$ computed by the contour extraction module 502; the initial weight $w_0^i$ of each particle is $1/N_s$, and the motion states are $T_0^i\ (i = 1, 2, \dots, N_s)$.

Here, each image frame after the initial frame is taken in turn as the current image frame data for the following processing.

质心计算模块505用于利用均值漂移算法计算目标质心在前M帧图片中的位置坐标{(CXt,CYt)}t=1,2,...,M。The centroid calculation module 505 uses the mean shift algorithm to calculate the position coordinates {(CX t , CY t )} t=1, 2, ..., M of the target centroid in the first M frames of images.

粒子状态转移模块504用于根据现有技术对各个粒子进行状态转移。The particle state transfer module 504 is used to perform state transfer for each particle according to the prior art.

质心计算模块505利用均值漂移算法计算目标质心在当前图像帧数据中的位置。The centroid calculation module 505 uses the mean shift algorithm to calculate the position of the target centroid in the current image frame data.
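The mean shift step used by the centroid calculation module can be sketched as an iterative weighted-centroid search. This is a minimal illustration, not the patent's implementation: the weight map, window size, and stopping threshold below are all assumptions; a real tracker would typically derive the per-pixel weights from a color-histogram back-projection of the target model.

```python
import numpy as np

def mean_shift_centroid(weight_map, start, win=15, iters=20, eps=0.5):
    """Iteratively move a search window to the weighted centroid of the
    pixels inside it; the fixed point approximates the target centroid."""
    cy, cx = start
    h, w = weight_map.shape
    for _ in range(iters):
        y0, y1 = max(0, int(cy) - win), min(h, int(cy) + win + 1)
        x0, x1 = max(0, int(cx) - win), min(w, int(cx) + win + 1)
        patch = weight_map[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:                      # window fell on empty support
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]     # absolute pixel coordinates
        ny = (ys * patch).sum() / total
        nx = (xs * patch).sum() / total
        done = abs(ny - cy) < eps and abs(nx - cx) < eps
        cy, cx = ny, nx
        if done:
            break
    return float(cy), float(cx)

# A uniform blob centred at (40, 60) pulls a window started at (30, 50).
img = np.zeros((100, 100))
img[35:46, 55:66] = 1.0
print(mean_shift_centroid(img, (30, 50)))   # (40.0, 60.0)
```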

粒子权重计算模块507用于通过当前图像帧数据中粒子轮廓点与观测轮廓点之间的距离,和粒子质心位置与质心计算模块505计算得到的跟踪目标质心位置之间的距离两个因素计算各个粒子的观测概率密度函数,根据获得的各个粒子的观测概率密度函数确定各粒子的权重值。The particle weight calculation module 507 computes the observation probability density function of each particle from two factors: the distance between the particle contour points and the observed contour points in the current image frame data, and the distance between the particle centroid position and the tracked-target centroid position computed by the centroid calculation module 505; it then determines each particle's weight value from the obtained observation probability density functions.
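A hedged sketch of how the two distance factors could be combined into particle weights. The patent's actual formula is not reproduced in this text, so the Gaussian-style combination and the `sigma1`/`sigma2` values below are illustrative assumptions; only the qualitative behavior — smaller contour and centroid distances give larger weights — follows the description.

```python
import numpy as np

def particle_weights(contour_dists, centroid_dists, sigma1=4.0, sigma2=8.0):
    """Combine the two distance factors into normalised particle weights.
    Gaussian-style scoring; smaller distances give larger weights."""
    d1 = np.asarray(contour_dists, dtype=float)    # contour-point error
    d2 = np.asarray(centroid_dists, dtype=float)   # centroid error
    p = np.exp(-d1**2 / (2 * sigma1**2) - d2**2 / (2 * sigma2**2))
    return p / p.sum()

# Particle 0 matches both the observed contour and the tracked centroid
# best, so it receives the largest weight.
w = particle_weights([1.0, 5.0, 9.0], [2.0, 4.0, 20.0])
print(w)
```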

轮廓拟合模块508用于利用各粒子及其权重值拟合得到当前图像帧数据中目标的轮廓曲线并输出。The contour fitting module 508 is used to obtain and output the contour curve of the target in the current image frame data by fitting each particle and its weight value.
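The weighted accumulation performed by the contour fitting module can be illustrated on B-spline control points: each particle contributes its control polygon in proportion to its weight. A minimal sketch; the array shapes and the function name are assumptions, not the patent's interface.

```python
import numpy as np

def fuse_contour(control_points, weights):
    """Weighted accumulation of the particles' B-spline control points.
    control_points has shape (Ns, n_ctrl, 2); the fused control polygon
    is the weighted average over the particle axis."""
    cp = np.asarray(control_points, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, cp, axes=(0, 0))   # shape (n_ctrl, 2)

# Two particles with equal weight: the fused polygon is the midpoint.
cps = [[[0.0, 0.0], [2.0, 0.0]],
       [[2.0, 2.0], [4.0, 2.0]]]
print(fuse_contour(cps, [0.5, 0.5]).tolist())   # [[1.0, 1.0], [3.0, 1.0]]
```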

只要继续向该装置输入新的图像帧数据或者该装置的存储模块中还有未处理的图像帧数据,该装置就针对这些图像帧数据计算跟踪目标的轮廓曲线。As long as new image frame data continues to be input to the device or there are unprocessed image frame data in the storage module of the device, the device calculates the contour curve of the tracking target with respect to these image frame data.

本实施例通过利用均值漂移算法得到的跟踪目标质心来参与评估各个候选粒子,使得各个粒子的权重与其轮廓质心位置与跟踪目标质心间的距离相关,使得由各个粒子加权累加得到的目标轮廓更接近真实目标轮廓,增加了轮廓跟踪的准确性。In this embodiment, the tracked-target centroid obtained by the mean shift algorithm participates in evaluating each candidate particle, so that each particle's weight is related to the distance between its contour centroid and the tracked-target centroid; the target contour obtained by weighted accumulation of the particles is thus closer to the real target contour, improving the accuracy of contour tracking.

实施例三Embodiment three

本实施例通过利用均值漂移算法得到的目标质心的位置及其变化来调整粒子状态转移的参数,使得粒子的状态转移能够随着目标质心的位置变化进行相应的调整;同时,还使用均值漂移算法得到的跟踪目标质心的位置和运动速度来对候选粒子进行评估,按照各粒子的轮廓的质心与跟踪目标质心的远近程度来影响粒子的权重值,使得各个粒子加权累加得到的目标轮廓更加接近真实的目标轮廓,以上两个方面使得目标轮廓的跟踪更加准确。In this embodiment, the position of the target centroid obtained by the mean shift algorithm, and its changes, are used to adjust the particle state transition parameters, so that the particle state transitions adapt as the target centroid position changes; at the same time, the position and moving speed of the tracked-target centroid obtained by the mean shift algorithm are used to evaluate the candidate particles, and each particle's weight value is influenced by how close its contour centroid is to the tracked-target centroid, so that the target contour obtained by weighted accumulation of the particles is closer to the real target contour. These two aspects make the tracking of the target contour more accurate.

图6为本实施例进行视频目标轮廓跟踪的流程图,主要包含下面的步骤。FIG. 6 is a flowchart of video object contour tracking in this embodiment, which mainly includes the following steps.

步骤601,由输入的图像数据判断是否是新的跟踪对象,即是否需要建立新的跟踪目标,如果是,则执行步骤602;如果不是,则执行步骤615。Step 601, judging from the input image data whether it is a new tracking target, that is, whether a new tracking target needs to be established, if yes, go to step 602; if not, go to step 615.

输入的图像数据可以是经过背景分割技术得到的图像数据和选定的跟踪目标的索引,或者是数据图像以及在其中由用户手工输入的跟踪目标所在区域。The input image data may be the image data obtained through the background segmentation technology and the index of the selected tracking target, or the data image and the area where the tracking target is manually input by the user.

步骤602,根据输入的跟踪目标的索引或区域利用现有的轮廓提取技术得到目标的轮廓向量和B样条控制点QX0和QY0,根据得到的轮廓向量计算轮廓的质心TX0和TY0,计算目标的初始状态向量T0,其计算公式为公式(4),本步骤与背景技术中的步骤102相同。Step 602: Obtain the contour vector of the target and B-spline control points QX 0 and QY 0 by using the existing contour extraction technology according to the input index or area of the tracking target, and calculate the centroid TX 0 and TY 0 of the contour according to the obtained contour vector , to calculate the initial state vector T 0 of the target, the calculation formula of which is formula (4), this step is the same as step 102 in the background art.

步骤603,由初始的运动状态向量T0进行随机粒子初始化,初始化Ns个粒子,各个粒子的初始权重w0 i均为1/Ns,运动状态分别为T0 i(i=1,2,…Ns),其计算公式为公式(5)至(9)。Step 603: perform random particle initialization from the initial motion state vector T 0 : initialize N s particles, each with initial weight w 0 i equal to 1/N s and motion state T 0 i (i=1, 2, …, Ns); the calculation formulas are formulas (5) to (9).

B3,B4,B5均为常数,B4,B5取[0.15,0.5],B1,B2初始值取[3,15],ξ为[-1,+1]的随机数。本步骤与背景技术中的步骤103相同。B 3 , B 4 and B 5 are constants; B 4 and B 5 take values in [0.15, 0.5], the initial values of B 1 and B 2 are taken in [3, 15], and ξ is a random number in [-1, +1]. This step is the same as step 103 in the background art.
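Step 603's initialization can be sketched as follows. Formulas (5) to (9) are not reproduced in this text, so the per-component uniform perturbation `B_j * xi` below — including the concrete `B` values — is an assumption modeled on the B1..B5 constants and the ξ ∈ [-1, +1] random numbers described above.

```python
import numpy as np

def init_particles(T0, Ns=100, B=(5.0, 5.0, 0.02, 0.3, 0.3), rng=None):
    """Spread Ns particles around the initial state T0 = (TX, TY, theta,
    SX, SY): each component is perturbed by B_j * xi with xi uniform in
    [-1, 1].  All particles start with weight 1/Ns."""
    if rng is None:
        rng = np.random.default_rng(0)
    T0 = np.asarray(T0, dtype=float)
    xi = rng.uniform(-1.0, 1.0, size=(Ns, 5))
    states = T0 + xi * np.asarray(B, dtype=float)
    weights = np.full(Ns, 1.0 / Ns)
    return states, weights

states, weights = init_particles([160.0, 120.0, 0.0, 1.0, 1.0], Ns=200)
print(states.shape, round(weights.sum(), 6))   # (200, 5) 1.0
```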

步骤615,利用均值漂移算法计算目标质心在前M帧图片中的位置坐标{(CXt,CYt)}t=1,2,...,MStep 615, using the mean shift algorithm to calculate the position coordinates {(CX t , CY t )} t=1, 2, . . . , M of the target centroid in the previous M frames of pictures.

步骤616,根据步骤615得到的目标质心的坐标可以按照公式(38)和(39)得到目标由第t帧到第t+1帧时在x方向和y方向的运动速度Vt x和Vt y。Step 616: from the target centroid coordinates obtained in step 615, the target's moving speeds V t x and V t y in the x and y directions from frame t to frame t+1 are obtained according to formulas (38) and (39).
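Step 616's velocity computation amounts to frame-to-frame differences of the tracked centroid positions. Formulas (38) and (39) are not reproduced here; the sketch below assumes the simplest definition, V_t = C_{t+1} − C_t (displacement in pixels per frame along x and y).

```python
import numpy as np

def centroid_velocities(centroids):
    """Per-frame velocity of the tracked centroid: row t holds
    (V_t^x, V_t^y) = C_{t+1} - C_t."""
    c = np.asarray(centroids, dtype=float)
    return np.diff(c, axis=0)    # shape (M-1, 2)

track = [(10.0, 5.0), (13.0, 5.0), (17.0, 6.0)]
print(centroid_velocities(track).tolist())   # [[3.0, 0.0], [4.0, 1.0]]
```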

步骤617,按照公式(60)调整参数B1,B2,本步骤与实施例一的步骤217相同。Step 617, adjust parameters B 1 and B 2 according to formula (60), this step is the same as step 217 in the first embodiment.

步骤604,根据步骤617得到的经过自适应调整的参数B1 k和B2 k,对各个粒子进行状态转移得到各粒子的运动状态向量参数,本步骤与实施例一的步骤204相同。Step 604 , according to the adaptively adjusted parameters B 1 k and B 2 k obtained in step 617 , perform state transition on each particle to obtain the motion state vector parameters of each particle. This step is the same as step 204 in the first embodiment.

步骤605,根据步骤604得到的各个粒子的运动状态向量参数按照公式(16)计算各个粒子对应的形状空间参数Sk iStep 605, according to the motion state vector parameters of each particle obtained in step 604, the shape space parameter S k i corresponding to each particle is calculated according to the formula (16).

步骤606,根据得到的各个粒子的形状空间参数按照公式(17)计算各自的B样条控制点向量。Step 606: calculate the respective B-spline control point vectors from the obtained shape space parameters of each particle according to formula (17).

步骤607,用公式(18)拟合出各个粒子对应的轮廓曲线,其中Bk,m为m次规范B样条基函数,m可以取为3。Step 607, using the formula (18) to fit the contour curves corresponding to each particle, where B k, m is the m-degree canonical B-spline basis function, and m can be taken as 3.
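Step 607 fits each contour with a cubic (m = 3) B-spline. Below is a minimal sketch of evaluating one uniform cubic B-spline segment from four control points; a full contour would be traced by sliding this four-point window along the closed control polygon. The function name and interface are illustrative, not the patent's formula (18).

```python
import numpy as np

def cubic_bspline_point(P, t):
    """Evaluate one segment of a uniform cubic (m = 3) B-spline from its
    four control points P[0..3] at parameter t in [0, 1]."""
    P = np.asarray(P, dtype=float)
    b = np.array([(1 - t) ** 3,
                  3 * t**3 - 6 * t**2 + 4,
                  -3 * t**3 + 3 * t**2 + 3 * t + 1,
                  t**3]) / 6.0                # basis weights, sum to 1
    return b @ P

# Collinear control points stay on the line; the segment spans the spline
# points at t = 0 and t = 1.
seg = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(cubic_bspline_point(seg, 0.0))
print(cubic_bspline_point(seg, 1.0))
```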

步骤608,对候选粒子进行评估,计算轮廓各样点与该点处的真实轮廓点的观测值之间的距离DISi(n)和粒子质心位置与跟踪目标质心位置之间的距离DISi(Ck)。本步骤与实施例二中的步骤408相同。Step 608: evaluate the candidate particles by calculating the distance DIS i (n) between each contour sample point and the observed value of the real contour point at that point, and the distance DIS i (C k ) between the particle centroid position and the tracked-target centroid position. This step is the same as step 408 in the second embodiment.

步骤609,由步骤608中得到的两个距离衡量因素可按照公式(43)得到各个粒子的观测概率密度函数pk i。本步骤与实施例二的步骤409相同。Step 609, from the two distance measurement factors obtained in step 608, the observation probability density function p k i of each particle can be obtained according to formula (43). This step is the same as step 409 in the second embodiment.

步骤610,根据步骤609中得到的各个粒子的观测概率密度函数pk i按照公式(20)进行各个粒子的权值更新。Step 610 , according to the observation probability density function p k i of each particle obtained in step 609 , update the weight of each particle according to formula (20).
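Step 610's weight update follows the standard particle-filter form: multiply each previous weight by the particle's observation probability density and renormalise. Formula (20) itself is not reproduced in this text, so this is a sketch of the usual update rather than a verbatim implementation.

```python
import numpy as np

def update_weights(prev_weights, likelihoods):
    """Multiply each previous weight by the particle's observation
    probability density p_k^i and renormalise so the weights sum to 1."""
    w = np.asarray(prev_weights, dtype=float) * np.asarray(likelihoods, dtype=float)
    return w / w.sum()

w = update_weights([0.25, 0.25, 0.25, 0.25], [0.9, 0.1, 0.5, 0.5])
print(w)
```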

步骤611,按照公式(21)到(25)由各个粒子的运动状态参数以及它们的权重值进行加权平均,得到跟踪目标的运动状态参数Step 611, according to formulas (21) to (25), carry out weighted average by the motion state parameters of each particle and their weight values, and obtain the motion state parameters of the tracking target

Tk=(TXk,TYk,θk,SXk,SYk)。T k = (TX k , TY k , θ k , SX k , SY k ).
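The weighted average of step 611 (the role of formulas (21) to (25), which are not reproduced here) reduces to a weight-vector/matrix product; the sketch below assumes one row per particle state.

```python
import numpy as np

def fuse_state(states, weights):
    """Weighted average of the particle motion-state vectors
    T^i = (TX, TY, theta, SX, SY), giving the tracked target state T_k."""
    states = np.asarray(states, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ states

T = fuse_state([[10.0, 20.0, 0.0, 1.0, 1.0],
                [14.0, 24.0, 0.2, 1.2, 1.0]],
               [0.75, 0.25])
print(T)
```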

步骤612,由运动状态参数Tk得到形状空间参数SkStep 612, obtain the shape space parameter S k from the motion state parameter T k .

步骤613,按照公式(27)由Sk算得轮廓的控制点向量QXk和QYkStep 613, calculate the control point vectors QX k and QY k of the contour from S k according to formula (27).

步骤614,按照公式(28)拟合出物体的轮廓曲线yk(x)。这样就完成了对跟踪目标的一次轮廓跟踪。Step 614, fitting the contour curve y k (x) of the object according to formula (28). In this way, a contour tracking of the tracking target is completed.

本实施例实际上是通过同时使用实施例一和实施例二中的两种手段来使对目标轮廓的跟踪同时具有实施例一的跟踪稳定性高的优点和实施例二的跟踪准确性高的优点。By applying the techniques of the first and second embodiments simultaneously, this embodiment gives contour tracking both the high tracking stability of the first embodiment and the high tracking accuracy of the second embodiment.

本领域技术人员应该容易地将实施例一和实施例二中的两种装置结合起来组成同时具有所述两种装置的功能的新装置,该装置用于实现本实施例中的视频目标轮廓的跟踪方法,对视频目标进行准确地跟踪。Those skilled in the art can readily combine the two devices of the first and second embodiments into a new device having the functions of both; this device implements the video object contour tracking method of this embodiment and tracks the video target accurately.

图7为采用现有技术的Condensation算法进行视频目标轮廓跟踪的效果图;图8为采用本发明实施例三的视频目标轮廓跟踪方法的跟踪效果图。我们选取了开始、中间和结束时的几帧代表图片。图片中用白色曲线标示出的汽车轮廓即为计算机利用轮廓跟踪方法"看"到的汽车的轮廓。可以看到,没有引入均值漂移算法时,跟踪过程中出现了跟踪不稳定的现象,这是因为没有引入均值漂移算法时,状态转移参数固定不变;引入了均值漂移算法实现了状态转移参数对汽车运动速度变化的自适应调整和加入了用均值漂移方法跟踪的汽车质心对各个粒子进行评估的方法,使跟踪效果有了很大改善,跟踪更为稳定和准确。FIG. 7 shows the result of video object contour tracking with the prior-art Condensation algorithm; FIG. 8 shows the tracking result of the video object contour tracking method of Embodiment 3 of the present invention. A few representative frames from the beginning, middle, and end are shown. The car contour marked with a white curve in each picture is the contour of the car as "seen" by the computer using the contour tracking method. It can be seen that, without the mean shift algorithm, tracking becomes unstable, because the state transition parameters then remain fixed; introducing the mean shift algorithm enables adaptive adjustment of the state transition parameters to changes in the car's speed, and using the mean-shift-tracked car centroid to evaluate each particle further improves the result, making the tracking much more stable and accurate.

下表为使用实施例三的方法时状态转移参数B1,B2与汽车质心运动速度的变化关系(此处采用的是线性变化关系)。The table below shows the variation relationship between the state transition parameters B 1 and B 2 and the moving speed of the center of mass of the vehicle when the method of Embodiment 3 is used (the linear variation relationship is adopted here).

表1 B1跟随Vx的变化 Table 1: Variation of B 1 with V x

    V<sub>x</sub>    1    2    5    8    10
    B<sub>1</sub>    3.2    4.5    5.8    7.1    8.4

表2 B2跟随Vy的变化 Table 2: Variation of B 2 with V y

    V<sub>y</sub>    1    2    3    5    6
    B<sub>2</sub>    3.2    4.5    5.8    7.1    8.4
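Given Table 1 above (and analogously Table 2 for B2 versus Vy), the adaptive parameter can be obtained by table lookup. Piecewise-linear interpolation between the tabulated rows is an assumption — the text states only that a linear relationship was used.

```python
import numpy as np

# Table 1 from the text: state transition parameter B1 versus the
# centroid speed Vx.
VX = np.array([1.0, 2.0, 5.0, 8.0, 10.0])
B1 = np.array([3.2, 4.5, 5.8, 7.1, 8.4])

def b1_for_speed(vx):
    """Piecewise-linear lookup of B1 for a given centroid x-speed,
    clamped to the tabulated range."""
    return float(np.interp(vx, VX, B1))

print(b1_for_speed(5.0))             # 5.8 (exact table entry)
print(round(b1_for_speed(6.5), 2))   # 6.45 (midway between 5.8 and 7.1)
```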

本实施例中是用粒子滤波跟踪算法中的Condensation算法和质心跟踪算法中的均值漂移算法为例的。In this embodiment, the Condensation algorithm in the particle filter tracking algorithm and the mean shift algorithm in the centroid tracking algorithm are used as examples.

常见的其它粒子滤波方法有SIR算法,ASIR算法,RPF算法等。Other common particle filter methods include SIR algorithm, ASIR algorithm, RPF algorithm, etc.

上面的这些粒子滤波算法在粒子状态转移(粒子传播)时与Condensation算法具有相同形式的状态转移方程,所以本领域技术人员应当能够将本发明实施例的方法应用在其它粒子滤波算法当中得到更优的轮廓跟踪结果。These particle filter algorithms share the same form of state transition equation as the Condensation algorithm during particle state transition (particle propagation), so those skilled in the art should be able to apply the methods of the embodiments of the present invention to other particle filter algorithms to obtain better contour tracking results.
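For the SIR-type filters mentioned above, a resampling step normally follows the weight update. Resampling is standard particle-filter machinery rather than part of the description here; a common variant, systematic resampling, can be sketched as:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: draw Ns ordered positions with a single
    random offset and map them through the cumulative weight
    distribution, returning the indices of the surviving particles."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

# A particle holding 70% of the weight dominates the resampled index set.
idx = systematic_resample([0.7, 0.1, 0.1, 0.1])
print(idx)
```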

另外,任何能够得到目标质心位置的跟踪算法都可以代替均值漂移算法应用在本发明实施例的实施之中。In addition, any tracking algorithm that can obtain the position of the center of mass of the target can replace the mean shift algorithm and be used in the implementation of the embodiments of the present invention.

由上述的实施例可见,本发明实施例的这种视频目标轮廓跟踪方法通过跟踪目标的质心得到的目标质心的位置和运动速度来调整轮廓跟踪过程中粒子状态转移的参数,使粒子的状态转移能够随着目标质心的位置变化进行相应的调整,从而使对目标轮廓的跟踪更准确。As can be seen from the above embodiments, the video object contour tracking method of the embodiments of the present invention uses the position and moving speed of the target centroid, obtained by tracking the target's centroid, to adjust the particle state transition parameters during contour tracking, so that the particle state transitions adjust correspondingly as the target centroid position changes, making the tracking of the target contour more accurate.

本发明实施例的另一种视频目标轮廓跟踪方法通过跟踪目标的质心得到跟踪目标质心的位置和运动速度来对候选粒子进行评估,按照各粒子的轮廓的质心与得到的目标质心的远近程度来影响粒子的权重值,使得各个粒子加权累加得到的目标轮廓更加接近真实的目标轮廓,使得对目标轮廓的跟踪更加准确。Another video object contour tracking method of the embodiments of the present invention evaluates the candidate particles using the position and moving speed of the tracked-target centroid obtained by tracking the target's centroid, and influences each particle's weight value according to how close its contour centroid is to the obtained target centroid, so that the target contour obtained by weighted accumulation of the particles is closer to the real target contour and the tracking of the target contour is more accurate.

本发明实施例的一种视频目标轮廓跟踪装置,用于跟踪目标的质心得到目标质心的位置,并利用该质心位置调整轮廓跟踪所得的各个随机粒子的状态转移参数,使随机粒子的状态转移参数随着目标质心位置的变化进行相应的变化,目标的质心位置变化较大时随机粒子的位置变化随之变大,得到的目标轮廓的跟踪结果更准确。A video object contour tracking device of the embodiments of the present invention tracks the target's centroid to obtain the position of the target centroid, and uses this centroid position to adjust the state transition parameters of each random particle in contour tracking, so that the state transition parameters of the random particles change correspondingly with the target centroid position: when the target centroid position changes more, the position changes of the random particles become larger, and the resulting tracking of the target contour is more accurate.

本发明实施例的另一种视频目标轮廓跟踪装置,用于跟踪目标的质心得到跟踪目标质心的位置,并利用该质心位置来对各个随机粒子进行评估,按照各粒子的轮廓的质心与跟踪目标质心的远近程度来调整粒子的权重值,使得接近真实轮廓的粒子的权重值增大,偏离真实轮廓的粒子的权重值减小,各个粒子加权累加得到的目标轮廓更加接近真实的目标轮廓,得到的目标轮廓的跟踪结果更准确。Another video object contour tracking device of the embodiments of the present invention tracks the target's centroid to obtain the position of the tracked-target centroid, and uses this centroid position to evaluate each random particle, adjusting each particle's weight value according to how close its contour centroid is to the tracked-target centroid, so that the weights of particles close to the real contour increase and the weights of particles deviating from it decrease; the target contour obtained by weighted accumulation of the particles is closer to the real target contour, and the resulting tracking result is more accurate.

综上所述,以上仅为本发明的部分实施例而已,并非用于限定本发明的保护范围。凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。To sum up, the above are only some embodiments of the present invention, and are not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (17)

1. A video object contour tracking method, comprising:
extracting an object contour from initial-frame image data, and generating a plurality of random particles using the extracted object contour;
performing a state transition on each random particle in current-frame image data to obtain the contour of each random particle;
randomly drawing a plurality of sampling points on the contour of each random particle, and calculating, for each sampling point, the observed value of its true contour point;
calculating the distance between each sampling point and the observed value of its true contour point;
determining the weight value of each random particle according to the obtained distances between the sampling points on each random particle's contour and the observed values of their true contour points; and
weighting and accumulating all the random particles to obtain the contour of the tracked object in the current frame image;
characterized in that, before performing the state transition on each random particle in the current-frame image data, the method further comprises:
tracking the centroid of the object to obtain the position of the object centroid; and
adjusting the parameters of the random-particle state transition using the obtained position of the object centroid.
2. The method according to claim 1, characterized in that the parameters of the random-particle state transition are parameters for shifting the position of the centroid of the random particle's contour; and
the adjusting of the parameters of the random-particle state transition using the obtained position of the object centroid comprises:
making the parameters for shifting the position of the centroid of the random particle's contour increase as the position difference of the object centroid between two adjacent frames increases, and decrease as that position difference decreases.
3. The method according to claim 1, characterized in that tracking the centroid of the object to obtain the position of the object centroid comprises:
calculating, from the positions of the object centroid in each preceding frame obtained by centroid tracking, the set of moving speeds of the object centroid in each direction between consecutive frames; and
the adjusting of the parameters of the random-particle state transition using the obtained position of the object centroid comprises:
predicting and adjusting the parameters of the random-particle state transition using said set of speeds, to obtain the adjusted state transition parameters of the random particles.
4. The method according to claim 3, characterized in that performing the state transition on each random particle comprises:
performing the state transition on each random particle using the following state transition equations:
TX k i = TX k-1 i + B 1 ·ξ 1-k i
TY k i = TY k-1 i + B 2 ·ξ 2-k i
θ k i = θ k-1 i + B 3 ·ξ 3-k i
SX k i = SX k-1 i + B 4 ·ξ 4-k i
SY k i = SY k-1 i + B 5 ·ξ 5-k i
wherein TX and TY are respectively the positions of the object contour centroid in the x direction and the y direction, θ is the rotation angle of the object contour, SX and SY are respectively the scales of the object in the x direction and the y direction, k is the frame number of the image data, i=1, 2, …, N s , where N s is the number of initialized particles, ξ 1-k i , ξ 2-k i , ξ 3-k i , ξ 4-k i , ξ 5-k i are random numbers in [-1, +1], B 3 , B 4 , B 5 are constants, and the adjusted state transition parameters are B 1 and B 2 ;
the set of moving speeds of the object centroid in each direction between consecutive frames, calculated from the positions of the object centroid in each preceding frame obtained by centroid tracking, is {(V t x , V t y )} t=0, 1, ..., M-1 , wherein V t x is the speed of the object centroid in the x direction from frame t to frame t+1, V t y is the speed of the object centroid in the y direction from frame t to frame t+1, and the initial frame is frame 0;
the predicting and adjusting of the parameters of the random-particle state transition using said set of speeds, to obtain the adjusted state transition parameters, comprises:
predicting, from the set of moving speeds {(V t x , V t y )} t=0, 1, ..., M-1 of the object centroid between frames, the state transition parameters B 1 k and B 2 k of the random particles at frame k:
B 1 k = f({V t x }, B 1 ini ), B 2 k = f({V t y }, B 2 ini )
wherein f(·) is a prediction function, and B 1 ini and B 2 ini are the values of B 1 and B 2 set initially.
5. The method according to claim 1, characterized in that tracking the centroid of the object comprises: tracking the centroid of the object using a mean shift tracking algorithm.
6. The method according to claim 1, characterized in that the object contour and the contours of the random particles are characterized using B-splines.
7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
obtaining the centroid of each particle's contour from the contour of each random particle;
calculating the distance between the object centroid and the centroid of each particle's contour using the obtained position of the object centroid; and
adjusting the weight value of each random particle using the distance between the object centroid and the centroid of each particle's contour.
8. The method according to claim 7, characterized in that adjusting the weight value of each particle comprises:
calculating the observation probability density function of each particle according to the following formula:
[formula image not reproduced: p k i as a function of Φ 1 and Φ 2 with dispersions σ 1 and σ 2 ]
wherein Φ 1 is computed from the distances DIS i (n) over the sampling points [formula image not reproduced], Φ 2 = DIS i (C k ), DIS i (n) is the distance between each sampling point on the contour of particle i and the observed value of its true contour point, DIS i (C k ) is the distance between the calculated object centroid and the centroid of the contour of particle i, N is the number of sampling points on the contour curve, σ 1 is the dispersion between the particle contour points and the observed values of the true contour points, and σ 2 is the dispersion between the particle centroid position and the tracked-target centroid position; and
updating the weight value of each particle from the previous frame to obtain the weight value of each particle in the current frame:
w k i = w k-1 i ·p k i / Σ j w k-1 j ·p k j
wherein p k i is the observation probability density function of the i-th particle at frame k, and w k i is the weight value of the i-th particle at frame k.
9. The method according to claim 1, characterized in that the number of random particles is more than 50.
10. A video object contour tracking method, comprising:
extracting an object contour from initial-frame image data, and generating a plurality of random particles using the extracted object contour;
performing a state transition on each random particle in current-frame image data to obtain the contour of each random particle;
randomly drawing a plurality of sampling points on the contour of each random particle, and calculating, for each sampling point, the observed value of its true contour point;
calculating the distance between each sampling point and the observed value of its true contour point;
determining the weight value of each random particle according to the obtained distances between the sampling points on each random particle's contour and the observed values of their true contour points; and
weighting and accumulating all the random particles to obtain the contour of the tracked object in the current frame image;
characterized in that determining the weight value of each random particle according to the obtained distances further comprises:
tracking the centroid of the object to obtain the position of the object centroid;
obtaining the centroid of each particle's contour from the obtained contour of each random particle;
calculating the distance between the object centroid and the centroid of each particle's contour using the obtained position of the object centroid; and
adjusting the weight value of each particle according to the distance between the obtained object centroid and the contour centroid of each particle.
11. The method according to claim 10, characterized in that adjusting the weight value of each particle comprises:
calculating the observation probability density function of each particle according to the following formula:
[formula image not reproduced: p k i as a function of Φ 1 and Φ 2 with dispersions σ 1 and σ 2 ]
wherein Φ 1 is computed from the distances DIS i (n) over the sampling points [formula image not reproduced], Φ 2 = DIS i (C k ), DIS i (n) is the distance between each sampling point on the contour of particle i and the observed value of its true contour point, DIS i (C k ) is the distance between the obtained object centroid and the centroid of the contour of particle i, N is the number of sampling points on the contour curve, σ 1 is the dispersion between the particle contour points and the observed values of the true contour points, and σ 2 is the dispersion between the particle centroid position and the tracked-target centroid position; and
updating the weight value of each particle from the previous frame to obtain the weight value of each particle in the current frame:
w k i = w k-1 i ·p k i / Σ j w k-1 j ·p k j
wherein p k i is the observation probability density function of the i-th particle at frame k, and w k i is the weight value of the i-th particle at frame k.
12. The method according to claim 10, characterized in that tracking the centroid of the object comprises: tracking the centroid of the object using a mean shift tracking algorithm.
13. The method according to claim 10, characterized in that the object contour and the contours of the random particles are characterized using B-splines.
14. The method according to claim 10, characterized in that the number of random particles is more than 50.
15. A video object contour tracking apparatus, comprising:
a contour extraction module, configured to extract an object contour from initial-frame image data;
a random particle generation module, configured to generate a plurality of random particles using the object contour extracted by the contour extraction module;
a particle state transition module, configured to perform a state transition on each random particle in current-frame image data to obtain the contour of each random particle;
a particle weight calculation module, configured to randomly draw a plurality of sampling points on the contour of each random particle, calculate for each sampling point the observed value of its true contour point, calculate the distance between each sampling point and the observed value of its true contour point, and determine the weight value of each random particle according to the obtained distances between the sampling points on each random particle's contour and the observed values of their true contour points; and
a contour fitting module, configured to weight and accumulate all the random particles to obtain the contour of the tracked object in the current frame image;
characterized in that the apparatus further comprises:
a centroid calculation module, configured to track the centroid of the object to obtain the position of the object centroid;
wherein the particle state transition module is further configured to, before performing the state transition on each random particle in the current-frame image data, adjust the parameters of the random-particle state transition using the position of the object centroid obtained by the centroid calculation module.
16. The apparatus according to claim 15, characterized in that
the particle weight calculation module is further configured to obtain the centroid of each particle's contour from the contour of each random particle obtained by the particle state transition module, calculate the distance between the object centroid and the centroid of each particle's contour using the position of the object centroid obtained by the centroid calculation module, and adjust the weight value of each particle according to the distance between the obtained object centroid and the contour centroid of each particle.
17. A video object contour tracking apparatus, comprising:
a contour extraction module, configured to extract an object contour from initial-frame image data;
a random particle generation module, configured to generate a plurality of random particles using the object contour extracted by the contour extraction module;
a particle state transition module, configured to perform a state transition on each random particle in current-frame image data to obtain the contour of each random particle;
a particle weight calculation module, configured to randomly draw a plurality of sampling points on the contour of each random particle, calculate for each sampling point the observed value of its true contour point, calculate the distance between each sampling point and the observed value of its true contour point, and determine the weight value of each random particle according to the obtained distances between the sampling points on each random particle's contour and the observed values of their true contour points; and
a contour fitting module, configured to weight and accumulate all the random particles to obtain the contour of the tracked object in the current frame image;
characterized in that the apparatus further comprises a centroid calculation module, configured to track the centroid of the object to obtain the position of the object centroid;
wherein the particle weight calculation module is further configured to obtain the centroid of each particle's contour from the contour of each random particle obtained by the particle state transition module, calculate the distance between the object centroid and the centroid of each particle's contour using the position of the object centroid obtained by the centroid calculation module, and adjust the weight value of each particle according to the distance between the obtained object centroid and the contour centroid of each particle.
CN2007101541207A 2007-09-17 2007-09-17 Video target contour tracking method and device Active CN101394546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101541207A CN101394546B (en) 2007-09-17 2007-09-17 Video target contour tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2007101541207A CN101394546B (en) 2007-09-17 2007-09-17 Video target contour tracking method and device

Publications (2)

Publication Number Publication Date
CN101394546A CN101394546A (en) 2009-03-25
CN101394546B true CN101394546B (en) 2010-08-25

Family

ID=40494580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101541207A Active CN101394546B (en) 2007-09-17 2007-09-17 Video target contour tracking method and device

Country Status (1)

Country Link
CN (1) CN101394546B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831423B (en) * 2012-07-26 2014-12-03 武汉大学 SAR (synthetic aperture radar) image road extracting method
KR20140031613A (en) * 2012-09-05 2014-03-13 삼성전자주식회사 Apparatus and method for processing image
CN103559723B (en) * 2013-10-17 2016-04-20 同济大学 A kind of human body tracing method based on self-adaptive kernel function and mean shift
CN104376576B (en) * 2014-09-04 2018-06-05 华为技术有限公司 A kind of method for tracking target and device
CN104392469B (en) * 2014-12-15 2017-05-31 辽宁工程技术大学 A kind of method for tracking target based on soft characteristic theory
CN106297292A (en) * 2016-08-29 2017-01-04 苏州金螳螂怡和科技有限公司 Based on highway bayonet socket and the Trajectory System of comprehensively monitoring
CN106791294A (en) * 2016-11-25 2017-05-31 益海芯电子技术江苏有限公司 Motion target tracking method
CN106875426B (en) * 2017-02-21 2020-01-21 中国科学院自动化研究所 Visual tracking method and device based on related particle filtering
CN107818651A (en) * 2017-10-27 2018-03-20 华润电力技术研究院有限公司 A method and device for illegal border crossing alarm based on video surveillance
CN108010032A (en) * 2017-12-25 2018-05-08 北京奇虎科技有限公司 Video landscape processing method and processing device based on the segmentation of adaptive tracing frame
CN110830846B (en) * 2018-08-07 2022-02-22 阿里巴巴(中国)有限公司 Video clipping method and server
CN109212480B (en) * 2018-09-05 2020-07-28 浙江理工大学 A Sound Source Tracking Method Based on Distributed Auxiliary Particle Filtering
CN112214535A (en) * 2020-10-22 2021-01-12 上海明略人工智能(集团)有限公司 Similarity calculation method and system, electronic device and storage medium
CN114004891B (en) * 2021-11-04 2025-03-21 广东电网有限责任公司 A distribution network line inspection method based on target tracking and related devices
CN119022794B (en) * 2024-07-17 2025-06-10 清研达维汽车科技(苏州)有限公司 Geometric parameter calibration system for whole vehicle in-loop test vehicle to be tested

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606033A (en) * 2004-11-18 2005-04-13 上海交通大学 Weak target detecting and tracking method in infrared image sequence
CN101026759A (en) * 2007-04-09 2007-08-29 华为技术有限公司 Visual tracking method and system based on particle filtering
JP2007233798A (en) * 2006-03-02 2007-09-13 Nippon Hoso Kyokai <Nhk> Video object tracking device and video object tracking program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606033A (en) * 2004-11-18 2005-04-13 上海交通大学 Weak target detecting and tracking method in infrared image sequence
JP2007233798A (en) * 2006-03-02 2007-09-13 Nippon Hoso Kyokai <Nhk> Video object tracking device and video object tracking program
CN101026759A (en) * 2007-04-09 2007-08-29 华为技术有限公司 Visual tracking method and system based on particle filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Junxi et al. A real-time target tracking system based on particle filtering. Video Engineering, 2000, 31(3): 85-87. *
Meng Bo et al. Application of the particle filter algorithm in nonlinear target tracking systems. Optics and Precision Engineering, 2007, 15(9): 1421-1426. *

Also Published As

Publication number Publication date
CN101394546A (en) 2009-03-25

Similar Documents

Publication Publication Date Title
CN101394546B (en) Video target contour tracking method and device
CN104766320B (en) Many Bernoulli Jacob under thresholding is measured filter Faint target detection and tracking
CN105405151B (en) Anti-Occlusion Target Tracking Method Based on Particle Filter and Weighted Surf
CN105809693B (en) SAR image registration method based on deep neural network
CN107633226B (en) Human body motion tracking feature processing method
Liu et al. Optical flow based urban road vehicle tracking
CN103632382B (en) A kind of real-time multiscale target tracking based on compressed sensing
CN105046717B (en) A kind of video object method for tracing object of robustness
CN101493889B (en) Method and apparatus for tracking video object
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN110659591A (en) SAR image change detection method based on twin network
CN105279769B (en) A kind of level particle filter tracking method for combining multiple features
CN106548462A (en) Non-linear SAR image geometric correction method based on thin-plate spline interpolation
CN105549009B (en) A kind of SAR image CFAR object detection methods based on super-pixel
CN105590325B (en) High-resolution remote sensing image dividing method based on blurring Gauss member function
CN103150738A (en) Detection method of moving objects of distributed multisensor
CN101951464A (en) Real-time video image stabilizing method based on integral image characteristic block matching
CN106952274A (en) Pedestrian detection and ranging method based on stereo vision
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN112016454B (en) A detection method for face alignment
CN104036526A (en) Gray target tracking method based on self-adaptive window
CN101877134A (en) A Robust Tracking Method for Airport Surveillance Video Targets
CN107610156A (en) Infrared small object tracking based on guiding filtering and core correlation filtering
CN115294398A (en) SAR image target recognition method based on multi-attitude angle joint learning
CN112991394B (en) KCF target tracking method based on cubic spline interpolation and Markov chain

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant