
CN103839274A - Extended target tracking method based on geometric proportion relation


Info

Publication number
CN103839274A
Authority
CN
China
Prior art keywords: fuselage, point, wing, image, length
Prior art date
Legal status
Granted
Application number
CN201410114293.6A
Other languages
Chinese (zh)
Other versions
CN103839274B (en)
Inventor
胡锦龙
彭先蓉
李红川
魏宇星
祁小平
Current Assignee
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS filed Critical Institute of Optics and Electronics of CAS
Priority to CN201410114293.6A priority Critical patent/CN103839274B/en
Publication of CN103839274A publication Critical patent/CN103839274A/en
Application granted granted Critical
Publication of CN103839274B publication Critical patent/CN103839274B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an extended target tracking method based on a geometric proportion relation. First, Gaussian smoothing filtering is applied to the image to be processed to suppress the influence of noise on the subsequent algorithms. Second, the fuzzy C-means clustering algorithm (FCM) is used to segment the smoothed image and obtain a binary image. The skeleton of the binary image is then extracted, the nose and tail feature points are selected from the skeleton points, and the length of the fuselage in the image plane is calculated. Using the known ratio, on the real aircraft, between the distance from the nose to the fuselage-wing intersection and the fuselage length, and exploiting the invariance of this geometric proportion when the aircraft self-occludes, the intersection of the fuselage and the wing is obtained, which gives the tracking point. The invention solves the problem that the wing skeleton cannot be extracted when the aircraft self-occludes, which would otherwise cause the tracking point to be lost, and achieves stable tracking of the aircraft under self-occlusion.

Description

An extended target tracking method based on a geometric proportion relation
Technical field
The present invention relates to a maneuvering extended target tracking method, and in particular to a method that tracks a maneuvering extended target according to a geometric proportion relation. It is mainly used in image processing and computer vision, and belongs to the technical field of target detection and tracking in photoelectric measurement systems.
Background technology
In a photoelectric measurement system, the field of view of the detector is kept small to improve tracking accuracy, while the target size is relatively large, so the target appears as an extended target on the detector. For long-range imaging, degradation factors such as atmospheric turbulence, system jitter and the aberrations of the optical system make the target blurred and of poor contrast in the image. In addition, the target has little texture and lacks distinctive features that can characterize and identify it. The target attitude also changes markedly, and as the attitude changes, the tracking point tends to drift. Selecting stable feature points for lock-on tracking is therefore a major difficulty in extended target tracking.
At present, the conventional algorithms for extended targets are matching algorithms, including matching on gray levels, features and so on. Because the target moves, its size, shape and attitude may change; together with interference from the background and illumination and the limited measurement precision of image processing, matching-based tracking cannot find the absolutely best match position, which causes the tracking point to drift. Since the target has no texture or salient feature information and its attitude changes greatly, traditional gray-level-based tracking easily loses the target when large attitude changes occur; to deal with this, skeleton-extraction-based feature point tracking of extended targets was later adopted. Although that method can handle attitude changes, for an extended target such as an aircraft, when self-occlusion occurs (for example, when the plane containing the fuselage and the wings is perpendicular to the imaging plane), a method that relies only on skeleton extraction cannot obtain the fuselage or wing axis, so the tracking point is lost and practical requirements cannot be met. A new method is therefore urgently needed to meet the engineering requirements of tracking.
Summary of the invention
The technical problem solved by the present invention: to overcome the deficiencies of the prior art, an extended target tracking method based on a geometric proportion relation is provided, which abstracts the essential geometric structure information of the maneuvering extended target and, using known prior information and a fixed geometric proportion relation, achieves stable tracking of the target under large attitude changes and occlusion.
To achieve this object, the technical solution of the present invention is an extended target tracking method based on a geometric proportion relation, comprising the following steps:
Step 1, image preprocessing: apply Gaussian smoothing filtering to the image to be processed to remove the influence of noise and obtain a filtered, smoothed image;
Step 2, segment the smoothed image obtained in step 1 with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image;
Step 3, process the image obtained in step 2 with a skeleton extraction method to extract the skeleton points of the aircraft;
Step 4, from the skeleton points obtained in step 3, select the nose and tail feature points according to a given rule and calculate the length of the fuselage in the image plane;
Step 5, given the known ratio, on the real aircraft, between the distance from the nose to the fuselage-wing intersection and the fuselage length, calculate the intersection of the fuselage and the wing in the image plane from the fuselage length obtained in step 4;
Step 6, correct the tracking point position using the intersection obtained in step 5 together with inter-frame continuity; the corrected point is the tracking point in the current frame.
In step 2, the smoothed image obtained in step 1 is segmented with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image as follows:
Step (21), initialization: set the number of cluster classes c (c = 2 in the present invention), the iteration stopping threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0 and the fuzzy weighting exponent m (m = 2 in the present invention);
Step (22), substitute U^(l) into formula (5) to compute the cluster center matrix V^(l):
v_i = Σ_{k=1}^{n} (u_ik)^m x_k / Σ_{k=1}^{n} (u_ik)^m,  i = 1, …, c    (5)
where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of cluster classes, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at the l-th iteration, x_k is the k-th pixel value of the image to be clustered, and v_i is the i-th cluster center value in the cluster center matrix V^(l) at the l-th iteration;
Step (23), use V^(l) to update U^(l) according to formula (6) and obtain the new fuzzy partition matrix U^(l+1):
u_ik = 1 / Σ_{j=1}^{c} (d_ik / d_jk)^{2/(m-1)},  i = 1, …, c    (6)
where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster center, and d_jk is the Euclidean distance between the k-th element and the j-th cluster center;
Step (24), if ||U^(l) - U^(l+1)|| < ε, stop the iteration; otherwise set l = l + 1 and return to step (22);
Step (25), compute the Euclidean distance between each pixel of the image to be clustered and the cluster center values obtained in steps (21)-(24); the pixels nearest to one of the cluster centers are set to 1 and the others to 0, which gives the segmented binary image.
In step 3, the image obtained in step 2 is processed with a skeleton extraction method to extract the skeleton points of the aircraft. The present invention extracts the skeleton with an iterative thinning algorithm that successively deletes boundary points, as follows:
Suppose target points are labeled 1 and background points are labeled 0. A boundary point is defined as a point that is itself labeled 1 and has at least one point labeled 0 in its 8-connected neighborhood. The algorithm considers the 8-neighborhood centered on the boundary point: denote the center point by p_1 and the 8 points of its neighborhood, taken clockwise around the center, by p_2, p_3, …, p_9, where p_2 is directly above p_1.
Two sub-steps are applied to the boundary points:
(3.1) mark the boundary points that simultaneously satisfy the following conditions:
(3.1.1) 2 ≤ N(p_1) ≤ 6;
(3.1.2) S(p_1) = 1;
(3.1.3) p_2·p_4·p_6 = 0;
(3.1.4) p_4·p_6·p_8 = 0;
where N(p_1) is the number of non-zero neighbors of p_1 and S(p_1) is the number of 0 → 1 transitions of the values of these points taken in the order p_2, p_3, …, p_9, p_2. After all boundary points have been checked, all marked points are removed.
(3.2) mark the boundary points that simultaneously satisfy the following conditions:
(3.2.1) 1 ≤ N(p_1) ≤ 6;
(3.2.2) S(p_1) = 1;
(3.2.3) p_2·p_4·p_8 = 0;
(3.2.4) p_2·p_6·p_8 = 0;
The two sub-steps above form one iteration; the algorithm iterates until no point satisfies the marking conditions any more, and the remaining points form the skeleton.
In step 4, from the skeleton points obtained in step 3, the nose and tail feature points are selected according to the following rule and the fuselage length in the image plane is calculated:
Step (41), identify the skeleton endpoints among the skeleton points obtained in step 3 and calculate their positions;
Step (42), among the skeleton endpoints obtained in step (41), take the endpoints with the minimum and the maximum abscissa as the nose and tail feature points (the nose being on the left);
Step (43), from the nose and tail feature points obtained in step (42), compute the Euclidean distance between the two points in the image plane, which is the fuselage length in the image plane.
In step 5, suppose that in the body coordinate system the nose is marked as point A, the intersection of the fuselage axis and the wing axis is marked as B, and the tail is marked as C. With the ratio of the length AB to the length AC known, the intersection of the fuselage and the wing in the image plane is calculated from the fuselage length obtained in step 4 as follows:
Let the known ratio of the length AB to the length AC be R, let the nose and tail feature point coordinates in the image plane obtained in step (42) be (x_h, y_h) and (x_t, y_t) respectively, and let the required fuselage-wing intersection coordinates be (x_c, y_c). Taking the case where the nose is on the left as an example, the geometric proportion relation between the line segments gives:
(x_c - x_h) / (x_t - x_h) = (y_c - y_h) / (y_t - y_h) = R    (7)
Hence the abscissa and ordinate of the fuselage-wing intersection are given by formulas (8) and (9):
x_c = x_h + R·(x_t - x_h)    (8)
y_c = y_h + R·(y_t - y_h)    (9)
In step 6, the intersection obtained in step 5 is combined with inter-frame continuity to correct the tracking point position; the corrected point is the tracking point in the current frame. Let P_o be the tracking point position obtained in the previous frame and P_n the intersection position obtained in the current frame according to step 5; the tracking point in the current frame is corrected using inter-frame continuity as follows:
P_c = (1 - α)·P_o + α·P_n    (10)
where α is a correction factor (0.95 in the present invention) and P_c is the corrected tracking point position in the current frame.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention obtains the structural feature points of the aircraft target by applying skeleton extraction to the segmented image, which solves the poor robustness of traditional feature point extraction that relies on gray values.
(2) The present invention extracts the nose and tail feature points from the extracted skeleton points according to a fixed rule, which avoids the sensitivity of traditional gray-level corner extraction operators to illumination and attitude.
(3) The present invention provides a maneuvering extended target tracking method that uses a geometric proportion relation. Compared with methods that rely only on the extracted skeleton points to track the aircraft, the present invention incorporates prior knowledge of the aircraft geometry and uses the invariance of this geometric proportion to obtain the tracking point under large attitude changes and self-occlusion. It solves the loss of the tracking point when the aircraft self-occludes (for example, when the plane containing the fuselage and the wings is perpendicular to the imaging plane) and achieves stable tracking of the aircraft under large attitude changes and self-occlusion.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 shows the tracking and localization result of the present invention on the 1st frame of the test sequence;
Fig. 3 shows the tracking and localization result of the present invention on the 86th frame of the test sequence;
Fig. 4 shows the tracking and localization result of the present invention on the 215th frame of the test sequence;
Fig. 5 shows the tracking and localization result of the present invention on the 945th frame of the test sequence;
Fig. 6 shows the curves of the tracking points obtained by the present invention on the test sequence.
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and a concrete operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
The present invention is an extended target tracking method based on a geometric proportion relation; the input is a model aircraft image sequence from a single photoelectric measurement station.
As shown in Fig. 1, the present invention provides an extended target tracking method based on a geometric proportion relation, comprising the following steps:
Step 1, image preprocessing. Because of illumination conditions or defects of the imaging system, the acquired image is affected by noise, which disturbs the subsequent processing. The image to be processed is therefore preprocessed before the subsequent algorithms are applied. This method uses Gaussian smoothing filtering to remove the influence of noise and obtain a filtered, smoothed image.
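As a concrete illustration of this preprocessing step, the sketch below applies Gaussian smoothing with OpenCV; the kernel size and standard deviation are illustrative assumptions, since the description does not fix them.

```python
import cv2

def preprocess(frame_gray, ksize=5, sigma=1.0):
    """Step 1: Gaussian smoothing of the raw frame to suppress noise."""
    # Convolve the frame with a ksize x ksize Gaussian kernel of standard deviation sigma.
    return cv2.GaussianBlur(frame_gray, (ksize, ksize), sigma)
```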
Step 2, segment the smoothed image obtained in step 1 with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image. In essence, image segmentation is a process of classifying pixels according to some attribute. The complex and variable nature of natural images means that for many pixels it is uncertain which cluster they belong to, so it is more reasonable to treat image segmentation from the viewpoint of fuzzy clustering. The fuzzy C-means clustering algorithm (FCM) developed from the hard C-means algorithm (HCM); in essence it is a nonlinear iterative optimization method based on an objective function that uses a weighted similarity measure between every pixel of the image and each cluster center. The task of the FCM algorithm is to select, by iteration, a reasonable fuzzy membership matrix and cluster centers that minimize the objective function, thereby giving the optimal segmentation result.
The fuzzy C-means clustering algorithm performs the partition through iterative optimization of the objective function and expresses the degree to which each pixel of the image belongs to the different classes. Let n be the number of pixels to be clustered, c the number of classes (c = 2 in the present invention) and m the fuzzy weighting exponent (m = 2 in the present invention), which controls the degree of sharing of the membership among the classes. The objective function is the weighted sum of squared distances from each pixel of the image to the c cluster centers and can be written as:
J_m(U, V) = Σ_{i=1}^{c} Σ_{k=1}^{n} (u_ik)^m (d_ik)^2    (11)
where u_ik is the membership degree of the k-th pixel in the i-th class, d_ik is the distance from the k-th pixel to the i-th class, U is the fuzzy partition matrix and V is the set of cluster centers.
The clustering criterion is to find the best pair (U, V) that minimizes J_m(U, V). The minimization of J_m can be carried out by the following iterative algorithm:
(2.1) initialization: set the number of cluster classes c (c = 2 in the present invention), the iteration stopping threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0 and the fuzzy weighting exponent m (m = 2 in the present invention);
(2.2) substitute U^(l) into formula (12) to compute the cluster center matrix V^(l):
v_i = Σ_{k=1}^{n} (u_ik)^m x_k / Σ_{k=1}^{n} (u_ik)^m,  i = 1, …, c    (12)
where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of cluster classes, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at the l-th iteration, x_k is the k-th pixel value of the image to be clustered, and v_i is the i-th cluster center value in the cluster center matrix V^(l) at the l-th iteration;
(2.3) use V^(l) to update U^(l) according to formula (13) and obtain the new fuzzy partition matrix U^(l+1):
u_ik = 1 / Σ_{j=1}^{c} (d_ik / d_jk)^{2/(m-1)},  i = 1, …, c    (13)
where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster center and, similarly, d_jk is the Euclidean distance between the k-th element and the j-th cluster center;
(2.4) if ||U^(l) - U^(l+1)|| < ε, stop the iteration; otherwise set l = l + 1 and return to step (2.2);
(2.5) compute the Euclidean distance between each pixel of the image to be clustered and the cluster center values obtained in steps (2.1)-(2.4); the pixels nearest to one of the cluster centers are set to 1 and the others to 0, which gives the segmented binary image.
Experiments show that fuzzy C-means clustering segments the image better than threshold segmentation, and the difference is especially obvious for natural images. This is because natural images are complex and variable, with complicated gray-level structure, and there is no definite boundary deciding which class each pixel belongs to. Fuzzy C-means clustering expresses the degree to which each pixel belongs to each class in the form of a probability, instead of directly deciding, as the hard C-means (HCM) method does, which class each pixel belongs to; fuzzy C-means clustering therefore reflects the complex and variable nature of natural images better when used for image segmentation.
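A minimal sketch of steps (2.1) to (2.5) is given below, assuming NumPy and using the gray value of each pixel as the one-dimensional clustering feature; the random initialization of U^(0), the iteration cap and the choice of the brighter cluster as the target are illustrative assumptions not fixed by the description.

```python
import numpy as np

def fcm_segment(image, c=2, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """FCM segmentation of a gray-level image into a binary mask (steps 2.1-2.5)."""
    x = image.reshape(-1).astype(np.float64)           # n pixel values to cluster
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0, keepdims=True)                  # initial fuzzy partition U^(0)

    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                  # cluster centers, formula (12)
        d = np.abs(x[None, :] - v[:, None]) + 1e-12    # distances d_ik to each center
        # membership update, formula (13): u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u_new = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
        if np.linalg.norm(u_new - u) < eps:            # stopping rule of step (2.4)
            u = u_new
            break
        u = u_new

    labels = np.argmin(np.abs(x[None, :] - v[:, None]), axis=0)   # nearest center, step (2.5)
    target = int(np.argmax(v))                         # assumption: the brighter cluster is the aircraft
    return (labels == target).astype(np.uint8).reshape(image.shape)
```

With c = 2 the mask separates the aircraft from the background; which of the two clusters is taken as foreground would in practice depend on whether the target is brighter or darker than the sky.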
Step 3, process the image obtained in step 2 with a skeleton extraction method and extract the skeleton points of the aircraft. The skeleton preserves the topology and shape information of the original object, describes the object effectively and is a geometric feature with excellent properties. Skeleton extraction can be realized in several ways, and the medial axis transform (MAT) is one of the more effective techniques; however, it requires computing the distance from every boundary point to every interior point of the region, which is computationally very expensive. The present invention therefore extracts the skeleton with an iterative thinning algorithm that successively deletes boundary points.
Suppose target points are labeled 1 and background points are labeled 0. A boundary point is defined as a point that is itself labeled 1 and has at least one point labeled 0 in its 8-connected neighborhood. The algorithm considers the 8-neighborhood centered on the boundary point: denote the center point by p_1 and the 8 points of its neighborhood, taken clockwise around the center, by p_2, p_3, …, p_9, where p_2 is directly above p_1.
The algorithm applies two sub-steps to the boundary points:
(3.1) mark the boundary points that simultaneously satisfy the following conditions:
(3.1.1) 2 ≤ N(p_1) ≤ 6;
(3.1.2) S(p_1) = 1;
(3.1.3) p_2·p_4·p_6 = 0;
(3.1.4) p_4·p_6·p_8 = 0;
where N(p_1) is the number of non-zero neighbors of p_1 and S(p_1) is the number of 0 → 1 transitions of the values of these points taken in the order p_2, p_3, …, p_9, p_2. After all boundary points have been checked, all marked points are removed.
(3.2) mark the boundary points that simultaneously satisfy the following conditions:
(3.2.1) 1 ≤ N(p_1) ≤ 6;
(3.2.2) S(p_1) = 1;
(3.2.3) p_2·p_4·p_8 = 0;
(3.2.4) p_2·p_6·p_8 = 0;
The two sub-steps above form one iteration; the algorithm iterates until no point satisfies the marking conditions any more, and the remaining points form the skeleton.
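A minimal sketch of this thinning iteration, assuming a 0/1 NumPy array, is shown below; it follows the marking conditions exactly as stated above (including the lower bound 1 ≤ N(p_1) in the second sub-step) and is written for clarity rather than speed.

```python
import numpy as np

def thin_skeleton(binary):
    """Step 3: iteratively delete boundary points until only the skeleton remains."""
    img = np.pad((binary > 0).astype(np.uint8), 1)     # zero padding avoids border checks

    def neighbors(r, c):
        # values of p2..p9, clockwise, starting directly above the center p1
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

    changed = True
    while changed:
        changed = False
        for sub in (1, 2):                             # the two sub-steps of one iteration
            marked = []
            for r, c in zip(*np.nonzero(img)):
                p = neighbors(r, c)
                n = sum(p)                             # N(p1): non-zero neighbors
                s = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))  # S(p1)
                if sub == 1:
                    mark = (2 <= n <= 6 and s == 1 and
                            p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0)
                else:                                  # lower bound 1, as stated in the text
                    mark = (1 <= n <= 6 and s == 1 and
                            p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0)
                if mark:
                    marked.append((r, c))
            for r, c in marked:                        # remove all marked points at once
                img[r, c] = 0
            changed = changed or bool(marked)
    return img[1:-1, 1:-1]
```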
Step 4, from the skeleton points obtained in step 3, select the nose and tail feature points according to the following rule and calculate the fuselage length in the image plane:
(4.1) identify the skeleton endpoints among the skeleton points obtained in step 3 and calculate their positions. The rule is that, for every skeleton point obtained in step 3, if there is only one skeleton point in its eight-neighborhood, the point is judged to be a skeleton endpoint;
(4.2) among the skeleton endpoints obtained in (4.1), take the endpoints with the minimum and the maximum abscissa as the nose and tail feature points (the nose being on the left);
(4.3) from the nose and tail feature points obtained in (4.2), compute the Euclidean distance between the two points in the image plane, which is the fuselage length in the image plane.
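The endpoint rule of (4.1) and the selection and length computation of (4.2) and (4.3) can be sketched as follows, assuming the skeleton is a 0/1 NumPy array; treating the left-most endpoint as the nose follows the convention stated in (4.2).

```python
import numpy as np

def nose_tail_and_length(skeleton):
    """Step 4: find skeleton endpoints, pick nose/tail by abscissa, return the fuselage length."""
    sk = np.pad((skeleton > 0).astype(np.uint8), 1)
    endpoints = []
    for r, c in zip(*np.nonzero(sk)):
        # rule (4.1): an endpoint has exactly one skeleton point in its 8-neighborhood
        if sk[r-1:r+2, c-1:c+2].sum() - sk[r, c] == 1:
            endpoints.append((c - 1, r - 1))           # (x, y) in the unpadded image
    endpoints.sort(key=lambda pt: pt[0])               # order by abscissa
    nose, tail = endpoints[0], endpoints[-1]           # rule (4.2): nose on the left
    length = float(np.hypot(tail[0] - nose[0], tail[1] - nose[1]))  # rule (4.3)
    return nose, tail, length
```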
Step 5, suppose that in the body coordinate system the nose is marked as point A, the intersection of the fuselage axis and the wing axis is marked as B, and the tail is marked as C; the ratio of the length AB to the length AC is known, and the intersection of the fuselage and the wing in the image plane is calculated from the fuselage length obtained in step 4. For a given aircraft model the geometry is fixed, and no matter what attitude change the aircraft undergoes, this specific geometric relation remains constant. The invention exploits exactly this invariance to solve the loss of the tracking point under large attitude changes and occlusion. The method for obtaining a stable tracking point from the geometric proportion relation is as follows:
Let the known ratio of the length AB to the length AC be R, let the nose and tail feature point coordinates in the image plane obtained in step (4.2) be (x_h, y_h) and (x_t, y_t) respectively, and let the required fuselage-wing intersection coordinates be (x_c, y_c). Taking the case where the nose is on the left as an example, the geometric proportion relation between the line segments gives:
(x_c - x_h) / (x_t - x_h) = (y_c - y_h) / (y_t - y_h) = R    (14)
Hence the abscissa and ordinate of the fuselage-wing intersection are given by formulas (15) and (16):
x_c = x_h + R·(x_t - x_h)    (15)
y_c = y_h + R·(y_t - y_h)    (16)
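Formulas (15) and (16) amount to the short helper below; the function and parameter names are illustrative.

```python
def fuselage_wing_intersection(nose, tail, ratio_r):
    """Step 5: locate the fuselage-wing intersection on the nose-to-tail segment.

    ratio_r is R = AB / AC, the known nose-to-intersection over nose-to-tail proportion
    of the real aircraft; nose and tail are the (x, y) feature points in the image plane.
    """
    xh, yh = nose
    xt, yt = tail
    xc = xh + ratio_r * (xt - xh)      # formula (15)
    yc = yh + ratio_r * (yt - yh)      # formula (16)
    return xc, yc
```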
Step 6, the intersection obtained in step 5 is combined with inter-frame continuity to correct the tracking point position; the corrected point is the tracking point in the current frame. Let P_o be the tracking point position obtained in the previous frame and P_n the intersection position obtained in the current frame according to step 5; the tracking point in the current frame is corrected using inter-frame continuity as follows:
P_c = (1 - α)·P_o + α·P_n    (17)
where α is a correction factor (0.95 in the present invention) and P_c is the corrected tracking point position in the current frame.
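Formula (17) can be sketched as the helper below, representing points as (x, y) tuples; chaining it with the earlier sketches gives one pass of the whole method per frame (smooth, segment, thin, pick endpoints, locate the intersection, then blend with the previous tracking point).

```python
def correct_tracking_point(prev_point, new_intersection, alpha=0.95):
    """Step 6: blend the previous tracking point with the new fuselage-wing intersection."""
    xo, yo = prev_point
    xn, yn = new_intersection
    # formula (17): P_c = (1 - alpha) * P_o + alpha * P_n, with alpha = 0.95
    return ((1 - alpha) * xo + alpha * xn,
            (1 - alpha) * yo + alpha * yn)
```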
To verify the robustness of the method, the experiment uses a model aircraft image sequence of 1321 frames; the tracking results obtained on the 1st, 86th, 215th and 945th frames are shown in Figs. 2, 3, 4 and 5 respectively. In the figures, the three gray rectangles are the tracking windows centered respectively on the nose, the fuselage-wing intersection and the tail feature point, and the gray cross at the center of each rectangle marks the finally extracted nose, fuselage-wing intersection and tail feature point. As can be seen from the figures, when the target attitude changes (yaw, pitch and roll, as shown in Figs. 2 and 5), the skeleton extraction method accurately obtains stable nose and tail feature points, and the geometric proportion relation simultaneously gives the fuselage-wing intersection; when the target self-occludes (the plane containing the fuselage and the wings is perpendicular to the imaging plane, as shown in Figs. 3 and 4), as long as the fuselage axis can be extracted, a stable tracking point position can still be obtained from the constant ratio between the nose-to-intersection distance and the fuselage length, so the aircraft is tracked stably under large attitude changes and occlusion.
To verify the stability of the method, the first 400 frames of the above 1321-frame model aircraft sequence are tested. Fig. 6 shows the curves of the inter-frame positioning error of the fuselage-wing intersection along the X and Y directions, where stdx is the standard deviation of the inter-frame error of the tracking point along the X direction and stdy is the standard deviation along the Y direction. As can be seen from the figure, the standard deviations of the tracking point error along the X and Y directions are both less than one pixel; when the target self-occludes (the plane containing the fuselage and the wings is perpendicular to the imaging plane, as shown in Figs. 3 and 4), the fuselage-wing intersection can still be obtained, so stable tracking under occlusion is achieved.
The parts of the present invention not elaborated here belong to technology well known to those skilled in the art.
Those of ordinary skill in the art will appreciate that the above embodiment is only intended to illustrate the present invention and not to limit it; any changes and modifications of the above embodiment that remain within the spirit of the present invention fall within the scope of the claims of the present invention.

Claims (4)

1. An extended target tracking method based on a geometric proportion relation, characterized by comprising the following steps:
Step 1, image preprocessing: apply Gaussian smoothing filtering to the image to be processed to remove the influence of noise and obtain a filtered, smoothed image;
Step 2, segment the smoothed image obtained in step 1 with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image;
Step 3, process the image obtained in step 2 with a skeleton extraction method to extract the skeleton points of the aircraft;
Step 4, from the skeleton points obtained in step 3, select the nose and tail feature points according to a given rule and calculate the length of the fuselage in the image plane;
Step 5, assuming that in the body coordinate system the nose is marked as point A, the intersection of the fuselage axis and the wing axis is marked as B and the tail is marked as C, with the ratio of the length AB to the length AC known, calculate the intersection of the fuselage and the wing in the image plane from the fuselage length obtained in step 4;
Step 6, correct the tracking point position using the fuselage-wing intersection obtained in step 5 together with inter-frame continuity; the final tracking point is the tracking point in the current frame.
2. The extended target tracking method based on a geometric proportion relation according to claim 1, characterized in that selecting the nose and tail feature points and calculating the fuselage length in the image plane in step 4 is implemented as follows:
Step (41), identify the skeleton endpoints among the skeleton points obtained in step 3 and calculate their positions;
Step (42), among the skeleton endpoints obtained in step (41), take the endpoints with the minimum and the maximum abscissa as the nose and tail feature points;
Step (43), from the nose and tail feature points obtained in step (42), compute the Euclidean distance between the two points in the image plane, which is the fuselage length in the image plane.
3. The extended target tracking method based on a geometric proportion relation according to claim 1, characterized in that calculating the intersection of the fuselage and the wing in the image plane in step 5 is implemented as follows:
assuming that in the body coordinate system the nose is marked as point A, the intersection of the fuselage axis and the wing axis is marked as B and the tail is marked as C, the ratio of the length AB to the length AC is known to be R; the required fuselage-wing intersection coordinates are (x_c, y_c), and the geometric proportion relation between the line segments gives:
(x_c - x_h) / (x_t - x_h) = (y_c - y_h) / (y_t - y_h) = R    (1)
so that the abscissa and ordinate of the fuselage-wing intersection are given by formulas (2) and (3):
x_c = x_h + R·(x_t - x_h)    (2)
y_c = y_h + R·(y_t - y_h)    (3)
where (x_h, y_h) and (x_t, y_t) are the coordinates of the nose and tail feature points in the image plane, respectively.
4. The extended target tracking method based on a geometric proportion relation according to claim 1, characterized in that the tracking point in the current frame in step 6 is determined as follows:
P_c = (1 - α)·P_o + α·P_n    (4)
where P_o is the tracking point position obtained in the previous frame, P_n is the position of the fuselage-wing intersection, α is a correction factor (0.95 in the present invention) and P_c is the corrected tracking point position in the current frame.
CN201410114293.6A 2014-03-25 2014-03-25 Extended target tracking method based on geometric proportion relation Active CN103839274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410114293.6A CN103839274B (en) 2014-03-25 2014-03-25 Extended target tracking method based on geometric proportion relation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410114293.6A CN103839274B (en) 2014-03-25 2014-03-25 Extended target tracking method based on geometric proportion relation

Publications (2)

Publication Number Publication Date
CN103839274A true CN103839274A (en) 2014-06-04
CN103839274B CN103839274B (en) 2016-09-21

Family

ID=50802740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410114293.6A Active CN103839274B (en) 2014-03-25 2014-03-25 Extended target tracking method based on geometric proportion relation

Country Status (1)

Country Link
CN (1) CN103839274B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift
CN103247032A (en) * 2013-04-26 2013-08-14 中国科学院光电技术研究所 Weak extended target positioning method based on attitude compensation
CN103632381A (en) * 2013-12-08 2014-03-12 中国科学院光电技术研究所 Extended target tracking method for extracting feature points by using skeleton

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TIMOR KADIR et al.: "An Affine Invariant Salient Region Detector", Lecture Notes in Computer Science 3021 *
简小明: "Aerial target tracking based on SIFT compensation and particle filtering", China Master's Theses Full-text Database, Information Science and Technology *
郑世友 et al.: "Occlusion detection and tracking of moving targets based on affine invariants", Signal Processing *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105044906A (en) * 2015-09-10 2015-11-11 淮海工学院 Image information-based fast extended target imaging correction method
CN105044906B (en) * 2015-09-10 2016-06-08 淮海工学院 A kind of Quick Extended target imaging bearing calibration based on image information
CN106443661A (en) * 2016-09-08 2017-02-22 河南科技大学 Maneuvering extended target tracking method based on unscented Kalman filter
CN106443661B (en) * 2016-09-08 2019-07-19 河南科技大学 A Maneuvering Extended Target Tracking Method Based on Unscented Kalman Filtering
CN107943102A (en) * 2017-12-28 2018-04-20 南京工程学院 A kind of aircraft of view-based access control model servo and its autonomous tracing system
CN108256258A (en) * 2018-01-31 2018-07-06 西安科技大学 Cemented fill mechanical response characteristic Forecasting Methodology based on SEM image
CN108256258B (en) * 2018-01-31 2019-01-25 西安科技大学 Prediction method of mechanical response characteristics of cemented backfill based on SEM images
CN110189375A (en) * 2019-06-26 2019-08-30 中国科学院光电技术研究所 An Image Target Recognition Method Based on Monocular Vision Measurement
CN110189375B (en) * 2019-06-26 2022-08-23 中国科学院光电技术研究所 Image target identification method based on monocular vision measurement
CN110490903A (en) * 2019-08-12 2019-11-22 中国科学院光电技术研究所 Multiple target fast Acquisition and tracking in a kind of Binocular vision photogrammetry
CN110490903B (en) * 2019-08-12 2022-11-11 中国科学院光电技术研究所 Multi-target rapid capturing and tracking method in binocular vision measurement
CN116740180A (en) * 2023-06-30 2023-09-12 西北工业大学 Two-machine near vision relative positioning method based on feature points

Also Published As

Publication number Publication date
CN103839274B (en) 2016-09-21

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant