
CN119224750B - A SAR Image Target Detection and Tracking Method - Google Patents


Info

Publication number
CN119224750B
CN119224750B
Authority
CN
China
Prior art keywords
target
change
tracking
morphological
selected target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411134989.5A
Other languages
Chinese (zh)
Other versions
CN119224750A (en)
Inventor
姬忠远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202411134989.5A
Publication of CN119224750A
Application granted
Publication of CN119224750B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/66 Radar-tracking systems; Analogous systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S 13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/9021 SAR image post-processing techniques
    • G01S 13/9027 Pattern recognition for feature extraction
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S 13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/9021 SAR image post-processing techniques
    • G01S 13/9029 SAR image post-processing techniques specially adapted for moving target detection within a single SAR image or within multiple SAR images taken at the same time
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a SAR image target detection and tracking method in the technical field of target detection and tracking, comprising the following steps: transmitting microwave signals through a synthetic aperture radar and receiving the signals reflected from the ground to generate SAR images; preprocessing the generated SAR images to improve image quality and target detectability; identifying and locating selected targets in the preprocessed SAR images; and continuously acquiring each frame of SAR image as a time series. The invention uses a convolutional neural network to evaluate the morphological changes of the target between adjacent frames and to distinguish abrupt from normal morphological changes. For normal morphological changes it adopts optimal constant-speed tracking to ensure efficiency and accuracy; for abrupt morphological changes it reduces the tracking speed to ensure continuity and stability. The method can adjust its strategy in time, reduce prediction errors, and effectively prevent target loss, providing reliable target control and monitoring in military surveillance and disaster monitoring and avoiding major losses.

Description

SAR image target detection tracking method
Technical Field
The invention relates to the technical field of target detection and tracking, in particular to a SAR image target detection and tracking method.
Background
SAR image target detection and tracking refers to detecting and tracking targets using synthetic aperture radar (Synthetic Aperture Radar, SAR) images. SAR is an active remote sensing technology that obtains images of ground objects by transmitting microwave signals and receiving the reflected signals; it offers all-weather, day-and-night operation and can penetrate cloud and smoke. SAR images therefore have wide application in military, disaster monitoring, environmental protection and other fields. Target detection refers to identifying and locating a selected target, such as a vehicle, vessel or building, in the SAR image.
Target tracking continuously monitors the position of the target over a time series, building on target detection. SAR image target tracking uses the detected target position information in a continuously acquired SAR image sequence to track the target with a specific algorithm. Common tracking methods include Kalman filtering, particle filtering and optical flow; these maintain accurate tracking by predicting the future position of the target and applying corresponding corrections. SAR image target detection and tracking is used to monitor enemy activities in the military field, to track the dynamic evolution of natural disasters in disaster monitoring, and to monitor illegal activities in environmental protection, and thus has important practical value. Effective target detection and tracking enables real-time monitoring and accurate positioning of the target and improves the ability to perceive and respond to dynamic target changes in complex environments.
The prior art has the following defects:
When tracking a selected target identified in a SAR image, the prior art usually tracks it at a constant speed. The target may change in form while moving, and when its form changes abruptly the tracking algorithm cannot adjust to and match the new target features in time, so the prediction error of the target position grows until the algorithm can no longer keep the target locked. Target loss is a fatal problem for a tracking system: particularly in military surveillance and disaster monitoring it may mean losing control of and surveillance over an important target, resulting in irrecoverable losses.
The above information disclosed in the background section is only for enhancement of understanding of the background of the disclosure and therefore it may include information that does not form the prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a SAR image target detection and tracking method. By preprocessing SAR images and acquiring continuous time-series images, it ensures accurate data for dynamic target tracking; a convolutional neural network evaluates the morphological change of the target between adjacent frames and distinguishes abrupt from normal morphological changes. For normal morphological changes, optimal constant-speed tracking is adopted to ensure efficiency and accuracy; for abrupt morphological changes, the tracking speed is reduced to ensure continuity and stability.
To achieve this purpose, the invention provides the following technical scheme. The SAR image target detection and tracking method comprises the following steps:
transmitting a microwave signal through a synthetic aperture radar and receiving a signal reflected back from the ground to generate an SAR image;
Preprocessing the generated SAR image, and improving the image quality and the detectability of the target;
Identifying and locating a selected target from the preprocessed SAR image;
Continuously acquiring each frame of SAR image according to the time sequence to form time sequence data, and providing accurate data support for dynamic tracking of a target through the continuous time sequence image;
Extracting the identified and positioned selected targets in the adjacent two frames of SAR images, and evaluating the morphological characteristic change condition of the selected targets in the two frames by comparing the image data of the selected targets between the adjacent frames;
Based on the result of the morphological feature change evaluation, classifying the morphological change of the selected target into a sharp morphological change and a normal morphological change;
And aiming at normal morphological changes, setting an optimal constant speed to track a selected target based on historical data, and aiming at abrupt morphological changes, adjusting the actual tracking speed of tracking the selected target, reducing the actual tracking speed and ensuring the continuity and stability of target tracking.
Preferably, the specific steps of identifying and locating the selected target from the SAR image are as follows:
Image segmentation is carried out on the preprocessed SAR image;
on the basis of image segmentation, extracting the characteristics of each segmented region;
after a large number of features are extracted, performing feature selection to reduce feature dimensions;
Performing target detection by using the selected characteristics;
after the target is detected, target positioning is performed.
Preferably, contour curvature information and motion blur degree information of a selected target between adjacent frames are obtained, contour curvature change indexes and motion blur degree change indexes are respectively generated after the contour curvature information and the motion blur degree information of the selected target are analyzed, the analyzed contour curvature change indexes and motion blur degree change indexes are input into a convolutional neural network trained in advance, morphological feature change coefficients are generated through the convolutional neural network, and morphological feature change conditions of the selected target in the two frames are evaluated through the morphological feature change coefficients.
Preferably, the step of obtaining profile curvature information of a selected object between adjacent frames, analyzing the profile curvature information of the selected object, and generating a profile curvature change index is as follows:
performing target detection on each frame of SAR image, extracting the contour edge of the selected target by using an edge detection algorithm, and serializing edge points of the contour edge into a contour curve representation;
For the contour edge extracted from each frame, calculating the curvature of each edge point, wherein the curvature in the discrete edge point sequence is calculated as:
κ_i = 4·A_i / (d_{i-1,i} · d_{i,i+1} · d_{i+1,i-1})
where κ_i is the curvature of the i-th edge point, A_i is the area of the triangle formed by the (i-1)-th, i-th and (i+1)-th edge points, d_{i-1,i} is the distance between the (i-1)-th and i-th edge points, d_{i,i+1} is the distance between the i-th and (i+1)-th edge points, and d_{i+1,i-1} is the distance between the (i+1)-th and (i-1)-th edge points;
the alignment of corresponding contour points in the two frames of images is realized by using the minimized distance between the edge points;
Calculating the curvature difference of contour curve edge points in adjacent frames as Δκ_i = |κ_i^(t+1) − κ_i^(t)|, where Δκ_i is the curvature difference of the i-th edge point between adjacent frames, κ_i^(t+1) is the curvature of the i-th edge point in frame t+1, and κ_i^(t) is its curvature in frame t;
accumulating the curvature differences of all edge points to obtain the total curvature variation of the whole target contour: Δκ_total = Σ_{i=1}^{N} w_i · Δκ_i, where Δκ_total is the total curvature variation of the whole contour, N is the total number of edge points, and w_i is the weight of the i-th edge point;
calculating the profile curvature change index from the total curvature variation: Cont_curva = Δκ_total / max(Δκ_total), where Cont_curva denotes the profile curvature change index and max(Δκ_total) is a predefined maximum total curvature change used for normalization.
Preferably, the step of obtaining motion blur degree information of a selected target between adjacent frames, analyzing the motion blur degree information of the selected target, and generating a motion blur degree change index is as follows:
extracting motion blur information of the target from the SAR images of adjacent frames by analyzing the gradient change of the selected target area, wherein the adjacent-frame SAR images are denoted I_t and I_t+1 (the t-th and (t+1)-th frame SAR images, respectively) and the gradient images of the corresponding selected target areas are G_t = ∇I_t and G_t+1 = ∇I_t+1, where ∇ is the gradient operator;
in the selected target area, calculating a motion blur vector for each pixel point from the gradient image, wherein v_t(x, y) and v_t+1(x, y) are the motion blur vectors of each pixel point in the selected target area in frame t and frame t+1 respectively, and (x, y) are the coordinates of the pixel point;
smoothing the motion blur vectors to obtain blur degree matrices M_t(x, y) = Σ_{x′,y′} v_t(x′, y′) · K(x − x′, y − y′) and M_t+1(x, y) = Σ_{x′,y′} v_t+1(x′, y′) · K(x − x′, y − y′), where K is a kernel function for smoothing the motion blur vectors, M_t and M_t+1 are the blur degree matrices of the selected target region in frames t and t+1 respectively, and x′ and y′ traverse all pixels in the selected target region;
Calculating a blur degree change matrix, representing the change in blur of each pixel in the selected target area between adjacent frames: ΔM(x, y) = |M_t+1(x, y) − M_t(x, y)|, where ΔM(x, y) is the blur degree change matrix;
integrating the blur change over the whole selected target area to obtain Motion_blur = Σ_{(x,y)∈R} ΔM(x, y), where Motion_blur is the blur degree change index, a comprehensive measure of the blur change over the whole target area, and R is the selected target area, i.e. the pixel region of the tracked target in the SAR image.
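The blur-change pipeline above can be sketched as follows. This is a minimal illustration only: the patent does not reproduce the formula for the per-pixel motion blur vector, so here it is assumed to be the gradient magnitude, and the smoothing kernel K is assumed to be a simple box filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def motion_blur_change_index(I_t, I_t1, region_mask, kernel_size=3):
    """Blur degree change index between two frames over a target region.

    Assumptions (not specified in the text): the motion blur vector is
    the gradient magnitude, and K is a box kernel of size kernel_size.
    """
    # Gradient images G_t = grad(I_t), G_{t+1} = grad(I_{t+1})
    gy_t, gx_t = np.gradient(I_t.astype(float))
    gy_1, gx_1 = np.gradient(I_t1.astype(float))
    v_t = np.hypot(gx_t, gy_t)    # assumed per-pixel blur measure
    v_t1 = np.hypot(gx_1, gy_1)
    # Blur degree matrices M_t, M_{t+1}: smooth v with the kernel K
    M_t = uniform_filter(v_t, size=kernel_size)
    M_t1 = uniform_filter(v_t1, size=kernel_size)
    # Change matrix dM(x, y) = |M_{t+1} - M_t| and aggregate over region R
    dM = np.abs(M_t1 - M_t)
    return float(dM[region_mask].sum())
```

Identical frames yield an index of zero; any intensity change inside the region raises the index, matching the role Motion_blur plays in the claim.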
Preferably, the morphological feature change coefficient generated after the morphological feature change evaluation of the selected target is compared with a preset morphological feature change coefficient reference threshold value for analysis, and the morphological change of the selected target is dynamically divided, wherein the specific dividing steps are as follows:
If the morphological feature change coefficient is larger than the morphological feature change coefficient reference threshold, dividing the morphological change of the selected target into abrupt morphological changes;
If the morphological feature change coefficient is smaller than or equal to the morphological feature change coefficient reference threshold, the morphological change of the selected target is divided into normal morphological changes.
Preferably, for the abrupt morphological change, the actual tracking speed of the selected target tracking is adjusted, and the specific steps are as follows:
When an abrupt morphological change is detected, adjustment of the actual tracking speed begins. First the speed adjustment Δv is calculated from the morphological feature change coefficient Morp_fl as Δv = γ · (Morp_fl − τ_thre) · v_opt, where Morp_fl is the morphological feature change coefficient, τ_thre is the morphological feature change coefficient reference threshold, γ is a speed adjustment coefficient controlling the adjustment amplitude, and v_opt is the optimal constant speed;
according to the calculated speed adjustment Δv, the actual tracking speed v_actual is adjusted as v_actual = v_opt − Δv = v_opt − γ · (Morp_fl − τ_thre) · v_opt;
during tracking, the morphological feature change coefficient is monitored dynamically and the actual tracking speed is adjusted according to real-time feedback to ensure tracking continuity and stability, expressed as v_actual = v_opt · (1 − γ · max(0, Morp_fl − τ_thre));
the dynamically adjusted actual tracking speed is smoothed as v_smooth = λ · v_actual + (1 − λ) · v_prev, where v_smooth is the smoothed actual tracking speed, λ is a smoothing coefficient with 0 < λ < 1, and v_prev is the actual tracking speed in the previous SAR image frame;
The smoothed actual tracking speed v smooth is applied to the target tracking process between the current frame and the next frame, so that the tracking continuity is ensured.
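The speed adaptation above can be sketched in a few lines. Symbol names follow the text (Morp_fl, τ_thre, γ, v_opt, v_prev, λ); the default value for λ is an illustrative assumption, not a value given by the patent.

```python
def adjusted_tracking_speed(morp_fl, tau_thre, v_opt, gamma, v_prev, lam=0.6):
    """Smoothed actual tracking speed for one frame.

    morp_fl: morphological feature change coefficient (Morp_fl)
    tau_thre: reference threshold (tau_thre)
    v_opt: optimal constant speed; gamma: adjustment coefficient
    v_prev: previous frame's actual tracking speed; lam: smoothing coefficient
    """
    # v_actual = v_opt * (1 - gamma * max(0, Morp_fl - tau_thre))
    v_actual = v_opt * (1.0 - gamma * max(0.0, morp_fl - tau_thre))
    # v_smooth = lambda * v_actual + (1 - lambda) * v_prev, 0 < lambda < 1
    return lam * v_actual + (1.0 - lam) * v_prev
```

When Morp_fl stays at or below the threshold, the max(0, ·) term vanishes and the target is tracked at the optimal constant speed v_opt, exactly as in the normal-change branch.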
In the technical scheme, the invention has the technical effects and advantages that:
By preprocessing SAR images and acquiring continuous time-series images, the invention ensures accurate data for dynamic target tracking. A convolutional neural network evaluates the morphological feature change of the selected target between adjacent frames and can effectively distinguish abrupt from normal morphological changes. Following the dynamic division mechanism, normal morphological changes are tracked at the optimal constant speed, ensuring tracking efficiency and accuracy, while for abrupt morphological changes the actual tracking speed is reduced, ensuring tracking continuity and stability. The method can therefore adjust the tracking strategy in time when the target changes abruptly, reduce the target position prediction error, and effectively prevent target loss, providing reliable target control and monitoring in key applications such as military surveillance and disaster monitoring and avoiding irrecoverable losses.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below. It is apparent that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained from these drawings by those skilled in the art.
Fig. 1 is a flow chart of the SAR image target detection and tracking method.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein, but rather, the example embodiments are provided so that this disclosure will be more thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
The invention provides a SAR image target detection and tracking method shown in figure 1, which comprises the following steps:
Transmitting a microwave signal through a Synthetic Aperture Radar (SAR), and receiving a signal reflected back from the ground to generate an SAR image;
the SAR system synthesizes a long aperture by moving its antenna, enabling acquisition of high-resolution images. The process involves coherent processing of echo signals received at different positions and finally generates a complete SAR image, providing basic data for subsequent target detection and tracking.
Preprocessing the generated SAR image, and improving the image quality and the detectability of the target;
The preprocessing step comprises denoising, filtering, image enhancement and other technologies. Common methods include gaussian filtering, mean filtering, adaptive filtering, etc., which aim to reduce speckle noise and other interference in the image, and improve the contrast between the target and the background, so that the target is more easily identified and located in subsequent steps.
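As an illustration of the filtering step, a mean filter can be sketched as below. This is only one of the options the text lists (Gaussian, mean, adaptive), implemented here with edge padding as an assumption.

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter for speckle reduction (illustrative sketch;
    edge-replication padding is an assumption, not specified in the text)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    # sum the k*k shifted views, then divide once
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

Averaging suppresses the multiplicative speckle noise typical of SAR amplitude images at the cost of some edge sharpness, which is why adaptive filters are often preferred in practice.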
Identifying and locating a selected target from the preprocessed SAR image;
the specific steps for identifying and locating a selected target from the SAR image are as follows:
Image segmentation is carried out on the preprocessed SAR image;
the purpose of image segmentation is to break up the image into different regions in order to identify the target. Common image segmentation methods include threshold segmentation, region growing, watershed algorithms, and the like. The threshold segmentation divides the image into a target part and a background part by setting a gray value threshold, the region growing method starts from a seed point and gradually expands the region according to a similarity criterion, and the watershed algorithm segments the image into a plurality of connected regions by utilizing gradient information of the image.
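Of the segmentation options listed, threshold segmentation is the simplest to sketch. The following pure-NumPy implementation uses Otsu's method to pick the gray-value threshold automatically; treating pixels at or above the threshold as target is an assumption for bright targets on a dark background.

```python
import numpy as np

def otsu_threshold(img):
    """Gray-value threshold chosen by Otsu's method on an 8-bit image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:             # maximize between-class variance
            best_var, best_t = var_between, t
    return best_t

# toy 8-bit "SAR" patch: dark background (10) and a bright target (200)
img = np.array([[10, 10, 200], [10, 200, 200], [10, 10, 200]], dtype=np.uint8)
t = otsu_threshold(img)
mask = img >= t   # assumed convention: bright pixels are the target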
On the basis of image segmentation, extracting the characteristics of each segmented region;
feature extraction is an important step in identifying and distinguishing objects, and generally includes shape features, texture features, gray scale features, and the like. The shape features can comprise the area, perimeter, contour and the like of the target, the texture features can be extracted by a gray level co-occurrence matrix (GLCM) or wavelet transformation and the like, and the gray level features directly utilize gray level value statistical information such as mean value, variance and the like of the target area.
After a large number of features are extracted, performing feature selection to reduce feature dimensions;
Feature selection reduces the feature dimension to improve the efficiency and accuracy of the algorithm. Methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and feature selection algorithms such as Recursive Feature Elimination (RFE). PCA extracts the most representative features by dimensionality reduction, LDA selects the most discriminative features by maximizing inter-class variance and minimizing intra-class variance, and RFE selects an optimal feature subset by recursively removing the least important features.
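The PCA option can be sketched directly with an SVD on mean-centred features (a generic sketch, not the patent's specific configuration):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project an (n_samples, n_features) matrix onto its top
    principal components via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T   # reduced feature matrix
```

For features that lie close to a low-dimensional subspace, the discarded components carry almost no variance, which is exactly the property the dimension-reduction step relies on.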
Performing target detection by using the selected characteristics;
The task of target detection is to distinguish targets from the background in the image; common methods are Support Vector Machines (SVM), decision trees, random forests and deep learning models. The SVM separates target and background by finding the optimal separating hyperplane, decision trees and random forests improve detection precision by building multiple classification trees, and deep learning models (such as the convolutional neural network, CNN) automatically learn image features through a multi-layer network structure, achieving efficient target detection.
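The SVM option can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The feature vectors and labels below are toy stand-ins for the region features of the previous steps, not data from the patent.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Toy linear SVM: sub-gradient descent on hinge loss + L2 penalty.
    y must be in {-1, +1}. Returns weight vector w and bias b."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                               # only regularization shrink
                w -= lr * lam * w
    return w, b

# toy region features: "target" regions (+1) vs background regions (-1)
X = np.array([[1.0, 1.0], [0.8, 1.2], [-1.0, -1.0], [-1.2, -0.8]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

A kernel SVM, random forest or CNN would slot into the same place: features in, target/background label out.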
After the target is detected, target positioning is carried out;
Object localization is the determination of the specific position of an object in an image, typically represented by a bounding box (bounding box) or mask (mask). The positioning method can be simple geometric calculation (such as calculating a minimum bounding rectangle) or predicting the accurate position of the target through a regression model. The results of the target location will be used for subsequent tracking and monitoring.
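The simple geometric localisation mentioned above (a minimum bounding rectangle around a detected mask) can be sketched as:

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a binary
    target mask; returns None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True          # a 2 x 4 detected target region
box = bounding_box(mask)       # (x_min, y_min, x_max, y_max)
```

A regression model would instead predict these four coordinates directly, but the downstream tracking steps consume the same box representation either way.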
Continuously acquiring each frame of SAR image according to the time sequence to form time sequence data, and providing accurate data support for dynamic tracking of a target through the continuous time sequence image;
each frame of image is obtained through the same generation and preprocessing process, the consistency of each frame of image is ensured, and the continuous time sequence images provide necessary data support for dynamic tracking of the target, so that the system can monitor the movement and change conditions of the target at different time points.
Extracting the identified and positioned selected targets in the adjacent two frames of SAR images, and evaluating the morphological characteristic change condition of the selected targets in the two frames by comparing the image data of the selected targets between the adjacent frames;
Contour curvature information and motion blur degree information of a selected target between adjacent frames are obtained, contour curvature change indexes and motion blur degree change indexes are respectively generated after the contour curvature information and the motion blur degree information of the selected target are analyzed, the analyzed contour curvature change indexes and motion blur degree change indexes are input into a convolutional neural network trained in advance, morphological feature change coefficients are generated through the convolutional neural network, and morphological feature change conditions of the selected target in the two frames are evaluated through the morphological feature change coefficients.
A large change in the curvature of the contours of the selected object between adjacent frames indicates a rapid change in the morphological characteristics of the selected object. The profile curvature reflects the degree of curvature of the edge of the object, with large variations generally meaning that the shape of the object changes significantly in a short period of time, such as a portion of the object exhibiting a pronounced relief, curvature, or distortion. Such changes may be due to changes in morphology caused by rotation, deformation, partial occlusion or rapid movement of the target. Under the condition of large change of the curvature of the outline, the contrast between the edge characteristic of the target and the background is obviously changed to influence the visual characteristic of the target, so that the traditional constant speed tracking algorithm is difficult to accurately predict the position of the target and keep continuous tracking. Therefore, the large change of the curvature of the contour is an important index for the rapid change of the morphological characteristics of the target, and special processing is needed in a tracking algorithm to improve the robustness and the accuracy of target tracking.
The method comprises the steps of obtaining contour curvature information of a selected target between adjacent frames, analyzing the contour curvature information of the selected target, and generating a contour curvature change index, wherein the steps are as follows:
performing target detection on each frame of SAR image, extracting the contour edge of the selected target by using an edge detection algorithm, and serializing edge points of the contour edge into a contour curve representation;
For the contour edge extracted from each frame, calculating the curvature of each edge point, wherein the curvature in the discrete edge point sequence is calculated as:
κ_i = 4·A_i / (d_{i-1,i} · d_{i,i+1} · d_{i+1,i-1})
where κ_i is the curvature of the i-th edge point, A_i is the area of the triangle formed by the (i-1)-th, i-th and (i+1)-th edge points, d_{i-1,i} is the distance between the (i-1)-th and i-th edge points, d_{i,i+1} is the distance between the i-th and (i+1)-th edge points, and d_{i+1,i-1} is the distance between the (i+1)-th and (i-1)-th edge points;
the alignment of corresponding contour points in the two frames of images is realized by minimizing the distance between edge points, the main purpose being to enable curvature comparison between adjacent frames;
The alignment is achieved by matching the contour points of each frame so that the sum of Euclidean distances between matched points in the adjacent frames is minimized. This typically involves finding the closest point pairs between the previous and subsequent frames to ensure that the matched points represent the same physical location or feature. In this way, corresponding pairs of contour points can be found in the two frames.
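A minimal sketch of the nearest-neighbor matching described above, under the simplifying assumption that a brute-force distance matrix is acceptable for contour-sized point sets (the function name is illustrative):

```python
import numpy as np

def match_contour_points(pts_t, pts_t1):
    """For every contour point of frame t, return the index of (and the
    distance to) the nearest contour point of frame t+1 under the
    Euclidean metric, giving corresponding point pairs."""
    pts_t = np.asarray(pts_t, dtype=float)
    pts_t1 = np.asarray(pts_t1, dtype=float)
    # (N, M) matrix of pairwise distances between the two point sets
    dist = np.linalg.norm(pts_t[:, None, :] - pts_t1[None, :, :], axis=2)
    idx = dist.argmin(axis=1)            # nearest index in frame t+1
    return idx, dist[np.arange(len(pts_t)), idx]
```

For large contours a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the O(N·M) distance matrix, but the result is the same point correspondence.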
Calculating the curvature difference of contour curve edge points in adjacent frames, wherein the calculation expression of the curvature difference is Δκ_i = |κ_i^(t+1) − κ_i^(t)|, where Δκ_i is the curvature difference of the ith edge point between the adjacent frames, κ_i^(t+1) is the curvature of the ith edge point in the (t+1)th frame, and κ_i^(t) is the curvature of the ith edge point in the tth frame;
accumulating the curvature differences of all edge points and calculating the total curvature variation of the whole target contour, wherein the calculated expression is: Δκ_total = Σ_{i=1}^{N} w_i · Δκ_i, where Δκ_total is the total curvature variation of the whole contour, N is the total number of edge points, and w_i is the weight of the ith edge point (the weight may be assigned according to the importance or reliability of the edge point);
calculating the profile curvature change index from the total curvature variation, the calculated expression being: Cont_curva = Δκ_total / max(Δκ_total), where Cont_curva denotes the profile curvature change index and max(Δκ_total) is the predefined maximum total curvature change used for normalization;
The predefined maximum total curvature change is a maximum possible total curvature change value set in advance for normalizing the calculation result. This predefined value is typically determined from historical data or empirical values in a particular application scenario, representing the amount of curvature change of the target profile in the most extreme case. This predefined value is used to normalize the actual calculated total curvature change such that the range of curvature change indices is defined between [0,1], thereby facilitating comparison and analysis between different images.
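The curvature-difference, accumulation, and normalization steps can be sketched together as follows; the uniform default weights, the default normalizer, and the clipping to [0, 1] are assumptions for illustration:

```python
import numpy as np

def contour_curvature_index(kappa_t, kappa_t1, weights=None, dk_max=1.0):
    """Cont_curva = (sum_i w_i * |kappa_i(t+1) - kappa_i(t)|) / dk_max,
    clipped to [0, 1]; dk_max plays the role of the predefined maximum
    total curvature change used for normalization."""
    dk = np.abs(np.asarray(kappa_t1, float) - np.asarray(kappa_t, float))
    w = np.ones_like(dk) if weights is None else np.asarray(weights, float)
    # weighted total curvature variation over all N edge points
    dk_total = float(np.sum(w * dk))
    return min(dk_total / dk_max, 1.0)
```

This assumes the two curvature arrays are already aligned point-for-point by the matching step described earlier.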
After the contour curvature information of the selected target between adjacent frames is obtained and analyzed, a larger value of the resulting contour curvature change index indicates that the contour curvature change of the selected target between adjacent frames is more significant, and hence that its morphological features are changing more markedly; conversely, a smaller value of the contour curvature change index indicates that the contour curvature change between adjacent frames is not significant and the target's morphological features are changing more slowly.
The degree of motion blur of the selected object between adjacent frames increases significantly, indicating that the morphological characteristics of the selected object are changing rapidly. Such variations are typically caused by the rapid movement of the object within the camera exposure time, such that the object presents a blurred trajectory in the image. The significant increase in motion blur reflects a rapid morphological transformation of the object in a short time, which can cause significant changes in the edge, shape and detail characteristics of the object between different frames. Motion blur, which is a sharp morphological change, increases the difficulty of identifying and matching target features for tracking algorithms, because traditional static features may no longer be suitable, and more dynamic and robust feature extraction and matching methods must be relied upon. It follows that a significant increase in the degree of motion blur is an important indicator of a rapid change in the morphology of the target.
The method comprises the steps of obtaining motion blur degree information of a selected target between adjacent frames, analyzing the motion blur degree information of the selected target, and generating a motion blur degree change index, wherein the motion blur degree change index comprises the following steps:
extracting motion blur information of the target from SAR images of adjacent frames by analyzing gradient changes of the selected target areas, wherein the SAR images of adjacent frames are denoted I_t and I_{t+1}, representing the tth and (t+1)th frame SAR images respectively, and the gradient images of the selected target areas corresponding to I_t and I_{t+1} are G_t = |∇I_t| and G_{t+1} = |∇I_{t+1}|, where ∇ is the gradient operator;
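A sketch of the gradient-image step, assuming the gradient operator is realized with central differences via `np.gradient`; the exact discretization of ∇ is not specified in the text:

```python
import numpy as np

def gradient_magnitude(img):
    """G = |grad I|: per-pixel gradient magnitude of one SAR frame,
    with central differences (one-sided at the borders) standing in
    for the unspecified gradient operator."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gx, gy)
```

On a linear intensity ramp the magnitude is constant, which makes the behavior easy to verify.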
in the selected target area, the motion blur vector of each pixel point is calculated, where v_t(x, y) and v_{t+1}(x, y) are the motion blur vectors of each pixel point in the selected target area in the tth and (t+1)th frames respectively, and (x, y) are the coordinates of the pixel point;
the motion blur vector quantifies the degree of blur for each pixel point.
constructing the blur degree matrix of the selected target region from the motion blur vectors: M_t(x, y) = Σ_{x′,y′} v_t(x′, y′) · K(x−x′, y−y′) and M_{t+1}(x, y) = Σ_{x′,y′} v_{t+1}(x′, y′) · K(x−x′, y−y′), where K is a kernel function for smoothing the motion blur vectors, M_t and M_{t+1} are the blur degree matrices of the tth and (t+1)th frames of the selected target region, and x′ and y′ are variables traversing all pixels in the selected target region;
the blur degree matrix quantifies the blur degree of each pixel point in the selected target area.
Calculating the blur degree change matrix, which represents the blur degree change of each pixel point in the selected target area between adjacent frames, wherein the calculated expression is ΔM(x, y) = |M_{t+1}(x, y) − M_t(x, y)|, where ΔM(x, y) is the blur degree change matrix;
calculating the blur degree change index by integrating the blur degree change over the whole selected target area: Motion_blur = Σ_{(x,y)∈R} ΔM(x, y), where Motion_blur is the blur degree change index, representing a comprehensive measure of the blur degree change of the whole target area, and R is the selected target area, i.e. the pixel region of the selected target tracked in the SAR image.
After the motion blur degree information of the selected target between adjacent frames is obtained and analyzed, a larger value of the resulting motion blur degree change index indicates a faster change of the target's morphological features between adjacent frames, while a smaller value indicates a slower change: a high motion blur degree change index reflects rapid change of the target's position and shape in the image, whereas a low index indicates gradual, stable change of the target.
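The blur-degree matrix, change matrix, and blur degree change index can be sketched end to end as follows; the naive convolution loop and the treatment of the whole input window as the region R are illustrative simplifications:

```python
import numpy as np

def blur_change_index(v_t, v_t1, kernel):
    """M_t = convolution of blur vectors v_t with kernel K (likewise
    M_{t+1}); DeltaM = |M_{t+1} - M_t|; Motion_blur = sum of DeltaM
    over the target region (here: the whole input window)."""
    def conv2d(v, k):
        kh, kw = k.shape
        ph, pw = kh // 2, kw // 2
        vp = np.pad(v, ((ph, ph), (pw, pw)))   # zero-pad to keep shape
        kf = k[::-1, ::-1]                     # flip: K(x-x', y-y')
        out = np.empty_like(v, dtype=float)
        for i in range(v.shape[0]):
            for j in range(v.shape[1]):
                out[i, j] = np.sum(vp[i:i + kh, j:j + kw] * kf)
        return out

    m_t = conv2d(np.asarray(v_t, float), kernel)
    m_t1 = conv2d(np.asarray(v_t1, float), kernel)
    delta_m = np.abs(m_t1 - m_t)               # DeltaM(x, y)
    return float(delta_m.sum()), delta_m       # (Motion_blur, DeltaM)
```

In practice the loop would be replaced by `scipy.signal.convolve2d`, but the explicit form mirrors the summation in the text.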
The analyzed contour curvature change index Cont_curva and blur degree change index Motion_blur are input into a pre-trained convolutional neural network, which comprehensively analyzes them to generate the morphological feature change coefficient Morp_fl; the specific structure of the convolutional neural network is not particularly limited;
The calculation formula for the morphological feature change coefficient Morp_fl is as follows:
Morp_fl = μ1 · Cont_curva + μ2 · Motion_blur
where μ1 and μ2 are the preset proportional coefficients of the contour curvature change index Cont_curva and the blur degree change index Motion_blur respectively, and μ1, μ2 > 0.
From the morphological feature change coefficient it can be seen that the larger the contour curvature change index generated after analyzing the contour curvature information of the selected target, and the larger the motion blur degree change index generated after analyzing its motion blur degree information, the faster the morphological features of the selected target change; conversely, the slower they change.
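Consistent with the "preset proportionality coefficients" μ1 and μ2, the combination can be read as a weighted linear sum; under that assumption, the coefficient and the threshold-based division of the next step can be sketched as follows (all numeric values are placeholders, not values from the patent):

```python
def morphological_change_coefficient(cont_curva, motion_blur,
                                     mu1=0.6, mu2=0.4):
    """Morp_fl = mu1 * Cont_curva + mu2 * Motion_blur, with mu1, mu2 > 0.
    The weight values 0.6 / 0.4 are illustrative placeholders only."""
    return mu1 * cont_curva + mu2 * motion_blur


def classify_morphological_change(morp_fl, threshold=0.5):
    """'sharp' if Morp_fl exceeds the reference threshold, else 'normal';
    the threshold value is likewise a placeholder."""
    return "sharp" if morp_fl > threshold else "normal"
```

Both inputs are assumed to be normalized indices, so the coefficient stays comparable across frames.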
Based on the result of the morphological feature change evaluation, classifying the morphological change of the selected target into a sharp morphological change and a normal morphological change;
Comparing and analyzing the morphological feature change coefficient generated after the morphological feature change evaluation of the selected target with a preset morphological feature change coefficient reference threshold value, and dynamically dividing the morphological change of the selected target, wherein the specific dividing steps are as follows:
If the morphological feature change coefficient is larger than the morphological feature change coefficient reference threshold, dividing the morphological change of the selected target into abrupt morphological changes;
If the morphological feature change coefficient is smaller than or equal to the morphological feature change coefficient reference threshold, the morphological change of the selected target is divided into normal morphological changes.
For normal morphological changes, an optimal constant speed is set based on historical data to track the selected target; for abrupt morphological changes, the actual tracking speed of the selected target is adjusted and reduced to ensure the continuity and stability of target tracking;
The optimal constant speed refers to the most suitable tracking speed set based on the historical tracking data under the condition of normal morphological change of the selected target, so as to ensure the accuracy and stability of tracking. The process of setting based on the historical data comprises analyzing the characteristics of motion track, speed, acceleration and the like of the target in the past period, and determining a speed value capable of effectively balancing tracking precision and calculation efficiency through a statistical method or a machine learning model. The speed value can keep accurate tracking when the target changes are not severe, and meanwhile excessive calculation and resource waste are avoided.
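One plausible reading of "setting an optimal constant speed based on historical data" is a robust statistic over past frame-to-frame speeds; the median used below is an illustrative choice, not the patent's prescribed statistical method or learning model:

```python
import numpy as np

def optimal_constant_speed(positions, dt=1.0):
    """Estimate a tracking speed from a historical track: the median
    frame-to-frame speed, which is robust to occasional outliers.
    positions: sequence of (x, y) target centers, one per frame."""
    pos = np.asarray(positions, dtype=float)
    # per-frame displacement magnitudes divided by the frame interval
    speeds = np.linalg.norm(np.diff(pos, axis=0), axis=1) / dt
    return float(np.median(speeds))
```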
For sharp morphological changes, the actual tracking speed of the selected target tracking is adjusted, and the specific steps are as follows:
When a sharp morphological change is detected, adjustment of the actual tracking speed is started. First, a speed adjustment amount Δv is calculated from the morphological feature change coefficient Morp_fl, with the expression Δv = γ·(Morp_fl − τ_thre)·v_opt, where Morp_fl denotes the morphological feature change coefficient, τ_thre denotes the morphological feature change coefficient reference threshold, γ denotes the speed adjustment coefficient controlling the magnitude of the speed adjustment, and v_opt denotes the optimal constant speed;
according to the calculated speed adjustment amount Δv, the actual tracking speed v_actual is adjusted, with the adjusted expression v_actual = v_opt − Δv = v_opt − γ·(Morp_fl − τ_thre)·v_opt;
during tracking, the change of the morphological feature change coefficient is dynamically monitored and the actual tracking speed is dynamically adjusted according to real-time feedback to ensure tracking continuity and stability; the dynamically adjusted actual tracking speed is expressed as v_actual = v_opt·(1 − γ·max(0, Morp_fl − τ_thre));
the dynamically adjusted actual tracking speed is further adjusted using a smoothing function (to avoid instability caused by drastic speed changes), with the expression v_smooth = λ·v_actual + (1−λ)·v_prev, where v_smooth denotes the smoothed actual tracking speed, λ denotes the smoothing coefficient with 0 < λ < 1, and v_prev denotes the actual tracking speed of the previous SAR image frame;
The smoothed actual tracking speed v smooth is applied to the target tracking process between the current frame and the next frame, so that the tracking continuity is ensured.
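The speed-adjustment and smoothing steps above can be sketched in one function as follows; the parameter values in the test are placeholders, not values from the patent:

```python
def smoothed_tracking_speed(morp_fl, tau_thre, v_opt, gamma, v_prev, lam):
    """v_actual = v_opt * (1 - gamma * max(0, Morp_fl - tau_thre)):
    the speed is reduced only when the coefficient exceeds the
    threshold. Then exponential smoothing against the previous frame:
    v_smooth = lam * v_actual + (1 - lam) * v_prev, 0 < lam < 1."""
    v_actual = v_opt * (1.0 - gamma * max(0.0, morp_fl - tau_thre))
    return lam * v_actual + (1.0 - lam) * v_prev
```

Note that when Morp_fl ≤ τ_thre the `max(0, ·)` term vanishes and v_actual reduces to the optimal constant speed v_opt, matching the normal-change branch.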
By preprocessing SAR images and acquiring continuous time-series images, the invention ensures the accuracy of the data used for dynamic target tracking. A convolutional neural network evaluates the morphological feature changes of the selected target between adjacent frames, effectively identifying abrupt and normal morphological changes. According to the dynamic division mechanism for morphological changes, optimal constant-speed tracking is adopted for normal morphological changes, ensuring tracking efficiency and accuracy, while for abrupt morphological changes the actual tracking speed is reduced, ensuring tracking continuity and stability. In this way, the tracking strategy can be adjusted in time when the target changes abruptly, reducing target position prediction errors and effectively preventing target loss, thereby providing reliable target monitoring and control in key applications such as military surveillance and disaster monitoring and avoiding irrecoverable losses.
The above formulas are all dimensionless formulas used for numerical calculation, obtained by software simulation over a large amount of collected data to reflect the latest real situation; the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the invention, which is defined by the appended claims.

Claims (7)

1. The SAR image target detection and tracking method is characterized by comprising the following steps of:
transmitting a microwave signal through a synthetic aperture radar and receiving a signal reflected back from the ground to generate an SAR image;
Preprocessing the generated SAR image, and improving the image quality and the detectability of the target;
Identifying and locating a selected target from the preprocessed SAR image;
Continuously acquiring each frame of SAR image according to the time sequence to form time sequence data, and providing accurate data support for dynamic tracking of a target through the continuous time sequence image;
Extracting the identified and positioned selected targets in the adjacent two frames of SAR images, and evaluating the morphological characteristic change condition of the selected targets in the two frames by comparing the image data of the selected targets between the adjacent frames;
Based on the result of the morphological feature change evaluation, classifying the morphological change of the selected target into a sharp morphological change and a normal morphological change;
And aiming at normal morphological changes, setting an optimal constant speed to track a selected target based on historical data, and aiming at abrupt morphological changes, adjusting the actual tracking speed of tracking the selected target, reducing the actual tracking speed and ensuring the continuity and stability of target tracking.
2. The SAR image target detection tracking method according to claim 1, wherein the specific steps of identifying and locating the selected target from the SAR image are as follows:
Image segmentation is carried out on the preprocessed SAR image;
on the basis of image segmentation, extracting the characteristics of each segmented region;
after a large number of features are extracted, performing feature selection to reduce feature dimensions;
Performing target detection by using the selected characteristics;
after the target is detected, target positioning is performed.
3. The method for detecting and tracking the SAR image target according to claim 1, wherein contour curvature information and motion blur degree information of a selected target between adjacent frames are obtained, contour curvature information and motion blur degree information of the selected target are analyzed, then contour curvature change indexes and motion blur degree change indexes are generated respectively, the analyzed contour curvature change indexes and motion blur degree change indexes are input into a pre-trained convolutional neural network, morphological feature change coefficients are generated through the convolutional neural network, and morphological feature change conditions of the selected target in the two frames are evaluated through the morphological feature change coefficients.
4. A method for detecting and tracking a target in a SAR image according to claim 3, wherein the step of obtaining the contour curvature information of the selected target between adjacent frames, analyzing the contour curvature information of the selected target, and generating the contour curvature change index is as follows:
performing target detection on each frame of SAR image, extracting the contour edge of the selected target by using an edge detection algorithm, and serializing edge points of the contour edge into a contour curve representation;
For the contour edge extracted from each frame, calculating the curvature of each edge point, wherein the calculation expression of the curvature in the discrete edge point sequence is as follows:
κ_i = 4A_i / (d_{i-1,i} · d_{i,i+1} · d_{i+1,i-1})
where κ_i is the curvature of the ith edge point, A_i is the area of the triangle formed by the (i-1)th, ith and (i+1)th edge points, d_{i-1,i} is the distance between the (i-1)th and ith edge points, d_{i,i+1} is the distance between the ith and (i+1)th edge points, and d_{i+1,i-1} is the distance between the (i+1)th and (i-1)th edge points;
the alignment of corresponding contour points in the two frames of images is realized by using the minimized distance between the edge points;
calculating the curvature difference of contour curve edge points in adjacent frames, wherein the calculation expression of the curvature difference is Δκ_i = |κ_i^(t+1) − κ_i^(t)|, where Δκ_i is the curvature difference of the ith edge point between the adjacent frames, κ_i^(t+1) is the curvature of the ith edge point in the (t+1)th frame, and κ_i^(t) is the curvature of the ith edge point in the tth frame;
accumulating the curvature differences of all edge points and calculating the total curvature variation of the whole target contour, wherein the calculated expression is: Δκ_total = Σ_{i=1}^{N} w_i · Δκ_i, where Δκ_total is the total curvature variation of the whole contour, N is the total number of edge points, and w_i is the weight of the ith edge point;
calculating the profile curvature change index from the total curvature variation, the calculated expression being: Cont_curva = Δκ_total / max(Δκ_total), where Cont_curva denotes the profile curvature change index, and max(Δκ_total) is the predefined maximum total curvature change used for normalization.
5. The method for detecting and tracking SAR image target as set forth in claim 4, wherein the step of obtaining motion blur degree information of a selected target between adjacent frames, analyzing the motion blur degree information of the selected target, and generating a motion blur degree variation index is as follows:
extracting motion blur information of the target from SAR images of adjacent frames by analyzing gradient changes of the selected target areas, wherein the SAR images of adjacent frames are denoted I_t and I_{t+1}, representing the tth and (t+1)th frame SAR images respectively, and the gradient images of the selected target areas corresponding to I_t and I_{t+1} are G_t = |∇I_t| and G_{t+1} = |∇I_{t+1}|, where ∇ is the gradient operator;
in the selected target area, the motion blur vector of each pixel point is calculated, where v_t(x, y) and v_{t+1}(x, y) are the motion blur vectors of each pixel point in the selected target area in the tth and (t+1)th frames respectively, and (x, y) are the coordinates of the pixel point;
constructing the blur degree matrix of the selected target region from the motion blur vectors: M_t(x, y) = Σ_{x′,y′} v_t(x′, y′) · K(x−x′, y−y′) and M_{t+1}(x, y) = Σ_{x′,y′} v_{t+1}(x′, y′) · K(x−x′, y−y′), where K is a kernel function for smoothing the motion blur vectors, M_t and M_{t+1} are the blur degree matrices of the tth and (t+1)th frames of the selected target region, and x′ and y′ are variables traversing all pixels in the selected target region;
calculating the blur degree change matrix, which represents the blur degree change of each pixel point in the selected target area between adjacent frames, wherein the calculated expression is ΔM(x, y) = |M_{t+1}(x, y) − M_t(x, y)|, where ΔM(x, y) is the blur degree change matrix;
calculating the blur degree change index by integrating the blur degree change over the whole selected target area: Motion_blur = Σ_{(x,y)∈R} ΔM(x, y), where Motion_blur is the blur degree change index, representing a comprehensive measure of the blur degree change of the whole target area, and R is the selected target area, i.e. the pixel region of the selected target tracked in the SAR image.
6. The SAR image target detection tracking method according to claim 3, wherein the morphological characteristic change coefficient generated after the evaluation of the morphological characteristic change of the selected target is compared with a preset morphological characteristic change coefficient reference threshold value, and the morphological change of the selected target is dynamically divided, and the specific dividing steps are as follows:
If the morphological feature change coefficient is larger than the morphological feature change coefficient reference threshold, dividing the morphological change of the selected target into abrupt morphological changes;
If the morphological feature change coefficient is smaller than or equal to the morphological feature change coefficient reference threshold, the morphological change of the selected target is divided into normal morphological changes.
7. The SAR image target detection tracking method according to claim 6, wherein the actual tracking speed of the selected target tracking is adjusted for abrupt morphological changes, and the specific steps are as follows:
when a sharp morphological change is detected, adjustment of the actual tracking speed is started; first, a speed adjustment amount Δv is calculated from the morphological feature change coefficient Morp_fl, with the expression Δv = γ·(Morp_fl − τ_thre)·v_opt, where Morp_fl denotes the morphological feature change coefficient, τ_thre denotes the morphological feature change coefficient reference threshold, γ denotes the speed adjustment coefficient controlling the magnitude of the speed adjustment, and v_opt denotes the optimal constant speed;
according to the calculated speed adjustment amount Δv, the actual tracking speed v_actual is adjusted, with the adjusted expression v_actual = v_opt − Δv = v_opt − γ·(Morp_fl − τ_thre)·v_opt;
during tracking, the change of the morphological feature change coefficient is dynamically monitored and the actual tracking speed is dynamically adjusted according to real-time feedback to ensure tracking continuity and stability; the dynamically adjusted actual tracking speed is expressed as v_actual = v_opt·(1 − γ·max(0, Morp_fl − τ_thre));
the dynamically adjusted actual tracking speed is further adjusted using a smoothing function, with the expression v_smooth = λ·v_actual + (1−λ)·v_prev, where v_smooth denotes the smoothed actual tracking speed, λ denotes the smoothing coefficient with 0 < λ < 1, and v_prev denotes the actual tracking speed of the previous SAR image frame;
The smoothed actual tracking speed v smooth is applied to the target tracking process between the current frame and the next frame, so that the tracking continuity is ensured.
CN202411134989.5A 2024-08-19 2024-08-19 A SAR Image Target Detection and Tracking Method Active CN119224750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411134989.5A CN119224750B (en) 2024-08-19 2024-08-19 A SAR Image Target Detection and Tracking Method

Publications (2)

Publication Number Publication Date
CN119224750A CN119224750A (en) 2024-12-31
CN119224750B true CN119224750B (en) 2025-04-22

Family

ID=94039395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411134989.5A Active CN119224750B (en) 2024-08-19 2024-08-19 A SAR Image Target Detection and Tracking Method

Country Status (1)

Country Link
CN (1) CN119224750B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318589A (en) * 2014-11-04 2015-01-28 中国电子科技集团公司第十四研究所 ViSAR-based anomalous change detection and tracking method
CN108731587A (en) * 2017-04-14 2018-11-02 中交遥感载荷(北京)科技有限公司 A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US6943724B1 (en) * 2002-10-30 2005-09-13 Lockheed Martin Corporation Identification and tracking of moving objects in detected synthetic aperture imagery
JP3971411B2 (en) * 2004-06-11 2007-09-05 株式会社東芝 Time-series image processing apparatus, method and program
CN108957453B (en) * 2018-07-23 2022-03-04 航天恒星科技有限公司 A high-precision moving target imaging and recognition method based on multi-target tracking
CN109858455B (en) * 2019-02-18 2023-06-20 南京航空航天大学 Block detection scale self-adaptive tracking method for round target
CN110647794B (en) * 2019-07-12 2023-01-03 五邑大学 Attention mechanism-based multi-scale SAR image recognition method and device
CN113848545B (en) * 2021-09-01 2023-04-14 电子科技大学 A Fusion Target Detection and Tracking Method Based on Vision and Millimeter Wave Radar
CN114037733B (en) * 2021-09-24 2024-10-29 西安电子科技大学 SAR ship multi-target tracking method based on improved nuclear correlation filtering
CN115205227A (en) * 2022-06-22 2022-10-18 中国人民解放军国防科技大学 A shadow region detection method for SAR images based on change detection
CN118033631B (en) * 2024-03-18 2024-11-01 无锡市引宵科技有限公司 Target extraction method and system based on ground surveillance radar system
CN117949942B (en) * 2024-03-26 2024-06-07 北京市计量检测科学研究院 Target tracking method and system based on fusion of radar data and video data

Also Published As

Publication number Publication date
CN119224750A (en) 2024-12-31

Similar Documents

Publication Publication Date Title
CN112308881A (en) A ship multi-target tracking method based on remote sensing images
CN108921873B (en) Markov Decision Online Multi-target Tracking Method Based on Kernel Correlation Filtering Optimization
CN118918535B (en) Smart home detection and management method and device and computer equipment
CN110400294B (en) Infrared target detection system and detection method
CN120198416B (en) Geological structure crack automatic identification and detection system based on deep learning
CN120259926B (en) Intelligent identification method and system for UAVs targeting occluded targets
CN120195652B (en) Lightweight AI personnel perception method and system based on millimeter wave radar
CN119224750B (en) A SAR Image Target Detection and Tracking Method
CN109887004A (en) A kind of unmanned boat sea area method for tracking target based on TLD algorithm
CN118033631B (en) Target extraction method and system based on ground surveillance radar system
CN120107234A (en) A method and system for detecting pests and diseases based on dynamic scenes
CN119107217A (en) Computer vision-assisted segmentation system for beef cattle carcasses
CN118549923A (en) Video radar monitoring method and related equipment
CN116563348B (en) Infrared weak small target multi-mode tracking method and system based on dual-feature template
Li Infrared spectral imaging-based image recognition for motion detection
Rojas et al. A comparative analysis of weed images classification approaches in vegetables crops
CN119830170B (en) False alarm identification modeling method, identification method, equipment, storage medium and product
CN114638785A (en) Method, system, medium, device and terminal for detecting highlight area in image
Han et al. A two-stage detection method based on improved DP-TBD for marine weak extended targets
CN120375363B (en) Image Recognition-Based Food Conveying Detection Method and Device
CN120126069B (en) Dynamic detection system for hidden danger around power transmission line based on image difference identification
CN117876977B (en) Target identification method based on monitoring video
CN119418092B (en) Anti-interference target acquisition method based on machine vision technology measurement
de A. Lopes et al. Combining features to improve oil spill classification in SAR images
CN121074986A (en) A Method and System for Recognizing the Behavior of Miners Based on YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant