Disclosure of Invention
The invention aims to provide a SAR image target detection and tracking method. By preprocessing SAR images and acquiring continuous time-series images, the method secures accurate data for dynamic target tracking; a convolutional neural network evaluates the morphological changes of a target between adjacent frames and distinguishes abrupt morphological changes from normal ones. For normal morphological changes, tracking at an optimal constant speed ensures efficiency and accuracy, while for abrupt morphological changes the tracking speed is reduced to ensure continuity and stability.
To achieve this purpose, the invention provides the following technical scheme: a SAR image target detection and tracking method comprising the following steps:
transmitting a microwave signal through a synthetic aperture radar and receiving the signal reflected back from the ground to generate an SAR image;
preprocessing the generated SAR image to improve image quality and the detectability of the target;
identifying and locating a selected target from the preprocessed SAR image;
continuously acquiring each frame of SAR image in time order to form time-series data, the continuous time-series images providing accurate data support for dynamic tracking of the target;
extracting the identified and located selected target from two adjacent frames of SAR images, and evaluating the change in its morphological features by comparing the image data of the selected target between the adjacent frames;
based on the result of the morphological feature change evaluation, classifying the morphological change of the selected target as either an abrupt morphological change or a normal morphological change;
for normal morphological changes, setting an optimal constant speed based on historical data to track the selected target; for abrupt morphological changes, reducing the actual tracking speed of the selected target to ensure the continuity and stability of target tracking.
Preferably, the specific steps of identifying and locating the selected target from the SAR image are as follows:
performing image segmentation on the preprocessed SAR image;
on the basis of the image segmentation, extracting features of each segmented region;
after a large number of features are extracted, performing feature selection to reduce the feature dimension;
performing target detection using the selected features;
after the target is detected, performing target localization.
Preferably, contour curvature information and motion blur degree information of the selected target between adjacent frames are obtained and analyzed to generate, respectively, a contour curvature change index and a motion blur degree change index. The two indices are then input into a pre-trained convolutional neural network, which generates a morphological feature change coefficient; the change in the morphological features of the selected target across the two frames is evaluated through this coefficient.
Preferably, the steps of obtaining contour curvature information of the selected target between adjacent frames, analyzing it, and generating the contour curvature change index are as follows:
performing target detection on each frame of SAR image, extracting the contour edge of the selected target by using an edge detection algorithm, and serializing edge points of the contour edge into a contour curve representation;
For the contour edge extracted from each frame, calculating the curvature of each edge point, wherein in a discrete edge-point sequence the curvature is calculated as:
κ_i = 4·A_i / (d_{i-1,i} · d_{i,i+1} · d_{i+1,i-1})
where κ_i is the curvature of the i-th edge point, A_i is the area of the triangle formed by the (i-1)-th, i-th, and (i+1)-th edge points, d_{i-1,i} is the distance between the (i-1)-th and i-th edge points, d_{i,i+1} is the distance between the i-th and (i+1)-th edge points, and d_{i+1,i-1} is the distance between the (i+1)-th and (i-1)-th edge points;
aligning corresponding contour points in the two frames of images by minimizing the distance between edge points;
calculating the curvature difference of contour curve edge points in adjacent frames as Δκ_i = |κ_i(t+1) − κ_i(t)|, where Δκ_i is the curvature difference of the i-th edge point between adjacent frames, κ_i(t+1) is the curvature of the i-th edge point in the (t+1)-th frame, and κ_i(t) is the curvature of the i-th edge point in the t-th frame;
accumulating the curvature differences of all edge points to calculate the total curvature variation of the whole target contour as Δκ_total = Σ_{i=1}^{N} w_i · Δκ_i, where Δκ_total is the total curvature variation of the whole contour, N is the total number of edge points, and w_i is the weight of the i-th edge point;
calculating the contour curvature change index from the total curvature variation as Cont_curva = Δκ_total / max(Δκ_total), where Cont_curva denotes the contour curvature change index and max(Δκ_total) is a predefined maximum total curvature variation used for normalization.
Preferably, the step of obtaining motion blur degree information of a selected target between adjacent frames, analyzing the motion blur degree information of the selected target, and generating a motion blur degree change index is as follows:
extracting motion blur information of the target from SAR images of adjacent frames by analyzing gradient changes of the selected target area, wherein the adjacent-frame SAR images are denoted I_t and I_{t+1} (the t-th and (t+1)-th frame SAR images, respectively), and the gradient images of the corresponding selected target areas are G_t = ∇I_t and G_{t+1} = ∇I_{t+1}, where ∇ is the gradient operator;
in the selected target area, calculating a motion blur vector for each pixel point from the gradient images, wherein v_t(x, y) and v_{t+1}(x, y) are the motion blur vectors of each pixel point in the selected target area in the t-th and (t+1)-th frames, respectively, and (x, y) are the coordinates of the pixel point;
smoothing the motion blur vectors to obtain blur degree matrices M_t(x, y) = Σ_{x′,y′} v_t(x′, y′) · K(x − x′, y − y′) and M_{t+1}(x, y) = Σ_{x′,y′} v_{t+1}(x′, y′) · K(x − x′, y − y′), where K is a kernel function for smoothing the motion blur vectors, M_t and M_{t+1} are the blur degree matrices of the selected target region in the t-th and (t+1)-th frames, respectively, and x′ and y′ are variables traversing all pixels in the selected target region;
calculating a blur degree change matrix, which represents the change in blur degree of each pixel point in the selected target area between adjacent frames, as ΔM(x, y) = |M_{t+1}(x, y) − M_t(x, y)|, where ΔM(x, y) is the blur degree change matrix;
integrating the blur degree change over the whole selected target area to obtain Motion_blur = Σ_{(x,y)∈R} ΔM(x, y), where Motion_blur is the blur degree change index, a comprehensive measure of the blur degree change of the whole target area, and R is the selected target area, i.e., the pixel region of the selected target tracked in the SAR image.
Preferably, the morphological feature change coefficient generated after the morphological feature change evaluation of the selected target is compared with a preset morphological feature change coefficient reference threshold value for analysis, and the morphological change of the selected target is dynamically divided, wherein the specific dividing steps are as follows:
If the morphological feature change coefficient is larger than the morphological feature change coefficient reference threshold, dividing the morphological change of the selected target into abrupt morphological changes;
If the morphological feature change coefficient is smaller than or equal to the morphological feature change coefficient reference threshold, the morphological change of the selected target is divided into normal morphological changes.
Preferably, for the abrupt morphological change, the actual tracking speed of the selected target tracking is adjusted, and the specific steps are as follows:
When an abrupt morphological change is detected, adjustment of the actual tracking speed begins. First, a speed adjustment quantity Δv is calculated from the morphological feature change coefficient Morp_fl as Δv = γ·(Morp_fl − τ_thre)·v_opt, where Morp_fl denotes the morphological feature change coefficient, τ_thre denotes the morphological feature change coefficient reference threshold (with Morp_fl > τ_thre), γ denotes the speed adjustment coefficient controlling the adjustment amplitude, and v_opt denotes the optimal constant speed;
according to the calculated speed adjustment quantity Δv, the actual tracking speed v_actual is adjusted as v_actual = v_opt − Δv = v_opt − γ·(Morp_fl − τ_thre)·v_opt;
in the tracking process, the change of the morphological feature change coefficient is monitored dynamically and the actual tracking speed is adjusted according to real-time feedback to ensure tracking continuity and stability, with the dynamically adjusted actual tracking speed expressed as v_actual = v_opt·(1 − γ·max(0, Morp_fl − τ_thre));
the dynamically adjusted actual tracking speed is further adjusted with a smoothing function as v_smooth = λ·v_actual + (1 − λ)·v_prev, where v_smooth denotes the smoothed actual tracking speed, λ denotes a smoothing coefficient with 0 < λ < 1, and v_prev denotes the actual tracking speed for the previous frame of the SAR image;
the smoothed actual tracking speed v_smooth is applied to the target tracking process between the current frame and the next frame, thereby ensuring tracking continuity.
In the technical scheme, the invention has the technical effects and advantages that:
The invention ensures accurate data for dynamic target tracking by preprocessing SAR images and acquiring continuous time-series images. A convolutional neural network evaluates the morphological feature changes of the selected target between adjacent frames and can effectively distinguish abrupt morphological changes from normal ones. Under the dynamic division mechanism, normal morphological changes are tracked at the optimal constant speed, ensuring tracking efficiency and accuracy, while for abrupt morphological changes the actual tracking speed is reduced, ensuring tracking continuity and stability. The method can therefore adjust the tracking strategy in time when the target changes abruptly, reduce the target position prediction error, and effectively prevent target loss, providing reliable target control and monitoring in key applications such as military surveillance and disaster monitoring and avoiding irrecoverable losses.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein, but rather, the example embodiments are provided so that this disclosure will be more thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
The invention provides a SAR image target detection and tracking method shown in figure 1, which comprises the following steps:
Transmitting a microwave signal through a Synthetic Aperture Radar (SAR), and receiving a signal reflected back from the ground to generate an SAR image;
the SAR system synthesizes a long aperture by moving its antenna, thereby acquiring high-resolution images. This process involves the synthesis of echo signals received at different positions and finally produces a complete SAR image, providing the basic data for subsequent target detection and tracking.
Preprocessing the generated SAR image, and improving the image quality and the detectability of the target;
The preprocessing step comprises denoising, filtering, image enhancement and other technologies. Common methods include gaussian filtering, mean filtering, adaptive filtering, etc., which aim to reduce speckle noise and other interference in the image, and improve the contrast between the target and the background, so that the target is more easily identified and located in subsequent steps.
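As an illustrative sketch (not part of the claimed method), the speckle-suppression idea can be approximated with a simple mean filter in NumPy; the window size and the gamma-distributed multiplicative speckle model are assumptions for demonstration only:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter: a minimal stand-in for the denoising step
    (Gaussian or adaptive filters would be used in practice)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Simulated SAR patch: constant reflectivity with multiplicative speckle
rng = np.random.default_rng(0)
speckled = 1.0 * rng.gamma(4.0, 1.0 / 4.0, (64, 64))
smoothed = mean_filter(speckled)
```

Averaging over the window reduces the speckle variance while preserving the mean reflectivity, which is the contrast-improvement effect the preprocessing step aims for.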
Identifying and locating a selected target from the preprocessed SAR image;
the specific steps for identifying and locating a selected target from the SAR image are as follows:
Image segmentation is carried out on the preprocessed SAR image;
the purpose of image segmentation is to break up the image into different regions in order to identify the target. Common image segmentation methods include threshold segmentation, region growing, watershed algorithms, and the like. The threshold segmentation divides the image into a target part and a background part by setting a gray value threshold, the region growing method starts from a seed point and gradually expands the region according to a similarity criterion, and the watershed algorithm segments the image into a plurality of connected regions by utilizing gradient information of the image.
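A minimal sketch of the threshold-segmentation option, using Otsu's method implemented directly in NumPy (the bimodal test image and its gray levels are assumptions for demonstration):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Pick the gray-level threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)             # background class weight
    mu = np.cumsum(p * centers)   # cumulative class mean
    w1 = 1.0 - w0
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Bright "target" block on a darker background, both with mild noise
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (50, 50))
img[20:35, 20:35] = rng.normal(0.8, 0.05, (15, 15))
t = otsu_threshold(img)
mask = img > t
```

The threshold lands between the two gray-level modes, splitting the image into the target and background parts described above.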
On the basis of image segmentation, extracting the characteristics of each segmented region;
feature extraction is an important step in identifying and distinguishing objects, and generally includes shape features, texture features, gray scale features, and the like. The shape features can comprise the area, perimeter, contour and the like of the target, the texture features can be extracted by a gray level co-occurrence matrix (GLCM) or wavelet transformation and the like, and the gray level features directly utilize gray level value statistical information such as mean value, variance and the like of the target area.
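The shape and gray-level features named above can be sketched as follows (a simplified illustration; GLCM texture features are omitted, and the 4-connected boundary definition of the perimeter is an assumption):

```python
import numpy as np

def region_features(mask, img):
    """Area, 4-connected perimeter, and gray-level statistics of one region."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-neighbours also belong to the region
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    perimeter = int((mask & ~interior).sum())   # boundary pixel count
    vals = img[mask]
    return {"area": area, "perimeter": perimeter,
            "mean": float(vals.mean()), "var": float(vals.var())}

mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True                  # a 4 x 4 square region
img = np.full((10, 10), 5.0)
feats = region_features(mask, img)
```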
After a large number of features are extracted, performing feature selection to reduce feature dimensions;
Feature selection reduces the feature dimension in order to improve the efficiency and accuracy of the algorithm. Feature selection methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and dedicated feature selection algorithms (e.g., recursive feature elimination, RFE). PCA extracts the most representative features through dimensionality reduction, LDA selects the most discriminative features by maximizing the inter-class variance while minimizing the intra-class variance, and RFE selects an optimal feature set by recursively removing the least important features.
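The PCA option can be sketched in a few lines of NumPy (an illustrative dimensionality reduction on synthetic feature vectors, not the claimed selection procedure):

```python
import numpy as np

def pca_reduce(X, k):
    """Project an n x d feature matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: components
    return Xc @ Vt[:k].T

rng = np.random.default_rng(2)
# 100 samples, 5 correlated features: most variance lies along one direction
base = rng.normal(size=(100, 1))
X = base @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(100, 5))
Z = pca_reduce(X, 2)
```

Because SVD orders singular values in decreasing order, the first projected coordinate carries the most variance, which is what "most representative features" means here.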
Performing target detection by using the selected characteristics;
The task of object detection is to distinguish objects and backgrounds from images, and common methods are Support Vector Machines (SVMs), decision trees, random forests, deep learning models, and the like. The SVM distinguishes the target and the background by searching the optimal segmentation hyperplane, the decision tree and the random forest improve the detection precision by constructing a plurality of classification trees, and the deep learning model (such as a convolutional neural network CNN) automatically learns the image characteristics through a multi-layer network structure, so that the efficient target detection is realized.
After the target is detected, target positioning is carried out;
Object localization is the determination of the specific position of an object in an image, typically represented by a bounding box (bounding box) or mask (mask). The positioning method can be simple geometric calculation (such as calculating a minimum bounding rectangle) or predicting the accurate position of the target through a regression model. The results of the target location will be used for subsequent tracking and monitoring.
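A minimal bounding-box computation from a binary detection mask, i.e. the simple geometric option mentioned above:

```python
import numpy as np

def bounding_box(mask):
    """Minimum axis-aligned bounding rectangle (r0, c0, r1, c1), inclusive."""
    rows = np.where(np.any(mask, axis=1))[0]
    cols = np.where(np.any(mask, axis=0))[0]
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

mask = np.zeros((12, 12), dtype=bool)
mask[3:7, 2:9] = True
box = bounding_box(mask)
```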
Continuously acquiring each frame of SAR image according to the time sequence to form time sequence data, and providing accurate data support for dynamic tracking of a target through the continuous time sequence image;
each frame of image is obtained through the same generation and preprocessing process, the consistency of each frame of image is ensured, and the continuous time sequence images provide necessary data support for dynamic tracking of the target, so that the system can monitor the movement and change conditions of the target at different time points.
Extracting the identified and positioned selected targets in the adjacent two frames of SAR images, and evaluating the morphological characteristic change condition of the selected targets in the two frames by comparing the image data of the selected targets between the adjacent frames;
Contour curvature information and motion blur degree information of the selected target between adjacent frames are obtained and analyzed to generate, respectively, a contour curvature change index and a motion blur degree change index. The two indices are then input into a pre-trained convolutional neural network, which generates a morphological feature change coefficient; the change in the morphological features of the selected target across the two frames is evaluated through this coefficient.
A large change in the curvature of the contours of the selected object between adjacent frames indicates a rapid change in the morphological characteristics of the selected object. The profile curvature reflects the degree of curvature of the edge of the object, with large variations generally meaning that the shape of the object changes significantly in a short period of time, such as a portion of the object exhibiting a pronounced relief, curvature, or distortion. Such changes may be due to changes in morphology caused by rotation, deformation, partial occlusion or rapid movement of the target. Under the condition of large change of the curvature of the outline, the contrast between the edge characteristic of the target and the background is obviously changed to influence the visual characteristic of the target, so that the traditional constant speed tracking algorithm is difficult to accurately predict the position of the target and keep continuous tracking. Therefore, the large change of the curvature of the contour is an important index for the rapid change of the morphological characteristics of the target, and special processing is needed in a tracking algorithm to improve the robustness and the accuracy of target tracking.
The method comprises the steps of obtaining contour curvature information of a selected target between adjacent frames, analyzing the contour curvature information of the selected target, and generating a contour curvature change index, wherein the steps are as follows:
performing target detection on each frame of SAR image, extracting the contour edge of the selected target by using an edge detection algorithm, and serializing edge points of the contour edge into a contour curve representation;
For the contour edge extracted from each frame, calculating the curvature of each edge point, wherein in a discrete edge-point sequence the curvature is calculated as:
κ_i = 4·A_i / (d_{i-1,i} · d_{i,i+1} · d_{i+1,i-1})
where κ_i is the curvature of the i-th edge point, A_i is the area of the triangle formed by the (i-1)-th, i-th, and (i+1)-th edge points, d_{i-1,i} is the distance between the (i-1)-th and i-th edge points, d_{i,i+1} is the distance between the i-th and (i+1)-th edge points, and d_{i+1,i-1} is the distance between the (i+1)-th and (i-1)-th edge points;
aligning corresponding contour points in the two frames of images by minimizing the distance between edge points;
The main purpose is to make curvature comparisons between adjacent frames;
The alignment of corresponding contour points in two frames of images is realized by minimizing the distance between edge points, namely, in two adjacent frames of images, the sum of Euclidean distances between the matching points in the adjacent frames is minimized by matching the contour points of each frame. This typically involves finding the closest point pair in the previous and subsequent frames to ensure that the points represent the same physical location or feature. In this way, corresponding pairs of contour points can be found in two frames.
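The nearest-point matching described above can be sketched as a brute-force Euclidean match (an illustrative simplification; iterative ICP-style refinement is omitted):

```python
import numpy as np

def match_contour_points(pts_a, pts_b):
    """For each contour point of frame t, the index of its nearest
    contour point in frame t+1 (minimized Euclidean distance)."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    return dist.argmin(axis=1)

pts_t = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
pts_t1 = np.array([[10.1, 0.0], [0.1, 0.0], [5.0, 5.1]])  # same points, shuffled and shifted
idx = match_contour_points(pts_t, pts_t1)
```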
calculating the curvature difference of contour curve edge points in adjacent frames as Δκ_i = |κ_i(t+1) − κ_i(t)|, where Δκ_i is the curvature difference of the i-th edge point between adjacent frames, κ_i(t+1) is the curvature of the i-th edge point in the (t+1)-th frame, and κ_i(t) is the curvature of the i-th edge point in the t-th frame;
accumulating the curvature differences of all edge points to calculate the total curvature variation of the whole target contour as Δκ_total = Σ_{i=1}^{N} w_i · Δκ_i, where Δκ_total is the total curvature variation of the whole contour, N is the total number of edge points, and w_i is the weight of the i-th edge point (weights may be assigned according to the importance or reliability of each edge point);
calculating the contour curvature change index from the total curvature variation as Cont_curva = Δκ_total / max(Δκ_total), where Cont_curva denotes the contour curvature change index and max(Δκ_total) is a predefined maximum total curvature variation used for normalization;
The predefined maximum total curvature change is a maximum possible total curvature change value set in advance for normalizing the calculation result. This predefined value is typically determined from historical data or empirical values in a particular application scenario, representing the amount of curvature change of the target profile in the most extreme case. This predefined value is used to normalize the actual calculated total curvature change such that the range of curvature change indices is defined between [0,1], thereby facilitating comparison and analysis between different images.
After the contour curvature information of the selected target between adjacent frames is obtained and analyzed, a larger value of the resulting contour curvature change index indicates a more significant change in the contour curvature between adjacent frames, and hence a more significant change in the morphological features of the selected target; conversely, a smaller value of the index indicates that the contour curvature change between adjacent frames is not significant and the morphological features of the target are changing slowly.
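The curvature formula and change index above can be sketched as follows. A convenient check: points sampled on a circle of radius R have this three-point ("Menger") curvature exactly 1/R. The synthetic contours and the normalization constant are assumptions for demonstration:

```python
import numpy as np

def menger_curvature(pts):
    """kappa_i = 4*A_i / (d_{i-1,i} * d_{i,i+1} * d_{i+1,i-1}) per edge point
    of a closed contour given as an (N, 2) array."""
    p_prev = np.roll(pts, 1, axis=0)
    p_next = np.roll(pts, -1, axis=0)
    v1, v2 = pts - p_prev, p_next - pts
    area = np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]) / 2.0  # triangle areas
    d1 = np.linalg.norm(v1, axis=1)
    d2 = np.linalg.norm(v2, axis=1)
    d3 = np.linalg.norm(p_next - p_prev, axis=1)
    return 4.0 * area / (d1 * d2 * d3)

def curvature_change_index(pts_t, pts_t1, weights=None, max_total=1.0):
    """Cont_curva = (sum_i w_i * |kappa_i(t+1) - kappa_i(t)|) / max(Delta_kappa_total)."""
    dk = np.abs(menger_curvature(pts_t1) - menger_curvature(pts_t))
    w = np.ones(len(dk)) if weights is None else np.asarray(weights)
    return min(float((w * dk).sum()) / max_total, 1.0)

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle_r2 = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)
circle_r1 = 0.5 * circle_r2          # contour "deforms" from radius 2 to radius 1
kappa = menger_curvature(circle_r2)
cont_curva = curvature_change_index(circle_r2, circle_r1, max_total=64.0)
```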
The degree of motion blur of the selected object between adjacent frames increases significantly, indicating that the morphological characteristics of the selected object are changing rapidly. Such variations are typically caused by the rapid movement of the object within the camera exposure time, such that the object presents a blurred trajectory in the image. The significant increase in motion blur reflects a rapid morphological transformation of the object in a short time, which can cause significant changes in the edge, shape and detail characteristics of the object between different frames. Motion blur, which is a sharp morphological change, increases the difficulty of identifying and matching target features for tracking algorithms, because traditional static features may no longer be suitable, and more dynamic and robust feature extraction and matching methods must be relied upon. It follows that a significant increase in the degree of motion blur is an important indicator of a rapid change in the morphology of the target.
The method comprises the steps of obtaining motion blur degree information of a selected target between adjacent frames, analyzing the motion blur degree information of the selected target, and generating a motion blur degree change index, wherein the motion blur degree change index comprises the following steps:
extracting motion blur information of the target from SAR images of adjacent frames by analyzing gradient changes of the selected target area, wherein the adjacent-frame SAR images are denoted I_t and I_{t+1} (the t-th and (t+1)-th frame SAR images, respectively), and the gradient images of the corresponding selected target areas are G_t = ∇I_t and G_{t+1} = ∇I_{t+1}, where ∇ is the gradient operator;
in the selected target area, calculating a motion blur vector for each pixel point from the gradient images, wherein v_t(x, y) and v_{t+1}(x, y) are the motion blur vectors of each pixel point in the selected target area in the t-th and (t+1)-th frames, respectively, and (x, y) are the coordinates of the pixel point;
the motion blur vector quantifies the degree of blur for each pixel point.
smoothing the motion blur vectors to obtain blur degree matrices M_t(x, y) = Σ_{x′,y′} v_t(x′, y′) · K(x − x′, y − y′) and M_{t+1}(x, y) = Σ_{x′,y′} v_{t+1}(x′, y′) · K(x − x′, y − y′), where K is a kernel function for smoothing the motion blur vectors, M_t and M_{t+1} are the blur degree matrices of the selected target region in the t-th and (t+1)-th frames, respectively, and x′ and y′ are variables traversing all pixels in the selected target region;
the blur degree matrix quantifies the blur degree of each pixel point in the selected target area.
calculating a blur degree change matrix, which represents the change in blur degree of each pixel point in the selected target area between adjacent frames, as ΔM(x, y) = |M_{t+1}(x, y) − M_t(x, y)|, where ΔM(x, y) is the blur degree change matrix;
integrating the blur degree change over the whole selected target area to obtain Motion_blur = Σ_{(x,y)∈R} ΔM(x, y), where Motion_blur is the blur degree change index, a comprehensive measure of the blur degree change of the whole target area, and R is the selected target area, i.e., the pixel region of the selected target tracked in the SAR image.
After the motion blur degree information of the selected target between adjacent frames is obtained and analyzed, a larger value of the resulting motion blur degree change index indicates a faster change in the morphological features of the target between adjacent frames, while a smaller value indicates a slower change: a high index reflects rapid changes in the position and shape of the target in the image, whereas a low index indicates gradual and stable change.
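The blur-change computation above can be sketched as follows. The patent does not give an explicit expression for the per-pixel blur vector, so this sketch assumes, for illustration only, that the per-pixel blur measure is the gradient magnitude, smoothed by a box kernel standing in for K:

```python
import numpy as np

def box_smooth(v, k=3):
    """k x k box smoothing, standing in for the kernel K."""
    pad = k // 2
    p = np.pad(v, pad, mode="edge")
    out = np.zeros(v.shape)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + v.shape[0], dx:dx + v.shape[1]]
    return out / (k * k)

def blur_change_index(img_t, img_t1):
    """Motion_blur = sum over the region of |M_{t+1}(x, y) - M_t(x, y)|,
    with M the smoothed per-pixel blur measure (gradient magnitude here,
    an illustrative assumption)."""
    def blur_matrix(img):
        gy, gx = np.gradient(img.astype(float))
        return box_smooth(np.hypot(gx, gy))
    return float(np.abs(blur_matrix(img_t1) - blur_matrix(img_t)).sum())

# Sharp step edge vs. the same edge after heavy smoothing (simulated blur)
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = box_smooth(box_smooth(sharp, k=5), k=5)
```

Identical frames give an index of zero; a frame that blurs between t and t+1 gives a strictly positive index, matching the interpretation in the text.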
The convolutional neural network is not particularly limited here; it comprehensively analyzes the contour curvature change index Cont_curva and the blur degree change index Motion_blur to generate the morphological feature change coefficient Morp_fl.
The morphological feature change coefficient Morp_fl is generated according to the formula:
Morp_fl = μ_1 · Cont_curva + μ_2 · Motion_blur
where μ_1 and μ_2 are the preset proportionality coefficients of the contour curvature change index Cont_curva and the blur degree change index Motion_blur, respectively, and μ_1, μ_2 > 0.
From the morphological feature change coefficient it can be seen that the larger the contour curvature change index obtained from the contour curvature information of the selected target, and the larger the motion blur degree change index obtained from its motion blur degree information, the faster the morphological features of the selected target are changing; conversely, the smaller these indices, the slower the change.
Based on the result of the morphological feature change evaluation, classifying the morphological change of the selected target into a sharp morphological change and a normal morphological change;
Comparing and analyzing the morphological feature change coefficient generated after the morphological feature change evaluation of the selected target with a preset morphological feature change coefficient reference threshold value, and dynamically dividing the morphological change of the selected target, wherein the specific dividing steps are as follows:
If the morphological feature change coefficient is larger than the morphological feature change coefficient reference threshold, dividing the morphological change of the selected target into abrupt morphological changes;
If the morphological feature change coefficient is smaller than or equal to the morphological feature change coefficient reference threshold, the morphological change of the selected target is divided into normal morphological changes.
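A minimal sketch of the combination and the threshold comparison above (the linear combining form and the numeric values of μ_1, μ_2 and the reference threshold are assumptions for illustration):

```python
def morph_change_coeff(cont_curva, motion_blur, mu1=0.6, mu2=0.4):
    """Morp_fl as a weighted combination of the two change indices
    (mu1, mu2 > 0 are preset proportionality coefficients)."""
    return mu1 * cont_curva + mu2 * motion_blur

def classify_change(morp_fl, tau_thre=0.5):
    """Abrupt if Morp_fl > tau_thre, otherwise normal."""
    return "abrupt" if morp_fl > tau_thre else "normal"

coeff = morph_change_coeff(0.9, 0.8)
label = classify_change(coeff)
```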
Aiming at normal morphological changes, based on historical data, setting an optimal constant speed to track a selected target, aiming at abrupt morphological changes, adjusting the actual tracking speed of tracking the selected target, reducing the actual tracking speed, and ensuring the continuity and stability of target tracking;
The optimal constant speed refers to the most suitable tracking speed set based on the historical tracking data under the condition of normal morphological change of the selected target, so as to ensure the accuracy and stability of tracking. The process of setting based on the historical data comprises analyzing the characteristics of motion track, speed, acceleration and the like of the target in the past period, and determining a speed value capable of effectively balancing tracking precision and calculation efficiency through a statistical method or a machine learning model. The speed value can keep accurate tracking when the target changes are not severe, and meanwhile excessive calculation and resource waste are avoided.
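One simple way to derive v_opt from historical track data is the mean frame-to-frame displacement magnitude; this statistical estimate is only one of the options the text allows (a learned model is the other), and the sample trajectory is an assumption:

```python
import numpy as np

def optimal_constant_speed(positions, dt=1.0):
    """v_opt: mean magnitude of frame-to-frame displacement over the history."""
    pos = np.asarray(positions, dtype=float)
    steps = np.linalg.norm(np.diff(pos, axis=0), axis=1) / dt
    return float(steps.mean())

history = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0), (9.0, 12.0)]  # constant 5 px/frame
v_opt = optimal_constant_speed(history)
```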
For sharp morphological changes, the actual tracking speed of the selected target tracking is adjusted, and the specific steps are as follows:
When an abrupt morphological change is detected, adjustment of the actual tracking speed begins. First, a speed adjustment quantity Δv is calculated from the morphological feature change coefficient Morp_fl as Δv = γ·(Morp_fl − τ_thre)·v_opt, where Morp_fl denotes the morphological feature change coefficient, τ_thre denotes the morphological feature change coefficient reference threshold (with Morp_fl > τ_thre), γ denotes the speed adjustment coefficient controlling the adjustment amplitude, and v_opt denotes the optimal constant speed;
according to the calculated speed adjustment quantity Δv, the actual tracking speed v_actual is adjusted as v_actual = v_opt − Δv = v_opt − γ·(Morp_fl − τ_thre)·v_opt;
in the tracking process, the change of the morphological feature change coefficient is monitored dynamically and the actual tracking speed is adjusted according to real-time feedback to ensure tracking continuity and stability, with the dynamically adjusted actual tracking speed expressed as v_actual = v_opt·(1 − γ·max(0, Morp_fl − τ_thre));
the dynamically adjusted actual tracking speed is further adjusted with a smoothing function (to avoid instability caused by drastic speed changes) as v_smooth = λ·v_actual + (1 − λ)·v_prev, where v_smooth denotes the smoothed actual tracking speed, λ denotes a smoothing coefficient with 0 < λ < 1, and v_prev denotes the actual tracking speed for the previous frame of the SAR image;
the smoothed actual tracking speed v_smooth is applied to the target tracking process between the current frame and the next frame, thereby ensuring tracking continuity.
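The speed-adjustment and smoothing rules above, implemented directly (the numeric parameter values are examples only):

```python
def adjusted_speed(v_opt, morp_fl, tau_thre, gamma):
    """v_actual = v_opt * (1 - gamma * max(0, Morp_fl - tau_thre))."""
    return v_opt * (1.0 - gamma * max(0.0, morp_fl - tau_thre))

def smoothed_speed(v_actual, v_prev, lam=0.5):
    """v_smooth = lam * v_actual + (1 - lam) * v_prev, with 0 < lam < 1."""
    return lam * v_actual + (1.0 - lam) * v_prev

v_actual = adjusted_speed(v_opt=10.0, morp_fl=0.8, tau_thre=0.5, gamma=0.5)
v_smooth = smoothed_speed(v_actual, v_prev=10.0)
```

With Morp_fl at or below the threshold, max(0, ·) vanishes and v_actual stays at v_opt, reproducing the normal-change branch; above the threshold, the speed is reduced in proportion to the excess and then blended with the previous frame's speed.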
The invention ensures accurate data for dynamic target tracking by preprocessing SAR images and acquiring continuous time-series images. A convolutional neural network evaluates the morphological feature changes of the selected target between adjacent frames and can effectively distinguish abrupt morphological changes from normal ones. Under the dynamic division mechanism, normal morphological changes are tracked at the optimal constant speed, ensuring tracking efficiency and accuracy, while for abrupt morphological changes the actual tracking speed is reduced, ensuring tracking continuity and stability. The method can therefore adjust the tracking strategy in time when the target changes abruptly, reduce the target position prediction error, and effectively prevent target loss, providing reliable target control and monitoring in key applications such as military surveillance and disaster monitoring and avoiding irrecoverable losses.
The above formulas are all dimensionless expressions evaluated numerically; they were obtained by software simulation over a large amount of collected data so as to reflect the latest real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the invention, which is defined by the appended claims.