CN115511006B - Target detection method, device, electronic equipment and storage medium based on unitary transform - Google Patents
Target detection method, device, electronic equipment and storage medium based on unitary transform

Info
- Publication number
- Publication: CN115511006B; Application: CN202211339176.0A
- Authority
- CN
- China
- Prior art keywords
- tensor
- space
- time
- target
- objective
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/14—Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Abstract
The present invention relates to the field of target detection technologies, and in particular, to a target detection method, apparatus, and storage medium based on unitary transformation. The method comprises: constructing a space-time tensor from multiple frames of infrared images, the space-time tensor comprising a background tensor and a target tensor; constructing an objective function for the space-time tensor, the objective function obtained by constraining the background tensor with a tensor nuclear norm under a unitary transform domain and constraining the target tensor with a joint space-time total variation and an L1 norm; solving the objective function to obtain the target tensor; and reconstructing the solved target tensor into a plurality of single-frame target images and outputting a detection result for each target image. The method can accurately detect targets against complex backgrounds and has strong detection capability.
Description
Technical Field
The present invention relates to the field of target detection technologies, and in particular, to a target detection method, apparatus, and storage medium based on unitary transformation.
Background
Infrared detection technology has characteristics such as strong anti-interference capability and all-weather operation, so infrared search and track (Infrared Search and Track, IRST) systems are widely applied in military and civil fields. Infrared target detection, a basic function of an IRST system, plays an important role in aerospace reconnaissance, disaster relief, and other areas.
In the related art, when the imaging environment of the target is complex, the existing target detection method cannot accurately detect the target, and the detection capability is poor.
Therefore, there is a need for a unitary transform-based target detection method to solve the above-mentioned problems.
Disclosure of Invention
Based on the problems of low target detection accuracy and poor detection capability of the existing target detection method, the embodiment of the invention provides a target detection method, device and storage medium based on unitary transformation, which can accurately detect targets in complex backgrounds and have strong detection capability.
In a first aspect, an embodiment of the present invention provides a unitary transform-based target detection method, including:
constructing a space-time tensor of a multi-frame infrared image, wherein the space-time tensor comprises a background tensor and a target tensor;
constructing an objective function of the space-time tensor, wherein the objective function is obtained by constraining the background tensor by using a tensor nuclear norm under a unitary transform domain and constraining the target tensor by using a joint space-time total variation and an L1 norm;
solving the objective function to obtain the target tensor; and
reconstructing the solved target tensor into a plurality of target images of single frames, and outputting a detection result of each target image.
In one possible design, the objective function is:

$$\min_{\mathcal{B},\mathcal{T},\mathcal{N}}\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{T}\|_1+\lambda_2\|\mathcal{T}\|_{\mathrm{STTV}}+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N}$$

where $\mathcal{B}$ is the background tensor, $\mathcal{T}$ is the target tensor, $\mathcal{F}$ is the space-time tensor, and $\mathcal{N}$ is the random noise; $\|\cdot\|_{A,*}$ is the tensor nuclear norm under the unitary transform domain, equal in value to the sum of the nuclear norms of all frontal slices of the new tensor obtained after multiplying each mode-3 fiber by the unitary transformation matrix $A$; $\|\cdot\|_1$ is the tensor L1 norm; $\|\cdot\|_F$ is the Frobenius norm; $\|\cdot\|_{\mathrm{STTV}}$ is the space-time total variation; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are balance coefficients, with $\lambda_1=p/\sqrt{m\cdot l}$, where the value range of $p$ is 1–10, the value range of $\lambda_2$ is 0.01–0.1, the value range of $\lambda_3$ is 100–200, $m$ is the maximum of the length and width of each frame of infrared image, and $l$ is the total number of frames.
In one possible design, the solving the objective function to obtain the target tensor includes:

constructing the unitary transformation matrix by using the zero-frequency component of the time domain of the space-time tensor; and

solving the objective function based on the unitary transformation matrix to obtain the target tensor.
In one possible design, the constructing the unitary transformation matrix using the zero-frequency component of the time domain of the space-time tensor includes:

performing a one-dimensional Fourier transform on each of the m×n mode-3 fibers of the space-time tensor to obtain a first space-time tensor, where m and n are respectively the length and width of the infrared image;

retaining the zero-frequency component in the first space-time tensor to obtain a second space-time tensor;

performing a one-dimensional inverse Fourier transform on each of the m×n mode-3 fibers of the second space-time tensor to obtain a third space-time tensor; and

performing singular value decomposition on the mode-3 unfolding matrix of the third space-time tensor, and taking the conjugate transpose of the obtained left singular matrix as the unitary transformation matrix.
In one possible design, the solving the objective function based on the unitary transformation matrix to obtain the target tensor includes:

introducing auxiliary variables $\mathcal{Z}$ and $\mathcal{V}=[\mathcal{V}_v,\mathcal{V}_h,\mathcal{V}_t]$ to rewrite the objective function as a first objective function represented by the following formula:

$$\min\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N},\;\mathcal{Z}=\mathcal{T},\;\mathcal{V}=D(\mathcal{T})$$

where $D(\cdot)$ is the space-time total variation operator;
The augmented Lagrangian function of the first objective function is:

$$L=\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2+\langle y_1,\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\rangle+\langle y_2,\mathcal{Z}-\mathcal{T}\rangle+\langle y,\mathcal{V}-D(\mathcal{T})\rangle+\frac{\beta}{2}\Big(\|\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\|_F^2+\|\mathcal{Z}-\mathcal{T}\|_F^2+\|\mathcal{V}-D(\mathcal{T})\|_F^2\Big)$$

where $y_1$, $y_2$, and $y=[y_v,y_h,y_t]$ are Lagrange multipliers, and $\beta$ is a penalty factor;
And solving the Lagrangian function, and solving the target tensor.
In one possible design, the solving the Lagrangian function to obtain the target tensor includes:

for the background tensor, in the (k+1)-th iteration, letting $\mathcal{W}^k=\mathcal{F}-\mathcal{T}^k-\mathcal{N}^k+y_1^k/\beta$;

multiplying each mode-3 fiber of $\mathcal{W}^k$ by the unitary transformation matrix $A$ to obtain $\hat{\mathcal{W}}^k$;

performing singular value shrinkage on each frontal slice of $\hat{\mathcal{W}}^k$ to obtain the updated tensor; then the calculation formula of $\mathcal{B}^{k+1}$ is:

$$\mathcal{B}^{k+1}=A^H\times_3\mathrm{SVT}_{1/\beta}\!\left(A\times_3\mathcal{W}^k\right)$$

where $A^H$ is the conjugate transpose of the unitary transformation matrix $A$, and $\mathrm{SVT}_{1/\beta}(\cdot)$ denotes the slice-wise singular value shrinkage with threshold $1/\beta$;
For the target tensor, in the (k+1)-th iteration, the soft-thresholding update of the sparse target component is:

$$\mathcal{Z}^{k+1}=\mathrm{sign}\!\left(\mathcal{T}^k+y_2^k/\beta\right)\odot\max\!\left(\left|\mathcal{T}^k+y_2^k/\beta\right|-\lambda_1/\beta,\,0\right)$$

where $\mathrm{sign}(\cdot)$ is the sign function;
and stopping calculating and outputting the target tensor in response to reaching a preset iteration stop condition.
In one possible design, the preset iteration stop condition is that the number of iterations reaches a preset maximum number of iterations, or that the relative error $\|\mathcal{F}-\mathcal{B}^{k+1}-\mathcal{T}^{k+1}\|_F/\|\mathcal{F}\|_F$ falls below a preset threshold $\varepsilon$.
In a second aspect, an embodiment of the present invention further provides an object detection apparatus based on unitary transformation, including:
the first construction module is used for constructing a space-time tensor of the multi-frame infrared image, wherein the space-time tensor comprises a background tensor and a target tensor;
The second construction module is used for constructing an objective function of the space-time tensor, wherein the objective function is obtained by constraining the background tensor by using a tensor nuclear norm under a unitary transform domain and constraining the target tensor by using a joint space-time total variation and an L1 norm;
the solving module is used for solving the objective function to obtain the target tensor;
And the reconstruction output module is used for reconstructing the solved target tensor into a plurality of target images of single frames and outputting the detection result of each target image.
In a third aspect, an embodiment of the present invention further provides a computing device, including a memory and a processor, where the memory stores a computer program, and the processor implements a method according to any embodiment of the present specification when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a method according to any of the embodiments of the present specification.
The embodiment of the invention provides a target detection method and apparatus based on unitary transformation, an electronic device, and a storage medium. The method constructs a space-time tensor from multiple frames of infrared images, so that data processing can be carried out in a high-dimensional space, the data structure of the original infrared images is retained, the time-domain information of the infrared image sequence is fully utilized, more image information is obtained, and prior information such as the shape and motion trajectory of the target is obtained. Then, the background tensor is constrained by the tensor nuclear norm under the unitary transform domain and the target tensor is constrained by the joint space-time total variation and L1 norm, thereby constructing an objective function of the space-time tensor; the objective function is then solved to obtain the target tensor. In these two steps, the tensor nuclear norm based on unitary transformation is used to characterize the low rank of the space-time tensor in the infrared image, which can yield a lower tensor rank than the tensor nuclear norm based on the Fourier transform. The space-time total variation is used to constrain the target tensor, fully describing the spatial and temporal continuity of the target tensor, enhancing its internal smoothness, and improving detection performance in complex scenes. Therefore, the method can accurately detect targets in complex backgrounds and has strong detection capability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a unitary transformation-based target detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of target detection using the method of FIG. 1;
FIG. 3 (a) is an infrared image containing a small target;
FIG. 3 (b) is a three-dimensional distribution of the infrared image shown in FIG. 3 (a);
FIG. 4 (a) is a target image obtained by performing target detection on FIG. 3 (a) using the method of the present invention;
FIG. 4 (b) is a three-dimensional distribution of the target image shown in FIG. 4 (a);
FIG. 5 (a) is a target image obtained by performing target detection on FIG. 3 (a) by using a local contrast (Local Contrast Measure, LCM) method;
FIG. 5 (b) is a three-dimensional distribution of the target image shown in FIG. 5 (a);
FIG. 6 (a) is a target image obtained by performing target detection on FIG. 3 (a) using the infrared patch-image (Infrared Patch-Image, IPI) method;
FIG. 6 (b) is a three-dimensional distribution of the target image shown in FIG. 6 (a);
Fig. 7 (a) is a target image obtained by performing target detection on fig. 3 (a) using the reweighted infrared patch tensor (Reweighted Infrared Patch Tensor, RIPT) method;
fig. 7 (b) is a three-dimensional distribution diagram of the target image shown in fig. 7 (a);
FIG. 8 (a) is a target image obtained by performing target detection on FIG. 3 (a) using the partial sum of the tensor nuclear norm (Partial Sum of the Tensor Nuclear Norm, PSTNN) method;
FIG. 8 (b) is a three-dimensional distribution of the target image shown in FIG. 8 (a);
FIG. 9 is a hardware architecture diagram of a computing device according to one embodiment of the invention;
Fig. 10 is a block diagram of an object detection apparatus based on unitary transformation according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
Specific implementations of the above concepts are described below.
Referring to fig. 1, an embodiment of the present invention provides a unitary transformation-based target detection method, which includes:
Step 100, constructing a space-time tensor of a multi-frame infrared image, wherein the space-time tensor comprises a background tensor and a target tensor;
Step 102, constructing an objective function of the space-time tensor, wherein the objective function is obtained by constraining the background tensor by using a tensor nuclear norm under a unitary transform domain and constraining the target tensor by using a joint space-time total variation and an L1 norm;

Step 104, solving the objective function to obtain the target tensor; and

Step 106, reconstructing the solved target tensor into a plurality of single-frame target images, and outputting a detection result of each target image.
The embodiment of the invention provides a target detection method based on unitary transformation. The method constructs a space-time tensor from multiple frames of infrared images, so that data processing can be carried out in a high-dimensional space, the data structure of the original infrared images is retained, the time-domain information of the infrared image sequence is fully utilized, more image information is obtained, and prior information such as the shape and motion trajectory of the target is obtained. Then, the background tensor is constrained by the tensor nuclear norm under the unitary transform domain and the target tensor is constrained by the joint space-time total variation and L1 norm, thereby constructing an objective function of the space-time tensor; the objective function is then solved to obtain the target tensor. In these two steps, the tensor nuclear norm based on unitary transformation is used to characterize the low rank of the space-time tensor in the infrared image, which can yield a lower tensor rank than the tensor nuclear norm based on the Fourier transform. The space-time total variation is used to constrain the target tensor, fully describing the spatial and temporal continuity of the target tensor, enhancing its internal smoothness, and improving detection performance in complex scenes. Therefore, the method can accurately detect targets in complex backgrounds and has strong detection capability.
In this embodiment, the targets in the infrared image may be targets of various sizes, and there is a good detection effect for small targets.
The manner in which the individual steps shown in fig. 1 are performed is described below.
First, for step 100, a space-time tensor of a multi-frame infrared image is constructed, the space-time tensor comprising a background tensor and a target tensor.
In this step, the multiple frames of infrared images comprise at least 3 frames, and the sampling time interval between two adjacent frames is smaller than a preset time interval. In this way, the time-domain information of the infrared image sequence and the continuity of the target in space and time can be fully utilized to obtain a more accurate detection result. Preferably, the multiple frames of infrared images are consecutive infrared images.
In this step, the target tensor and the background tensor are unknown tensors, and an objective function of the space-time tensor needs to be constructed to solve the two tensors.
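The construction in step 100 amounts to stacking the l frames along a third (time) dimension. A minimal NumPy sketch, assuming grayscale frames of equal size (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def build_spacetime_tensor(frames):
    """Stack l grayscale frames (each m x n) into an m x n x l space-time tensor."""
    return np.stack(frames, axis=2).astype(np.float64)

# Example: three synthetic 4x5 "infrared" frames
frames = [np.full((4, 5), float(i)) for i in range(3)]
F = build_spacetime_tensor(frames)
print(F.shape)  # (4, 5, 3)
```

Each mode-3 fiber `F[i, j, :]` is then the temporal profile of one pixel, which is what the later Fourier-transform and unitary-transform steps operate on.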
Then, for step 102, an objective function of the space-time tensor is constructed, where the objective function is obtained by constraining the background tensor with the tensor nuclear norm under the unitary transform domain and constraining the target tensor with the joint space-time total variation and the L1 norm.
In this step, a tensor nuclear norm based on unitary transformation is used to describe the low rank of the space-time tensor in the infrared image; the unitary transformation matrix used is related to the zero-frequency component along the time dimension of the space-time tensor, and a lower tensor rank is obtained than with the tensor nuclear norm based on the Fourier transform. The space-time total variation is used to constrain the target tensor, fully describing its spatial and temporal continuity, enhancing its internal smoothness, and improving detection performance in complex scenes. Compared with infrared small-target detection methods that use total variation to constrain the background tensor, this method can effectively reduce running time.
In some embodiments, the objective function is:

$$\min_{\mathcal{B},\mathcal{T},\mathcal{N}}\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{T}\|_1+\lambda_2\|\mathcal{T}\|_{\mathrm{STTV}}+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N}$$

where $\mathcal{B}$ is the background tensor, $\mathcal{T}$ is the target tensor, $\mathcal{F}$ is the space-time tensor, and $\mathcal{N}$ is the random noise; $\|\cdot\|_{A,*}$ is the tensor nuclear norm under the unitary transform domain, equal in value to the sum of the nuclear norms of all frontal slices of the new tensor obtained after multiplying each mode-3 fiber by the unitary transformation matrix $A$; $\|\cdot\|_1$ is the tensor L1 norm; $\|\cdot\|_F$ is the Frobenius norm; $\|\cdot\|_{\mathrm{STTV}}$ is the space-time total variation; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are balance coefficients, with $\lambda_1=p/\sqrt{m\cdot l}$, where the value range of $p$ is 1–10, the value range of $\lambda_2$ is 0.01–0.1, the value range of $\lambda_3$ is 100–200, $m$ is the maximum of the length and width of each frame of infrared image, and $l$ is the total number of frames.
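The tensor nuclear norm under the unitary transform domain described above can be sketched as follows, assuming A is an l×l unitary matrix acting on the mode-3 fibers (names are illustrative; with A the identity, the norm reduces to the plain sum of slice nuclear norms):

```python
import numpy as np

def mode3_product(X, A):
    """Multiply each mode-3 fiber X[i, j, :] of an m x n x l tensor by the l x l matrix A.
    result[i, j, k] = sum_t A[k, t] * X[i, j, t]"""
    return np.einsum('kt,ijt->ijk', A, X)

def unitary_tnn(X, A):
    """Tensor nuclear norm under the unitary transform domain: the sum of the
    matrix nuclear norms of all frontal slices of (A x3 X)."""
    Xh = mode3_product(X, A)
    return sum(np.linalg.norm(Xh[:, :, k], 'nuc') for k in range(Xh.shape[2]))

# With A = identity this reduces to the sum of slice nuclear norms of X itself.
X = np.zeros((4, 4, 2))
X[:, :, 0] = np.eye(4)   # nuclear norm of this slice is 4; the other slice is 0
A = np.eye(2)
print(unitary_tnn(X, A))  # 4.0
```

The choice of A is what distinguishes this norm from the Fourier-based tensor nuclear norm, where A would be the (normalized) DFT matrix.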
Then, for step 104, the objective function is solved to obtain the target tensor.
Of course, the background tensor can also be solved by solving the objective function.
In some implementations, solving the objective function to obtain the target tensor includes:
constructing the unitary transformation matrix by using the zero-frequency component of the time domain of the space-time tensor; and

solving the objective function based on the unitary transformation matrix to obtain the target tensor.
In some embodiments, constructing the unitary transformation matrix using the zero-frequency component of the time domain of the space-time tensor includes:

performing a one-dimensional Fourier transform on each of the m×n mode-3 fibers of the space-time tensor to obtain a first space-time tensor, where m and n are respectively the length and width of the infrared image;

retaining the zero-frequency component in the first space-time tensor to obtain a second space-time tensor;

performing a one-dimensional inverse Fourier transform on each of the m×n mode-3 fibers of the second space-time tensor to obtain a third space-time tensor, where the third space-time tensor can be used as an estimate of the background tensor; and

performing singular value decomposition on the mode-3 unfolding matrix of the third space-time tensor, and taking the conjugate transpose of the obtained left singular matrix as the unitary transformation matrix.
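The four steps above can be sketched in NumPy as follows. This is a sketch under the assumption that the mode-3 unfolding places the fibers as columns of an l×mn matrix; the function name and variables are illustrative:

```python
import numpy as np

def build_unitary_matrix(F):
    """Build the unitary transform matrix from the zero-frequency (temporal DC)
    component of the m x n x l space-time tensor F."""
    m, n, l = F.shape
    # 1) 1-D FFT along each of the m*n mode-3 fibers
    F1 = np.fft.fft(F, axis=2)
    # 2) retain only the zero-frequency component
    F2 = np.zeros_like(F1)
    F2[:, :, 0] = F1[:, :, 0]
    # 3) 1-D inverse FFT along the mode-3 fibers
    F3 = np.fft.ifft(F2, axis=2)
    # 4) SVD of the mode-3 unfolding (l x mn, fibers as columns); the conjugate
    #    transpose of the left singular matrix is the unitary transform matrix
    unfold3 = F3.reshape(m * n, l).T
    U, _, _ = np.linalg.svd(unfold3, full_matrices=True)
    return U.conj().T  # l x l unitary matrix

F = np.random.default_rng(0).random((8, 8, 3))
A = build_unitary_matrix(F)
print(np.allclose(A @ A.conj().T, np.eye(3)))  # True: A is unitary
```

Because only the temporal DC component is kept, the resulting A is adapted to the (slowly varying) background, which is what allows a lower tensor rank than a fixed Fourier basis.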
In some embodiments, solving the objective function based on the unitary transformation matrix to obtain the target tensor comprises:

introducing auxiliary variables $\mathcal{Z}$ and $\mathcal{V}=[\mathcal{V}_v,\mathcal{V}_h,\mathcal{V}_t]$ to rewrite the objective function as the first objective function shown in the following formula:

$$\min\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N},\;\mathcal{Z}=\mathcal{T},\;\mathcal{V}=D(\mathcal{T})$$

where $D(\cdot)$ is the space-time total variation operator, which comprises three components, $D_v$, $D_h$ and $D_t$, representing the difference operators in the vertical, horizontal and time-domain directions, respectively.
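The three directional difference operators can be sketched as follows, assuming circular first-order differences (one common convention for total-variation operators; the patent's own boundary convention is not specified here, and the names are illustrative):

```python
import numpy as np

def sttv_operator(T):
    """Space-time total variation operator D(T) = [Dv, Dh, Dt]:
    first-order circular differences along the vertical, horizontal
    and temporal directions of an m x n x l tensor."""
    Dv = np.roll(T, -1, axis=0) - T  # vertical (row) difference
    Dh = np.roll(T, -1, axis=1) - T  # horizontal (column) difference
    Dt = np.roll(T, -1, axis=2) - T  # temporal (frame) difference
    return Dv, Dh, Dt

T = np.arange(2 * 2 * 2, dtype=float).reshape(2, 2, 2)
Dv, Dh, Dt = sttv_operator(T)
print(Dt[0, 0, 0])  # 1.0: difference between consecutive frames at one pixel
```

Penalizing the L1 norm of all three components enforces smoothness of the target tensor in both space and time, which is the role of the space-time total variation term.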
The augmented Lagrangian function of the first objective function is:

$$L=\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2+\langle y_1,\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\rangle+\langle y_2,\mathcal{Z}-\mathcal{T}\rangle+\langle y,\mathcal{V}-D(\mathcal{T})\rangle+\frac{\beta}{2}\Big(\|\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\|_F^2+\|\mathcal{Z}-\mathcal{T}\|_F^2+\|\mathcal{V}-D(\mathcal{T})\|_F^2\Big)$$

where $y_1$, $y_2$, and $y=[y_v,y_h,y_t]$ are Lagrange multipliers, and $\beta$ is a penalty factor;
The Lagrangian function is then solved to obtain the target tensor.
In some implementations, solving the Lagrangian function to obtain the target tensor includes:

for the background tensor, in the (k+1)-th iteration, letting $\mathcal{W}^k=\mathcal{F}-\mathcal{T}^k-\mathcal{N}^k+y_1^k/\beta$;

multiplying each mode-3 fiber of $\mathcal{W}^k$ by the unitary transformation matrix $A$ to obtain $\hat{\mathcal{W}}^k$;

performing singular value shrinkage on each frontal slice of $\hat{\mathcal{W}}^k$ to obtain the updated tensor.

Taking the singular value shrinkage of the $L$-th frontal slice as an example: singular value decomposition is performed on the $L$-th frontal slice of $\hat{\mathcal{W}}^k$ to obtain a left singular matrix $U_L$, a right singular matrix $V_L$ and a diagonal matrix $\Sigma_L$, and the shrunk slice is

$$U_L\,\max(\Sigma_L-1/\beta,\,0)\,V_L^H.$$

All frontal slices of $\hat{\mathcal{W}}^k$ are traversed and shrunk in this way to obtain the updated tensor; thereby, the calculation formula of $\mathcal{B}^{k+1}$ is:

$$\mathcal{B}^{k+1}=A^H\times_3\mathrm{SVT}_{1/\beta}\!\left(A\times_3\mathcal{W}^k\right)$$

where $A^H$ is the conjugate transpose of the unitary transformation matrix $A$, and $\mathrm{SVT}_{1/\beta}(\cdot)$ denotes the slice-wise singular value shrinkage with threshold $1/\beta$;
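The slice-wise singular value shrinkage can be sketched as follows (names illustrative; the threshold `tau` plays the role of 1/β in the update):

```python
import numpy as np

def svt_slice(M, tau):
    """Singular value shrinkage of one frontal slice M with threshold tau:
    SVD, subtract tau from the singular values, clip at zero, recompose."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vh

def svt_tensor(W, tau):
    """Apply the shrinkage to every frontal slice W[:, :, k]."""
    out = np.empty_like(W)
    for k in range(W.shape[2]):
        out[:, :, k] = svt_slice(W[:, :, k], tau)
    return out

M = np.diag([3.0, 1.0, 0.2])   # singular values 3, 1, 0.2
S = svt_slice(M, 0.5)
print(np.round(np.diag(S), 2))  # [2.5  0.5  0. ]
```

Singular value shrinkage is the proximal operator of the matrix nuclear norm, which is why it appears in the background-tensor (low-rank) subproblem.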
For the target tensor, in the (k+1)-th iteration, the soft-thresholding update of the sparse target component is:

$$\mathcal{Z}^{k+1}=\mathrm{sign}\!\left(\mathcal{T}^k+y_2^k/\beta\right)\odot\max\!\left(\left|\mathcal{T}^k+y_2^k/\beta\right|-\lambda_1/\beta,\,0\right)$$

where $\mathrm{sign}(\cdot)$ is the sign function;
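The sign-function update above is element-wise soft thresholding, the proximal operator of the L1 norm. A minimal sketch (names illustrative):

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft thresholding: sign(X) * max(|X| - tau, 0).
    Entries with magnitude below tau are zeroed; the rest shrink toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

X = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(soft_threshold(X, 0.5))  # [-1.5  0.   0.   0.   1.5]
```

This is what makes the recovered target tensor sparse: weak residual clutter falls below the threshold and is suppressed, while strong target responses survive.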
and stopping calculating and outputting the target tensor in response to reaching a preset iteration stop condition. Of course, after the iterative computation is completed, the user can also solve for the background tensor as required.
In some embodiments, the preset iteration stop condition is that the number of iterations reaches a preset maximum number of iterations, or that the relative error $\|\mathcal{F}-\mathcal{B}^{k+1}-\mathcal{T}^{k+1}\|_F/\|\mathcal{F}\|_F$ falls below a preset threshold $\varepsilon$.
Finally, for step 106, reconstructing the solved target tensor into a plurality of target images of single frames, and outputting the detection result of each target image.
The advantageous effects of the method according to the invention are described below in a specific example.
As shown in fig. 2, which is a flowchart of the method of the present invention, the number of infrared images is 3 and the size of each infrared image is 256×256; one infrared image containing a small target and its three-dimensional distribution diagram are shown in FIGS. 3 (a) and 3 (b). In this infrared image, the target center is located at (145, 115). The imaging background is complex, comprising forests, grass, farmland, and the like. Because the ground produces much strong radiation and noise, the image also contains many high-brightness areas and bright noise points that interfere with detection; without the inter-frame information of the infrared image sequence, the target is difficult to detect.
The inventors processed the infrared image shown in FIG. 3 (a) with the method of the present invention, the LCM method, the IPI method, the RIPT method and the PSTNN method, respectively; the detection results are shown in FIGS. 4 (a) and 4 (b) through FIGS. 8 (a) and 8 (b). As can be seen from FIGS. 4 (a) and 4 (b), with the method of the present invention the background is effectively suppressed and only the target remains in the image. As can be seen from FIGS. 5 (a) and 5 (b), the image obtained by the LCM method has an obvious "blocky" effect and is very sensitive to noise. As can be seen from FIGS. 6 (a) and 6 (b) through FIGS. 8 (a) and 8 (b), in the detection results of the IPI, RIPT and PSTNN methods, pixels other than the small target also have high gray values, which easily causes false alarms; these three methods have difficulty fully overcoming the influence of bright noise points and clutter interference on the target.
Therefore, the present application constructs the space-time tensor and uses the space-time total variation to constrain the target tensor in time, effectively utilizing time-domain information and reducing the influence of noise and highlight regions, so that the target can be detected well.
As shown in fig. 9 and 10, an embodiment of the present invention provides an object detection apparatus based on unitary transformation. The apparatus embodiments may be implemented by software, or may be implemented by hardware or a combination of hardware and software. In terms of hardware, as shown in fig. 9, a hardware architecture diagram of a computing device where an object detection apparatus based on unitary transformation is located according to an embodiment of the present invention is shown, where in addition to a processor, a memory, a network interface, and a nonvolatile memory shown in fig. 9, the computing device where the apparatus is located in the embodiment may generally include other hardware, such as a forwarding chip responsible for processing a packet, and so on. Taking a software implementation as an example, as shown in fig. 10, as a device in a logic sense, the device is formed by reading a corresponding computer program in a nonvolatile memory into a memory by a CPU of a computing device where the device is located. The object detection device based on unitary transformation provided in this embodiment includes:
a first construction module 1000, configured to construct a space-time tensor of a multi-frame infrared image, where the space-time tensor includes a background tensor and a target tensor;
A second construction module 1002, configured to construct an objective function of the space-time tensor, where the objective function is obtained by constraining the background tensor with a tensor nuclear norm under the unitary transform domain and constraining the target tensor with a joint space-time total variation and an L1 norm;
a solving module 1004, configured to solve the objective function to obtain the target tensor; and
And a reconstruction output module 1006, configured to reconstruct the solved target tensor into a plurality of target images of a single frame, and output a detection result of each target image.
In an embodiment of the present invention, the first building block 1000 may be used to perform step 100 in the above-described method embodiment, the second building block 1002 may be used to perform step 102 in the above-described method embodiment, the solving block 1004 may be used to perform step 104 in the above-described method embodiment, and the reconstruction output block 1006 may be used to perform step 106 in the above-described method embodiment.
In one embodiment of the invention, the objective function is:

$$\min_{\mathcal{B},\mathcal{T},\mathcal{N}}\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{T}\|_1+\lambda_2\|\mathcal{T}\|_{\mathrm{STTV}}+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N}$$

where $\mathcal{B}$ is the background tensor, $\mathcal{T}$ is the target tensor, $\mathcal{F}$ is the space-time tensor, and $\mathcal{N}$ is the random noise; $\|\cdot\|_{A,*}$ is the tensor nuclear norm under the unitary transform domain, equal in value to the sum of the nuclear norms of all frontal slices of the new tensor obtained after multiplying each mode-3 fiber by the unitary transformation matrix $A$; $\|\cdot\|_1$ is the tensor L1 norm; $\|\cdot\|_F$ is the Frobenius norm; $\|\cdot\|_{\mathrm{STTV}}$ is the space-time total variation; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are balance coefficients, with $\lambda_1=p/\sqrt{m\cdot l}$, where the value range of $p$ is 1–10, the value range of $\lambda_2$ is 0.01–0.1, the value range of $\lambda_3$ is 100–200, $m$ is the maximum of the length and width of each frame of infrared image, and $l$ is the total number of frames.
In an embodiment of the present invention, the solving module 1004 is configured to perform:
constructing the unitary transformation matrix by using the zero-frequency component of the time domain of the space-time tensor; and

solving the objective function based on the unitary transformation matrix to obtain the target tensor.
In an embodiment of the present invention, constructing the unitary transformation matrix using the zero-frequency component of the time domain of the space-time tensor includes:

performing a one-dimensional Fourier transform on each of the m×n mode-3 fibers of the space-time tensor to obtain a first space-time tensor, where m and n are respectively the length and width of the infrared image;

retaining the zero-frequency component in the first space-time tensor to obtain a second space-time tensor;

performing a one-dimensional inverse Fourier transform on each of the m×n mode-3 fibers of the second space-time tensor to obtain a third space-time tensor; and

performing singular value decomposition on the mode-3 unfolding matrix of the third space-time tensor, and taking the conjugate transpose of the obtained left singular matrix as the unitary transformation matrix.
In the embodiment of the invention, solving the objective function based on the unitary transformation matrix to obtain the target tensor includes:

introducing auxiliary variables $\mathcal{Z}$ and $\mathcal{V}=[\mathcal{V}_v,\mathcal{V}_h,\mathcal{V}_t]$ to rewrite the objective function as the first objective function shown in the following formula:

$$\min\;\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2\quad\text{s.t.}\;\mathcal{F}=\mathcal{B}+\mathcal{T}+\mathcal{N},\;\mathcal{Z}=\mathcal{T},\;\mathcal{V}=D(\mathcal{T})$$

where $D(\cdot)$ is the space-time total variation operator;
The augmented Lagrangian function of the first objective function is:

$$L=\|\mathcal{B}\|_{A,*}+\lambda_1\|\mathcal{Z}\|_1+\lambda_2\|\mathcal{V}\|_1+\lambda_3\|\mathcal{N}\|_F^2+\langle y_1,\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\rangle+\langle y_2,\mathcal{Z}-\mathcal{T}\rangle+\langle y,\mathcal{V}-D(\mathcal{T})\rangle+\frac{\beta}{2}\Big(\|\mathcal{F}-\mathcal{B}-\mathcal{T}-\mathcal{N}\|_F^2+\|\mathcal{Z}-\mathcal{T}\|_F^2+\|\mathcal{V}-D(\mathcal{T})\|_F^2\Big)$$

where $y_1$, $y_2$, and $y=[y_v,y_h,y_t]$ are Lagrange multipliers, and $\beta$ is a penalty factor;
The Lagrangian function is then solved to obtain the target tensor.
In an embodiment of the present invention, solving the Lagrangian function to obtain the target tensor includes:
for the background tensor, at the (k+1)-th iteration, let
multiplying each mode-3 fiber of the resulting tensor by the unitary transformation matrix A yield the transformed tensor;
applying singular value shrinkage to each frontal slice of the transformed tensor gives the update;
the background update is then calculated according to the formula:
where A^H is the conjugate transpose of the unitary transformation matrix A, and i and j are natural numbers greater than 0;
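The per-slice singular value shrinkage used in the background update can be sketched as follows. The threshold parameter `tau` and the function name are assumptions (the patent ties the shrinkage amount to the penalty factor, whose formula is not reproduced here):

```python
import numpy as np

def svt_frontal_slices(X, tau):
    """Apply singular value shrinkage to every frontal slice X[:, :, k]:
    SVD the slice, subtract tau from the singular values, clamp at zero,
    and rebuild the slice."""
    Y = np.empty_like(X)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(X[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)   # shrink singular values
        Y[:, :, k] = (U * s) @ Vh      # equivalent to U @ diag(s) @ Vh
    return Y
```

With `tau = 0` the operation is an identity (up to numerical error); larger thresholds progressively suppress small singular values, enforcing low rank slice by slice.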
For the target tensor, at the (k+1)-th iteration, the update is calculated as follows:
where sign(·) is the sign function;
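The sign-function update for the target tensor has the form of the standard soft-thresholding (L1 proximal) operator; under that assumption it reads as follows, where `lam` is an assumed threshold combining the balance coefficient and the penalty factor:

```python
import numpy as np

def soft_threshold(X, lam):
    """Elementwise sign(X) * max(|X| - lam, 0): entries with magnitude
    below lam are zeroed, the rest are shrunk toward zero by lam."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)
```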
and stopping the calculation and outputting the target tensor when a preset iteration stop condition is reached.
In an embodiment of the present invention, the preset iteration stop condition is that the number of iterations reaches a preset maximum number of iterations, or
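Since the second stop condition's formula is not reproduced here, a common stand-in is a relative-change test combined with the iteration cap; this sketch is an assumption, not the patent's exact criterion:

```python
import numpy as np

def stop_iterating(prev, curr, k, max_iter=200, tol=1e-6):
    """Assumed stop test: the iteration cap is reached, or the relative
    Frobenius-norm change between successive iterates falls below tol."""
    rel = np.linalg.norm(curr - prev) / max(np.linalg.norm(prev), 1e-12)
    return k >= max_iter or rel < tol
```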
It will be appreciated that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the unitary-transform-based object detection apparatus. In other embodiments of the invention, the apparatus may include more or fewer components than illustrated, may combine certain components, may split certain components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of the two.
The information exchanged between the modules of the above apparatus, and their execution processes, are based on the same conception as the method embodiments of the present invention; for details, refer to the description of the method embodiments, which is not repeated here.
The embodiment of the invention also provides a computing device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the target detection method based on unitary transformation in any embodiment of the invention when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program when executed by a processor causes the processor to execute the target detection method based on unitary transformation in any embodiment of the invention.
Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and the computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored in the storage medium.
In this case, the program code read from the storage medium itself realizes the functions of any of the above embodiments, so the program code and the storage medium storing it form part of the present invention.
Examples of storage media for providing program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs, DVD+RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read from the storage medium may be written into a memory provided on an expansion board inserted into the computer, or into a memory provided in an expansion module connected to the computer, after which a CPU or the like mounted on the expansion board or module performs part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above method embodiments may be carried out by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; such storage media include ROM, RAM, magnetic disks, optical disks, and other media capable of storing program code.
It should be noted that the above embodiments merely illustrate, rather than limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to these embodiments, those skilled in the art will understand that the described technical solutions may be modified, or some of their technical features equivalently replaced, without departing from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A unitary transform-based target detection method, comprising:
constructing a space-time tensor of a multi-frame infrared image, wherein the space-time tensor comprises a background tensor and a target tensor;
Constructing an objective function of the space-time tensor, wherein the objective function is obtained by constraining the background tensor with the tensor nuclear norm in a unitary transform domain and constraining the target tensor with joint space-time total variation and L1 norms;
Reconstructing the solved target tensor into a plurality of single-frame target images, and outputting a detection result for each target image;
The solving the objective function to obtain the target tensor includes:
constructing a unitary transformation matrix using the zero-frequency components of the time domain of the space-time tensor;
solving the objective function based on the unitary transformation matrix to obtain the target tensor;
the constructing the unitary transformation matrix using the zero-frequency components of the time domain of the space-time tensor includes:
performing a one-dimensional Fourier transform on each of the m×n mode-3 fibers of the space-time tensor to obtain a first space-time tensor, where m and n are the length and width of the infrared image, respectively;
retaining only the zero-frequency component of the first space-time tensor to obtain a second space-time tensor;
performing a one-dimensional inverse Fourier transform on each of the m×n mode-3 fibers of the second space-time tensor to obtain a third space-time tensor;
and performing singular value decomposition on the mode-3 unfolding matrix of the third space-time tensor, and taking the conjugate transpose of the resulting left singular matrix as the unitary transformation matrix.
2. The method of claim 1, wherein the objective function is:
In the formula, the symbols denote, respectively: the background tensor, the target tensor, the space-time tensor, and the random noise; the tensor nuclear norm under the unitary transform domain, which is equal in value to the sum of the nuclear norms of all frontal slices of the new tensor obtained by multiplying each mode-3 fiber by the unitary transformation matrix; the L1 norm of a tensor; the Frobenius norm; and the space-time total variation. The three balance coefficients are set such that the parameter p in the first ranges from 1 to 10, the second ranges from 0.01 to 0.1, and the third ranges from 100 to 200, where m is the maximum of the length and width of each frame of the infrared image and the remaining symbol is the total number of frames of the infrared image.
3. The method of claim 1, wherein said solving the objective function based on the unitary transformation matrix to obtain the target tensor comprises:
introducing auxiliary variables to simplify the objective function into a first objective function represented by the following formula:
where the operator introduced in the formula is the space-time total variation operator;
the augmented Lagrangian function of the first objective function is:
where the introduced symbols denote the Lagrangian multipliers and the penalty factor, respectively;
and solving the Lagrangian function to obtain the target tensor.
4. The method according to claim 3, wherein said solving the Lagrangian function to obtain the target tensor comprises:
for the background tensor, at the (k+1)-th iteration, let;
multiplying each mode-3 fiber of the resulting tensor by the unitary transformation matrix yields the transformed tensor;
applying singular value shrinkage to each frontal slice of the transformed tensor gives the update;
the background update is then calculated as follows:
where A^H is the conjugate transpose matrix of the unitary transformation matrix A, and i and j are natural numbers greater than 0;
For the target tensor, at the (k+1)-th iteration, the update is calculated as follows:
where sign(·) is the sign function;
and stopping the calculation and outputting the target tensor when a preset iteration stop condition is reached.
5. The method of claim 4, wherein the preset iteration stop condition is that the number of iterations reaches a preset maximum number of iterations, or.
6. A unitary-transform-based object detection apparatus, comprising:
a first construction module, configured to construct a space-time tensor of multiple frames of infrared images, wherein the space-time tensor comprises a background tensor and a target tensor;
a second construction module, configured to construct an objective function of the space-time tensor, wherein the objective function is obtained by constraining the background tensor with the tensor nuclear norm in a unitary transform domain and constraining the target tensor with joint space-time total variation and L1 norms;
a solving module, configured to solve the objective function to obtain the target tensor;
a reconstruction output module, configured to reconstruct the solved target tensor into a plurality of single-frame target images and output a detection result for each target image;
The solving module is configured to perform the following operations:
constructing a unitary transformation matrix using the zero-frequency components of the time domain of the space-time tensor;
solving the objective function based on the unitary transformation matrix to obtain the target tensor;
the constructing the unitary transformation matrix using the zero-frequency components of the time domain of the space-time tensor includes:
performing a one-dimensional Fourier transform on each of the m×n mode-3 fibers of the space-time tensor to obtain a first space-time tensor, where m and n are the length and width of the infrared image, respectively;
retaining only the zero-frequency component of the first space-time tensor to obtain a second space-time tensor;
performing a one-dimensional inverse Fourier transform on each of the m×n mode-3 fibers of the second space-time tensor to obtain a third space-time tensor;
and performing singular value decomposition on the mode-3 unfolding matrix of the third space-time tensor, and taking the conjugate transpose of the resulting left singular matrix as the unitary transformation matrix.
7. A computing device comprising a memory and a processor, the memory having a computer program stored therein, wherein the processor implements the method of any one of claims 1-5 when executing the computer program.
8. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211339176.0A CN115511006B (en) | 2022-10-28 | 2022-10-28 | Target detection method, device, electronic equipment and storage medium based on unitary transform |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115511006A CN115511006A (en) | 2022-12-23 |
| CN115511006B true CN115511006B (en) | 2025-12-12 |
Family
ID=84512876
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211339176.0A Active CN115511006B (en) | 2022-10-28 | 2022-10-28 | Target detection method, device, electronic equipment and storage medium based on unitary transform |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115511006B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120803868B (en) * | 2025-07-02 | 2026-02-03 | 军事科学院系统工程研究院后勤科学与技术研究所 | Test evaluation method and device for industrial park integrated management system |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019222150A1 (en) * | 2018-05-15 | 2019-11-21 | Lightmatter, Inc. | Algorithms for training neural networks with photonic hardware accelerators |
| CN113256585A (en) * | 2021-05-24 | 2021-08-13 | 北京理工大学 | Real-time detection method for small infrared video moving target based on space-time tensor decomposition |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8098196B2 (en) * | 2006-04-11 | 2012-01-17 | Research Foundation Of The City University Of New York | Time-compressed clutter covariance signal processor |
| WO2014031499A1 (en) * | 2012-08-18 | 2014-02-27 | Halliburton Energy Services, Inc. | Mud pulse telemetry systems and methods using receive array processing |
| CN111191680B (en) * | 2019-10-09 | 2022-08-26 | 南京邮电大学 | Target detection method based on non-convex motion assistance |
| CN114200433B (en) * | 2021-12-10 | 2025-06-17 | 中国传媒大学 | A Tensor-Based Angle Estimation Method in Bistatic MIMO Radar |
| CN114841888B (en) * | 2022-05-16 | 2023-03-28 | 电子科技大学 | Visual data completion method based on low-rank tensor ring decomposition and factor prior |
- 2022-10-28: Application CN202211339176.0A filed in China; granted as patent CN115511006B, status Active.
Also Published As
| Publication number | Publication date |
|---|---|
| CN115511006A (en) | 2022-12-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Guo et al. | NERNet: Noise estimation and removal network for image denoising | |
| Amiaz et al. | Coarse to over-fine optical flow estimation | |
| Nair et al. | Fast high-dimensional bilateral and nonlocal means filtering | |
| Chountasis et al. | Applications of the Moore‐Penrose Inverse in Digital Image Restoration | |
| Karimi et al. | A convex variational method for super resolution of SAR image with speckle noise | |
| Zeng et al. | A robust variational approach to super-resolution with nonlocal TV regularisation term | |
| Tan et al. | Pixelwise estimation of signal‐dependent image noise using deep residual learning | |
| Tian et al. | Image compressed sensing using multi-scale residual generative adversarial network | |
| CN116485834A (en) | Infrared weak and small target detection method, device, equipment and medium | |
| CN115511006B (en) | Target detection method, device, electronic equipment and storage medium based on unitary transform | |
| Wang et al. | HPETC: History priority enhanced tensor completion for network distance measurement | |
| Chountasis et al. | Digital image reconstruction in the spectral domain utilizing the Moore‐Penrose inverse | |
| Liu et al. | Image inpainting algorithm based on tensor decomposition and weighted nuclear norm | |
| Palummo et al. | Functional principal component analysis for incomplete space–time data | |
| Jain et al. | Iterative solvers for image denoising with diffusion models: A comparative study | |
| Maji et al. | Reconstructing an image from its edge representation | |
| Ghosh et al. | Phase unwrapping algorithm using breadth-first-search and multi-level segmentation of phase quality interval in digital holography | |
| CN115346004B (en) | Remote sensing time sequence data reconstruction method combining space-time reconstruction and CUDA acceleration | |
| Ghasemi-Falavarjani et al. | Particle filter based multi-frame image super resolution | |
| Kulkarni et al. | Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction | |
| Rogass et al. | Performance of correlation approaches for the evaluation of spatial distortion reductions | |
| Hao et al. | Gpu-accelerated infrared patch-image model for small target detection | |
| Shao et al. | Structural similarity-optimal total variation algorithm for image denoising | |
| Wang et al. | Hyperspectral image denoising via nonconvex logarithmic penalty | |
| Zhang et al. | Parameterized reconstruction with random scales for radio synthesis imaging |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||