
CN119167146A - A method and device for automatic modulation recognition of small sample signals - Google Patents

A method and device for automatic modulation recognition of small sample signals Download PDF

Info

Publication number
CN119167146A
CN119167146A CN202411241727.9A
Authority
CN
China
Prior art keywords
support
level
sample
feature
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411241727.9A
Other languages
Chinese (zh)
Inventor
马昭
方胜良
范有臣
李世忠
李石磊
刘冰雁
温晓敏
王孟涛
徐照菁
侯顺虎
李钰海
王梦阳
马淑丽
刘涵
陈晓宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Engineering University Of Pla Military Space Force
Original Assignee
Aerospace Engineering University Of Pla Military Space Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Engineering University Of Pla Military Space Force filed Critical Aerospace Engineering University Of Pla Military Space Force
Priority to CN202411241727.9A priority Critical patent/CN119167146A/en
Publication of CN119167146A publication Critical patent/CN119167146A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of signal modulation recognition, and in particular discloses a method and device for automatic modulation recognition of small-sample signals. The method comprises: extracting tasks from a source-domain training set to obtain a support set and a query set; constructing a signal category inference model; feeding the support-set samples and query-set samples into a multi-level embedding function; averaging the per-level feature maps of each category's support samples to obtain per-level feature maps of each support category; computing per-level similarity relation scores between the query sample and the support-category feature maps via a multi-level relation metric module and weighting and summing them into a final similarity score; reconstructing the support-sample feature maps with a decoder so that a classification error and a reconstruction error can be computed; back-propagating the errors and updating the model parameters; and performing signal recognition on target-domain data with the trained model. The method aims to resolve the small-sample dilemma that conventional deep learning methods face in the AMR problem.

Description

Automatic modulation recognition method and device for small-sample signals
Technical Field
The invention relates to the field of signal modulation recognition, and in particular to a method and device for automatic modulation recognition of small-sample signals.
Background
Automatic modulation recognition (AMR) of signals is a key technology in the field of cognitive communication. Its aim is to determine the modulation scheme of a received radio signal; it thus forms the basis for further processing of non-cooperative signals, is an essential prerequisite for efficient spectrum sensing, spectrum understanding, and spectrum utilization in non-cooperative communication scenarios, and has been one of the major research topics in wireless communication in recent years.
Conventional automatic modulation recognition methods fall into two categories: those based on maximum-likelihood theory and those based on expert features. Likelihood-based methods classify signals by applying decision criteria to their statistical characteristics. Expert-feature methods manually design features, transform the signal into a feature space, and then train a classifier on it. Both approaches require substantial prior knowledge and typically offer limited accuracy and generality.
With the development of deep learning, researchers have increasingly exploited its strong feature extraction and representation capabilities in automatic modulation recognition, with considerable results. O'Shea et al. first proposed, in 2016, a modulation recognition method that processes raw in-phase and quadrature (IQ) signals directly with a convolutional neural network (CNN). Subsequently, recurrent neural networks, long short-term memory networks, denoising autoencoders, and other architectures were applied to the AMR problem, continually improving recognition accuracy. However, the performance of deep learning methods depends heavily on large amounts of high-quality labeled data, and in practical non-cooperative scenarios it is unrealistic to obtain sufficient high-quality signal samples in advance. Studying how to solve the small-sample problem in signal modulation recognition is therefore important.
Disclosure of Invention
In view of the foregoing, it is an object of the present invention to provide a small sample signal automatic modulation recognition method.
A second object of the present invention is to provide an automatic modulation recognition apparatus for small sample signals.
The first technical scheme adopted by the invention is as follows:
S1, extracting C-Way K-shot tasks in a source domain training set to obtain a support set and a query set;
S2, constructing a signal class reasoning model, respectively inputting a sample of the support set and a sample of the query set into a multi-level embedding function of the signal class reasoning model to obtain feature diagrams of each level of the support sample and feature diagrams of each level of the query sample, and respectively averaging the feature diagrams of each level of the support sample of each class to obtain feature diagrams of each level of the support class;
S3, calculating similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories through a multi-level relation measurement module of the signal category reasoning model, and carrying out weighted summation on the similarity relation scores of each level to obtain a similarity score between each support category corresponding to the query sample, so as to determine a prediction label of the query sample;
S4, reconstructing the feature map of the support sample through a decoder of the signal class reasoning model to obtain a reconstructed support sample;
S5, based on the prediction label and the reconstruction support sample, calculating to obtain a classification error and a reconstruction error;
S6, carrying out back propagation on the classification errors and the reconstruction errors, and updating the signal class reasoning model parameters;
S7, repeating the steps S1-S6 to perform iterative training on the signal category reasoning model, and performing signal recognition on the target domain data by using the trained signal category reasoning model to obtain a signal recognition result.
Optionally, the S2 includes:
S21, inputting each support sample in the support set into the multi-stage embedding function to obtain each stage of feature graphs of the support samples;
S22, averaging all levels of feature graphs of the support samples of the same class to obtain all levels of feature graphs of each support class;
S23, sending each query sample in the query set into a multi-stage embedding function to obtain each stage of feature graphs of the query samples.
Optionally, the step S3 includes:
S31, splicing the first-level query sample feature map and the first-level support category feature map, and calculating by using a first-level relation sub-module in the multi-level relation measurement module to obtain a first-level similarity feature map and a first-level similarity relation score;
S32, splicing the previous-stage similarity feature map, the same-stage query feature map and the same-stage support category feature map to serve as input of a next-stage relationship measurement module until similarity relationship scores between each-stage feature map of the query sample and each-stage feature map of the support category are calculated;
and S33, carrying out weighted summation on similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories to obtain similarity relation scores between the query sample and each support category, and determining a prediction label of the query sample according to the highest similarity relation score.
Optionally,
the v-th level similarity feature map is expressed as:

$$F^{v}_{i,j} = g^{v}_{\varphi}\big(\big[f^{v}_{\theta}(x_i),\ f^{v}_{\theta}(x_j),\ F^{v-1}_{i,j}\big]\big)$$

and the first-level similarity feature map is expressed as:

$$F^{1}_{i,j} = g^{1}_{\varphi}\big(\big[f^{1}_{\theta}(x_i),\ f^{1}_{\theta}(x_j)\big]\big)$$

where $f^{v}_{\theta}(x_i)$ is the v-th level support sample feature map, $f^{v}_{\theta}(x_j)$ is the v-th level query sample feature map, and $F^{v-1}_{i,j}$ is the similarity feature map output by the (v-1)-th level relation module; $f_{\theta}$ denotes the multi-level embedding function and $g_{\varphi}$ the multi-level relation metric module.
Optionally,
the similarity score between the query sample and each support category is expressed as:

$$r_{c,j} = \sum_{v=1}^{V} \gamma_v\, r^{v}_{c,j}$$

and the v-th level similarity feature map relation score is:

$$r^{v}_{c,j} = q\big(F^{v}_{c,j}\big)$$

where $\alpha_v$ is a fully-connected layer, $C$ is a constant, and $\gamma_v$ is the scalar attention weight obtained from $\alpha_v$ and $C$.
The second technical scheme adopted by the invention is a small-sample signal automatic modulation recognition device, comprising a task extraction module, used for C-way K-shot task extraction in a source-domain training set to obtain a support set and a query set;
The feature map generation module is used for constructing a signal class reasoning model, respectively inputting the samples of the support set and the samples of the query set into a multi-stage embedding function of the signal class reasoning model to obtain feature maps of all stages of the support sample and feature maps of all stages of the query sample, and respectively averaging the feature maps of all stages of the support sample of each class to obtain feature maps of all stages of each support class;
The multi-level relation measurement module is used for calculating similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories through the multi-level relation measurement module of the signal category reasoning model, carrying out weighted summation on the similarity relation scores of each level to obtain a similarity score between each support category corresponding to the query sample, and further determining a prediction label of the query sample;
the reconstruction module is used for reconstructing the feature map of the support sample through the decoder of the signal class reasoning model to obtain a reconstructed support sample;
the error calculation module is used for calculating to obtain classification errors and reconstruction errors based on the prediction labels and the reconstruction support samples;
the back propagation module is used for carrying out back propagation on the classification errors and the reconstruction errors and updating the signal class reasoning model parameters;
And the model training module is used for repeatedly carrying out iterative training on the signal category reasoning model, and carrying out signal recognition on the target domain data by utilizing the trained signal category reasoning model to obtain a signal recognition result.
Optionally, the feature map generating module includes:
the support feature map module is used for inputting each support sample in the support set into the multi-stage embedding function to obtain each stage feature map of the support sample;
the average value calculation module is used for calculating the average value of all levels of feature images of the support samples in the same category to obtain all levels of feature images of each support category;
And the query feature map module is used for sending each query sample in the query set into a multi-stage embedding function to obtain each stage of feature map of the query sample.
Optionally, the multi-level relationship metric module includes:
the first-level similarity score calculation module is used for splicing the first-level query sample feature images and the first-level support category feature images, and calculating by using a first-level relationship sub-module in the multi-level relationship measurement module to obtain a first-level similarity feature image and a first-level similarity relationship score;
The similarity score calculation module at each level is used for splicing the previous-level similarity feature map, the same-level query feature map and the same-level support category feature map to be used as the input of the relation measurement module at the next level until the similarity relation score between the feature map at each level of the query sample and the feature map at each level of the support category is calculated;
And the weighted summation module is used for weighted summation of similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories to obtain the similarity relation score between the query sample and each support category, and determining a prediction label of the query sample according to the highest similarity relation score.
Optionally,
the v-th level similarity feature map is expressed as:

$$F^{v}_{i,j} = g^{v}_{\varphi}\big(\big[f^{v}_{\theta}(x_i),\ f^{v}_{\theta}(x_j),\ F^{v-1}_{i,j}\big]\big)$$

and the first-level similarity feature map is expressed as:

$$F^{1}_{i,j} = g^{1}_{\varphi}\big(\big[f^{1}_{\theta}(x_i),\ f^{1}_{\theta}(x_j)\big]\big)$$

where $f^{v}_{\theta}(x_i)$ is the v-th level support feature map, $f^{v}_{\theta}(x_j)$ is the v-th level query feature map, and $F^{v-1}_{i,j}$ is the similarity feature map.
Optionally,
the similarity score between the query sample and each support category is expressed as:

$$r_{c,j} = \sum_{v=1}^{V} \gamma_v\, r^{v}_{c,j}$$

and the v-th level similarity feature map relation score is:

$$r^{v}_{c,j} = q\big(F^{v}_{c,j}\big)$$

where $\alpha_v$ is a fully-connected layer, $C$ is a constant, and $\gamma_v$ is the scalar attention weight obtained from $\alpha_v$ and $C$.
The beneficial effects of the technical scheme are that:
The invention extracts C-way K-shot tasks from a source-domain training set to obtain a support set and a query set; constructs a signal category inference model; feeds the support-set samples and query-set samples into the model's multi-level embedding function to obtain per-level feature maps of the support samples and the query samples; averages the per-level feature maps of each category's support samples to obtain per-level feature maps of each support category; computes similarity relation scores between each level of query-sample feature map and each level of support-category feature map via the model's multi-level relation metric module; weights and sums the per-level scores into a similarity score between the query sample and each support category, thereby determining the query sample's predicted label; reconstructs the support-sample feature maps through the model's decoder to obtain reconstructed support samples; computes the classification error and reconstruction error from the predicted labels and reconstructed support samples; back-propagates the errors to update the model parameters; and, after iterative training, performs signal recognition on target-domain data with the trained model to obtain the recognition result. The constructed MCRN-CR framework resolves the small-sample dilemma that conventional deep learning methods face in the AMR problem.
Drawings
FIG. 1 is a flow chart of the method for automatically modulating and identifying small sample signals;
Fig. 2 is a schematic structural diagram of the automatic small sample signal modulation and identification device provided by the invention;
FIG. 3 is a block diagram of the MCRN-CR according to the present invention;
FIG. 4 is a block diagram of an encoder;
FIG. 5 is a block diagram of a decoder;
FIG. 6 is a graph of recognition accuracy versus all signal to noise ratios for different models.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The detailed description and drawings illustrate the principles of the invention but do not limit its scope: the invention is not restricted to the preferred embodiments described, and its scope is defined by the claims.
In the description of the present invention, it should be noted that, unless otherwise indicated, "a plurality" means two or more; the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of these terms in the present invention will be understood by those of ordinary skill in the art as appropriate.
Example 1
An embodiment of the invention provides a small-sample signal automatic modulation recognition method, as shown in fig. 1 and 3: S1, C-way K-shot tasks are extracted from the source-domain training set to obtain a support set and a query set; S2, a signal category inference model is constructed, the support-set samples and query-set samples are respectively fed into its multi-level embedding function to obtain per-level feature maps of the support samples and the query samples, and the per-level feature maps of each category's support samples are averaged to obtain per-level feature maps of each support category; S3, the model's multi-level relation metric module computes similarity relation scores between each level of query-sample feature map and each level of support-category feature map, and the per-level scores are weighted and summed into a similarity score between the query sample and each support category, from which the query sample's predicted label is determined; S4, the model's decoder reconstructs the support-sample feature maps to obtain reconstructed support samples; S5, the classification error and reconstruction error are computed from the predicted labels and the reconstructed support samples; S6, the errors are back-propagated and the model parameters updated; S7, steps S1-S6 are repeated to iteratively train the model, and the trained model then performs signal recognition on the target-domain data to obtain the recognition result.
To address the small-sample problem faced in the field of signal modulation recognition, a multi-level comparison relation network with class reconstruction (MCRN-CR) is proposed; its framework is shown in fig. 3. MCRN-CR comprises two parts: a class reconstruction part, which generates low-dimensional latent representations of input samples to serve as the embeddings of support and query samples, and a multi-level comparison relation network, an improved version of the relation network that performs the recognition task under small-sample conditions.
The class reconstruction part comprises an encoder and a decoder. The encoder generates a latent feature representation z_s of the support sample x_s, which the decoder then reconstructs; after training, the smaller the reconstruction error, the better z_s represents the sample's features. The invention employs complex convolution as the basic unit of the encoder and decoder, because a complex-valued neural network can process data comprising real and imaginary parts and provides richer and more diverse feature expression than a real-valued network. As shown in figs. 4-5, the encoder and decoder each comprise five complex convolution layers.
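To illustrate this design choice, the following sketch implements a single complex-valued 1-D convolution, the basic unit from which such an encoder and decoder are stacked. This is a minimal NumPy illustration of complex convolution itself, not the patent's actual five-layer configuration; it realises the complex product through four real convolutions via (a+bi)(c+di) = (ac-bd) + (ad+bc)i.

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode 1-D convolution of a complex signal x with a complex kernel w.

    Implemented via four real convolutions, mirroring how complex-valued
    layers are usually realised on top of real arithmetic:
    (a+bi)(c+di) = (ac - bd) + (ad + bc)i.
    """
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr, mode="valid") - np.convolve(xi, wi, mode="valid")
    imag = np.convolve(xr, wi, mode="valid") + np.convolve(xi, wr, mode="valid")
    return real + 1j * imag

# An IQ signal is naturally complex: I is the real part, Q the imaginary part.
iq = np.array([1 + 1j, 2 - 1j, 0 + 2j, -1 + 0j])
kernel = np.array([1 + 0j, 0 + 1j])
out = complex_conv1d(iq, kernel)
```

The result matches NumPy's native complex `np.convolve`, confirming the real/imaginary decomposition is correct.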
S1, extracting C-Way K-shot tasks in a source domain training set to obtain a support set and a query set;
the task extraction is C-way K-shot task extraction.
C-way K-shot task extraction refers to randomly selecting C classes from the training set, drawing K samples from each class to form the support set, and drawing a further Q samples per class as the query set. The class labels of the support set are known and are used to construct embedding references for the C classes in the metric space; the query set contains the samples to be predicted.
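The episode extraction described above can be sketched as follows (the toy dataset and function names are illustrative, not from the patent):

```python
import random

def sample_episode(dataset, C, K, Q, rng=random):
    """Draw one C-way K-shot episode: K support and Q query samples per class.

    `dataset` maps class label -> list of samples. Returns (support, query)
    as lists of (sample, label) pairs, with disjoint support/query samples.
    """
    classes = rng.sample(sorted(dataset), C)      # randomly pick C classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(dataset[c], K + Q)     # one draw keeps sets disjoint
        support += [(s, c) for s in picks[:K]]
        query += [(s, c) for s in picks[K:]]
    return support, query

# Toy source-domain training set: 5 modulation classes, 20 samples each.
toy = {m: [f"{m}_{i}" for i in range(20)]
       for m in ["BPSK", "QPSK", "8PSK", "QAM16", "GFSK"]}
support, query = sample_episode(toy, C=3, K=5, Q=4)
```

Here |support| = K x C = 15 and |query| = Q x C = 12, matching the m = K x C, n = Q x C convention used later in the description.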
It should further be noted that, within one task extraction, if K > 1, then when the embedding function computes the per-level support category feature maps, all support-sample feature maps of the same class must be average-pooled so that each class yields exactly one feature map per level. These class-level feature maps are compared with each query sample's feature maps, so in both the 1-shot and K-shot settings a single query sample always produces C relation scores.
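The averaging of same-class support feature maps into one class-level map per level can be sketched as follows (the array shapes are illustrative placeholders, not the patent's actual embedding dimensions):

```python
import numpy as np

def class_feature_maps(support_feats, support_labels):
    """Average per-level feature maps over all support samples of each class.

    support_feats: list over levels; each entry has shape (num_support, ...).
    support_labels: array of class labels, one per support sample.
    Returns {class: [level-1 map, level-2 map, ...]}: one map per class per level.
    """
    classes = np.unique(support_labels)
    return {
        c: [level[support_labels == c].mean(axis=0) for level in support_feats]
        for c in classes
    }

# 3-way 2-shot example with two embedding levels (channels x length arbitrary).
labels = np.array([0, 0, 1, 1, 2, 2])
feats = [np.random.rand(6, 8, 32), np.random.rand(6, 16, 16)]  # two levels
protos = class_feature_maps(feats, labels)
```

Each class ends up with exactly one feature map per level, so the number of relation scores per query sample stays C regardless of K.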
S2, constructing a signal class reasoning model, respectively inputting a sample of the support set and a sample of the query set into a multi-level embedding function of the signal class reasoning model to obtain feature diagrams of each level of the support sample and feature diagrams of each level of the query sample, and respectively averaging the feature diagrams of each level of the support sample of each class to obtain feature diagrams of each level of the support class;
the feature map includes a shallow feature map and a deep feature map.
S21, inputting each support sample in the support set into the multi-stage embedding function to obtain each stage of feature graphs of the support samples;
S22, averaging all levels of feature graphs of the support samples of the same class to obtain all levels of feature graphs of each support class;
S23, sending each query sample in the query set into a multi-stage embedding function to obtain each stage of feature graphs of the query samples.
S3, calculating similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories through a multi-level relation measurement module of the signal category reasoning model, and carrying out weighted summation on the similarity relation scores of each level to obtain a similarity score between each support category corresponding to the query sample, so as to determine a prediction label of the query sample;
To measure and match a query sample against the support categories accurately, nonlinear distances must be learned simultaneously at different feature levels, over both deep and shallow abstract features, and the relation scores computed comprehensively.
The relation metric module computes the nonlinear distance between the query sample and each support category, yielding a relation score between each sample and each category, from which the category of the query sample is determined. The specific structure of the relation metric module is shown in table 1:
TABLE 1 hierarchy of relational modules
S31, splicing the first-level query sample feature map and the first-level support category feature map, and calculating by using a first-level relation sub-module in the multi-level relation measurement module to obtain a first-level similarity feature map and a first-level similarity relation score;
S32, splicing the previous-stage similarity feature map, the same-stage query feature map and the same-stage support category feature map to serve as input of a next-stage relationship measurement module until similarity relationship scores between each-stage feature map of the query sample and each-stage feature map of the support category are calculated;
and S33, carrying out weighted summation on similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories to obtain similarity relation scores between the query sample and each support category, and determining a prediction label of the query sample according to the highest similarity relation score.
Specifically, the support sample and the query sample each pass through the embedding function to generate the v-th level feature maps $f^{v}_{\theta}(x_i)$ and $f^{v}_{\theta}(x_j)$; these are concatenated and fed into the corresponding v-th level relation module for comparison.
At level v-1, the relation module outputs a similarity feature map $F^{v-1}_{i,j}$ for samples $x_i$ and $x_j$; the v-th level relation module takes as input both the v-th level embedded outputs of the support and query samples and the (v-1)-th level similarity feature map.
The v-th level similarity feature map is expressed as:

$$F^{v}_{i,j} = g^{v}_{\varphi}\big(\big[f^{v}_{\theta}(x_i),\ f^{v}_{\theta}(x_j),\ F^{v-1}_{i,j}\big]\big)$$

For the first-level relation module, which has no preceding input, the first-level similarity feature map is set as:

$$F^{1}_{i,j} = g^{1}_{\varphi}\big(\big[f^{1}_{\theta}(x_i),\ f^{1}_{\theta}(x_j)\big]\big)$$

where $f^{v}_{\theta}(x_i)$ is the v-th level support sample feature map, $f^{v}_{\theta}(x_j)$ is the v-th level query sample feature map, and $F^{v-1}_{i,j}$ is the similarity feature map output by the (v-1)-th level relation module; $f_{\theta}$ denotes the multi-level embedding function and $g_{\varphi}$ the multi-level relation metric module.
Assuming q (-) represents the average pooling and fully-connected operation, the support and query samples output by each relationship module have a similarity (relationship) score for the feature map at level v, i.e., the similarity feature map relationship score at level v, as follows:
the further similarity score expression for query sample x j and each support category y c is as follows:
Wherein,
Wherein alpha v is a fully-connected layer, C is a constant,Is a scalar attention weight.
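A toy sketch of this weighted summation over levels follows. The per-level scoring function and the fixed weights are stand-ins (a simple negative mean-squared distance and hard-coded gamma values), not the patent's learned relation sub-modules and attention layer; only the r_{c,j} = sum_v gamma_v * r^v_{c,j} structure is taken from the text.

```python
import numpy as np

def level_score(query_map, class_map):
    """Stand-in for q(F^v): a per-level similarity score.

    Here: negative mean squared distance, so closer maps score higher.
    The patent instead learns this with convolutional relation sub-modules
    followed by average pooling and a fully connected layer.
    """
    return -np.mean((query_map - class_map) ** 2)

def relation_score(query_feats, class_feats, gamma):
    """Weighted sum over levels: r_{c,j} = sum_v gamma_v * r^v_{c,j}."""
    return sum(g * level_score(q, s)
               for g, q, s in zip(gamma, query_feats, class_feats))

def predict(query_feats, protos, gamma):
    """Predicted label = support category with the highest relation score."""
    scores = {c: relation_score(query_feats, maps, gamma)
              for c, maps in protos.items()}
    return max(scores, key=scores.get), scores

# Two levels, two categories; the query exactly matches category "A"'s maps.
protos = {"A": [np.ones((4, 4)), np.zeros((2, 8))],
          "B": [np.zeros((4, 4)), np.ones((2, 8))]}
query = [np.ones((4, 4)), np.zeros((2, 8))]
label, scores = predict(query, protos, gamma=[0.6, 0.4])
```

The query matches category "A" at every level, so "A" receives the highest weighted score and becomes the predicted label.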
S4, reconstructing the feature map of the support sample through a decoder of the signal class reasoning model to obtain a reconstructed support sample;
S5, based on the prediction label and the reconstruction support sample, calculating to obtain a classification error and a reconstruction error;
During training, the class reconstruction part first generates the low-dimensional latent representation $z_s$ of the support sample $x_s$. On one hand, $z_s$ takes part in the similarity computation with the query samples; on the other hand, $z_s$ is fed into the decoder to generate the reconstructed support sample $\hat{x}_s$. To suppress the effect of noise, when computing the reconstruction error $L_{re}$, a high signal-to-noise-ratio sample $x^{high}_s$ of the same class as the support sample is used as the reconstruction target:

$$L_{re} = \big\lVert \hat{x}_s - x^{high}_s \big\rVert^2$$
Then, the class labels of the query samples are predicted by the classifier of the MCRN part, and the cross-entropy classification error against the true labels is computed:

$$L_{ce} = -\sum_{j} y_j \log \hat{y}_j$$

where $y_j$ is the true label of query sample $j$ and $\hat{y}_j$ is its predicted label.
The class reconstruction part is beneficial to the encoder as an embedding function to extract sample characteristics with higher identification degree, and the difference between the embedding of different classes is increased, so that the signal identification performance under the condition of small samples is improved. In MCRN-CR, L re and L ce together form the error of the model, with coefficients of λ re and λ ce, respectively.
L = λceLce + λreLre
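The combined objective can be sketched as follows; the probability vector, reconstruction error value, and coefficient values are illustrative assumptions, and only the form L = λce·Lce + λre·Lre comes from the text above:

```python
import numpy as np

# Illustrative sketch of the joint training error of MCRN-CR.
def cross_entropy(probs, label):
    # Cross-entropy for a single query sample with softmax output `probs`
    return float(-np.log(probs[label]))

probs = np.array([0.1, 0.8, 0.1])     # toy classifier output over C = 3 classes
l_ce = cross_entropy(probs, label=1)  # true class is 1
l_re = 0.05                           # toy reconstruction error from the decoder branch
lam_ce, lam_re = 1.0, 0.5             # assumed coefficient values
loss = lam_ce * l_ce + lam_re * l_re  # L = lambda_ce * L_ce + lambda_re * L_re
```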
S6, carrying out back propagation on the classification errors and the reconstruction errors, and updating the signal class reasoning model parameters;
s7, repeating the steps S1-S6 to perform iterative training on the signal category reasoning model, and performing signal recognition on the target domain data by using the trained signal category reasoning model to obtain a signal recognition result.
During training, a C-way K-shot task is extracted from the training set at each iteration, yielding the support set and the query set of one training task, where M = K×C and N = Q×C.
Subsequently, the relation score rc,j between each query sample xj and each support category yc is calculated, and the prediction label of the query sample is further determined as described in S3. The support-sample hidden-layer feature zs computed by the encoder is then fed into the decoder to reconstruct the support sample. The classification error and reconstruction error are then computed from the prediction labels and the reconstructed samples, respectively; finally, the errors are back-propagated, updating the parameters of the encoder and decoder of the class reconstruction part and the parameters of the relation module.
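One training iteration (S1–S6) can be sketched structurally as below; the linear encoder/decoder and the negative-distance scoring are simplified stand-ins for the patent's multi-level network and learned relation module, and all names and shapes are illustrative:

```python
import numpy as np

# Structural sketch of one episode: sample task, embed, score, reconstruct.
rng = np.random.default_rng(1)
C, K, Q, D, H = 3, 2, 4, 32, 8               # C-way K-shot, Q queries per class
W_enc = 0.1 * rng.standard_normal((D, H))    # toy encoder weights
W_dec = 0.1 * rng.standard_normal((H, D))    # toy decoder weights

x_support = rng.standard_normal((C * K, D))  # S1: support set, M = K*C samples
x_query = rng.standard_normal((C * Q, D))    # S1: query set, N = Q*C samples

z_s = x_support @ W_enc                      # S2: support embeddings
z_q = x_query @ W_enc                        # S2: query embeddings
protos = z_s.reshape(C, K, H).mean(axis=1)   # S2: per-class mean feature map

# S3: similarity scores (negative squared distance stands in for the
# learned multi-level relation module)
scores = -((z_q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
pred = scores.argmax(axis=1)                 # predicted labels for the N queries

x_rec = z_s @ W_dec                          # S4: reconstructed support samples
l_re = float(np.mean((x_rec - x_support) ** 2))  # S5: reconstruction error
# S6 would back-propagate l_re plus the classification error to update
# W_enc, W_dec, and the relation-module parameters.
```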
This embodiment relates to an automatic small-sample signal modulation recognition method, which comprises: obtaining a small sample signal set and extracting a C-way K-shot task from it to obtain a support set and a query set; computing each support category in the support set and each query sample in the query set through an encoder to obtain support feature maps and query feature maps; computing the support feature maps and query feature maps through a relation metric module to obtain similarity feature maps and relation scores, so as to determine the prediction label of the query sample; computing the feature maps through a decoder to obtain reconstructed support samples; calculating a classification error and a reconstruction error based on the prediction labels and the reconstructed support samples; and carrying out back propagation based on the classification error and reconstruction error to update the parameters of the encoder, the decoder, and the relation metric module. The constructed MCRN-CR framework solves the small-sample dilemma faced by conventional deep learning methods in the AMR problem.
Example two
An embodiment of the invention provides a small sample signal automatic modulation recognition device 200, as shown in fig. 2, comprising a task extraction module 201, configured to perform C-Way K-shot task extraction on the source domain training set to obtain a support set and a query set;
The feature map generating module 202 constructs a signal class reasoning model, respectively inputs the samples of the support set and the samples of the query set into a multistage embedding function of the signal class reasoning model to obtain each stage of feature maps of the support sample and each stage of feature maps of the query sample, and respectively averages each stage of feature maps of the support sample of each class to obtain each stage of feature map of each support class;
the multi-level relation measurement module 203 is configured to calculate similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support class through the multi-level relation measurement module of the signal class inference model, and perform weighted summation on the similarity relation scores of each level to obtain a similarity score between each support class corresponding to the query sample, so as to determine a prediction label of the query sample;
A reconstruction module 204, configured to reconstruct, by using a decoder of the signal class inference model, a feature map of the support sample, to obtain a reconstructed support sample;
The error calculation module 205 is configured to calculate, based on the prediction tag and the reconstructed support sample, a classification error and a reconstruction error;
a back propagation module 206, configured to back propagate the classification error and the reconstruction error, and update the signal class inference model parameters;
the model training module 207 is configured to repeatedly perform iterative training on the signal class inference model, and perform signal recognition on the target domain data by using the trained signal class inference model, so as to obtain a signal recognition result.
Further, the feature map generating module 202 includes:
a support feature map module 2021, configured to input each support sample in the support set into the multi-stage embedding function, to obtain a feature map of each stage of the support sample;
the average value calculating module 2022 is configured to average the feature maps of the support samples of the same class, so as to obtain feature maps of the support class;
And a query feature map module 2023, configured to send each query sample in the query set into a multi-stage embedding function, so as to obtain a feature map of each stage of the query sample.
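The feature map generation module (202, with sub-modules 2021–2023) amounts to embedding every support sample at each level and averaging over the K shots of each class; a hedged sketch with assumed level dimensions:

```python
import numpy as np

# Illustrative sketch: per-level support feature maps averaged per class.
rng = np.random.default_rng(2)
C, K, V = 4, 5, 3                         # classes, shots, embedding levels
feat_dims = [64, 32, 16]                  # assumed per-level feature sizes
# Stand-in per-level feature maps, shape [C, K, d_v] at each level v
support_feats = [rng.standard_normal((C, K, d)) for d in feat_dims]
# Average over the K shots of each class -> class-level feature maps [C, d_v]
class_feats = [f.mean(axis=1) for f in support_feats]
```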
Further, the multi-level relationship measurement module 203 includes:
the first-level similarity score calculation module 2031 is configured to splice and calculate the first-level query sample feature map and the first-level support class feature map by using the first-level relationship sub-module in the multi-level relationship measurement module, so as to obtain a first-level similarity feature map and a first-level similarity relationship score;
The similarity score calculation module 2032 at each level is configured to splice the previous-level similarity feature map, the query feature map at the same level, and the support class feature map at the same level as inputs of the relationship measurement module at the next level until a similarity relationship score between the feature map at each level of the query sample and the feature map at each level of the support class is calculated;
the weighted summation module 2033 is configured to perform weighted summation on similarity relationship scores between the feature graphs of each level of the query sample and the feature graphs of each level of the support class, obtain a similarity relationship score between the query sample and each support class, and determine a prediction label of the query sample according to the highest similarity relationship score.
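The cascaded scoring of modules 2031–2033 can be sketched as below; the stand-in `relation` function (tanh plus truncation and a mean score) and the uniform weights are illustrative assumptions replacing the learned relation sub-modules:

```python
import numpy as np

# Illustrative sketch: at each level, concatenate the previous similarity
# map with the same-level query and class feature maps, reduce to a new
# similarity map plus a scalar score, then weight-sum the level scores.
rng = np.random.default_rng(3)
V, d = 3, 16
q_feats = [rng.standard_normal(d) for _ in range(V)]  # query feature maps per level
s_feats = [rng.standard_normal(d) for _ in range(V)]  # one class's feature maps per level

def relation(x):
    # Stand-in for the level-v relation sub-module
    sim_map = np.tanh(x)[:d]                   # reduced similarity feature map
    return sim_map, float(sim_map.mean())      # map passed to the next level + score

sim_map, scores = None, []
for v in range(V):
    parts = [q_feats[v], s_feats[v]] + ([sim_map] if sim_map is not None else [])
    sim_map, r_v = relation(np.concatenate(parts))   # level-v relation score r^v
    scores.append(r_v)

beta = np.ones(V) / V                          # uniform stand-in attention weights
r_final = float(np.dot(beta, scores))          # weighted sum over the V levels
```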
Further, the method further comprises the following steps:
the v-th level similarity feature map expression is as follows:
the first-level similarity feature map expression is as follows:
In the formula, the three terms denote, respectively, the v-th level support sample feature map, the v-th level query sample feature map, and the similarity feature map output by the (v-1)-th level relation module; fθ comprises the multi-level embedding function and the multi-level relation metric module.
Further, the method further comprises the following steps:
the similarity score expression for the query sample and each support category is as follows:
Wherein,
and the level-v similarity feature map relation score is as follows:
wherein αv is a fully-connected layer, C is a constant, and the weighting term is a scalar attention weight.
This embodiment is an automatic small-sample signal modulation recognition device. The task extraction module performs C-Way K-shot task extraction on the source domain training set to obtain a support set and a query set. The feature map generation module inputs the support set samples and the query set samples into the multi-level embedding function of the signal class reasoning model to obtain each level of feature maps of the support samples and of the query samples, and averages each level of feature maps of the support samples within each class to obtain each level of feature maps of each support class. The multi-level relation measurement module calculates similarity relation scores between each level of feature maps of the query sample and each level of feature maps of the support classes, and performs weighted summation of the per-level scores to obtain the similarity score between the query sample and each support class, thereby determining the prediction label of the query sample. The reconstruction module reconstructs the feature maps of the support samples through the decoder of the signal class reasoning model to obtain reconstructed support samples. The error calculation module computes the classification error and reconstruction error based on the prediction labels and the reconstructed support samples. The back propagation module back-propagates the errors to update the model parameters, and the model training module performs iterative training and applies the trained model to the target domain data to obtain the signal recognition result. The constructed MCRN-CR framework solves the small-sample dilemma faced by conventional deep learning methods in the AMR problem.
Example III
The data set and experimental settings are as follows:
To verify and evaluate the effectiveness of the proposed method, experiments were performed herein on the RML2018.01A modulated signal dataset. The dataset consists of 24 modulation signals; each signal sample comprises two channels of I/Q data in the format [1024, 2]. The signal-to-noise ratio ranges from -20 dB to 30 dB in 2 dB steps. Each class has 4096 samples at each signal-to-noise ratio, for a total of 2,555,904 samples.
The dataset is first partitioned, with 14 of the modulation classes serving as the source domain training set and the other 10 as the target domain dataset. Because the raw dataset is large and redundant for the experiments herein, only 1000 samples were taken for each class at each signal-to-noise ratio. In a small-sample scenario, training samples from the target domain are usually scarce; to match this setting, a subset of the target domain dataset is set aside as the known small-sample dataset, whose size is 2% of the number of target domain samples, i.e., 20 samples per modulation scheme at each signal-to-noise ratio. During each test, the support set is drawn from this subset. The remainder forms the test set, from which the query samples are drawn at each test. The specific partitioning is shown in Table 2.
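The split figures quoted above are internally consistent, as this small arithmetic check (pure bookkeeping over the quoted numbers, no further assumptions) shows:

```python
# Check the dataset figures: 24 classes, SNR from -20 dB to 30 dB in 2 dB
# steps, 4096 samples per class per SNR, and a 2% small-sample subset of
# the 1000 retained samples per class per SNR.
num_classes = 24
snr_points = len(range(-20, 31, 2))            # -20, -18, ..., 30 -> 26 points
per_class_per_snr = 4096
total = num_classes * snr_points * per_class_per_snr
subset_per_snr = int(1000 * 0.02)              # known small-sample subset size
```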
The hardware environment of the experiment is an Intel(R) Core(TM) i7-10700K CPU @ 3.8 GHz with an NVIDIA GeForce GTX 3090 GPU; the software environment is Python 3.8 with the PyTorch deep learning framework.
Table 2 dataset partitioning
The comparison of the identification performance with other networks is as follows:
In order to verify the effectiveness and the superior recognition performance of the proposed method under small-sample conditions, MCRN-CR is compared with other networks under the 5-way 5-shot setting. The comparison models include recent small-sample signal modulation recognition methods AMCRN and AMR-CapsNet, and the small-sample learning methods Prototype Network (PN) and Relation Network (RN) from the image field. Meanwhile, to demonstrate that the method can solve the small-sample dilemma faced by conventional deep learning methods, a classifier is added on top of the encoder structure to form an automatic modulation recognition model based on a complex-valued neural network, AMR-CVNN, which is taken as a comparison baseline. AMR-CapsNet and AMCRN keep the same settings as their original versions, and the experimental settings of PN and RN are consistent with MCRN-CR to ensure fairness of the experiment. AMR-CVNN adopts the idea of transfer learning: it is first pre-trained on the source domain dataset and then fine-tuned on the target domain dataset.
Figure 6 shows the recognition accuracy of all comparison models at all signal-to-noise ratios on the given dataset. As can be seen from the figure, the method MCRN-CR presented herein has the best recognition performance when the signal-to-noise ratio is greater than 0 dB. When the signal-to-noise ratio is below 0 dB, all metric-based meta-learning methods achieve similar recognition performance, but the recognition performance of MCRN-CR is still much higher than that of AMR-CapsNet and AMR-CVNN.
Table 3 gives more detailed recognition performance statistics for each comparison model. The highest recognition accuracy of MCRN-CR reaches 89.25%; its average recognition accuracy over all signal-to-noise ratios reaches 65.98%, and 87.41% for SNR > 0 dB. Compared with AMR-CVNN, the average recognition accuracy of MCRN-CR below 0 dB is improved by more than 10%, above 0 dB by 5.86%, and over all signal-to-noise ratios by 8.04%. This demonstrates that the proposed MCRN-CR can effectively solve the small-sample dilemma faced by conventional deep learning methods in the AMR problem.
Table 3 comparison of average recognition accuracy for different methods
Model -20:2:-2dB 0:2:30dB -20:2:30dB Highest accuracy
MCRN-CR 31.69% 87.41% 65.98% 89.25%
AMR-CapsNet 19.22% 44.42% 34.73% 46.40%
AMCRN 32.13% 83.09% 63.49% 86.25%
PN 30.93% 83.00% 62.98% 85.41%
RN 30.72% 84.08% 63.56% 86.68%
AMR-CVNN 20.16% 81.55% 57.94% 85.04%
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. An automatic modulation and identification method for a small sample signal, which is characterized by comprising the following steps:
S1, extracting C-Way K-shot tasks in a source domain training set to obtain a support set and a query set;
S2, constructing a signal class reasoning model, respectively inputting a sample of the support set and a sample of the query set into a multi-level embedding function of the signal class reasoning model to obtain feature diagrams of each level of the support sample and feature diagrams of each level of the query sample, and respectively averaging the feature diagrams of each level of the support sample of each class to obtain feature diagrams of each level of the support class;
S3, calculating similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories through a multi-level relation measurement module of the signal category reasoning model, and carrying out weighted summation on the similarity relation scores of each level to obtain a similarity score between each support category corresponding to the query sample, so as to determine a prediction label of the query sample;
S4, reconstructing the feature map of the support sample through a decoder of the signal class reasoning model to obtain a reconstructed support sample;
S5, based on the prediction label and the reconstruction support sample, calculating to obtain a classification error and a reconstruction error;
S6, carrying out back propagation on the classification errors and the reconstruction errors, and updating the signal class reasoning model parameters;
s7, repeating the steps S1-S6 to perform iterative training on the signal category reasoning model, and performing signal recognition on the target domain data by using the trained signal category reasoning model to obtain a signal recognition result.
2. The small sample signal automatic modulation identification method according to claim 1, wherein S2 comprises:
S21, inputting each support sample in the support set into the multi-stage embedding function to obtain each stage of feature graphs of the support samples;
s22, averaging all levels of feature graphs of the support samples of the same class to obtain all levels of feature graphs of each support class;
S23, sending each query sample in the query set into a multi-stage embedding function to obtain each stage of feature graphs of the query samples.
3. The small sample signal automatic modulation identification method according to claim 2, wherein S3 comprises:
S31, splicing the first-level query sample feature map and the first-level support category feature map, and calculating by using a first-level relation sub-module in the multi-level relation measurement module to obtain a first-level similarity feature map and a first-level similarity relation score;
S32, splicing the previous-stage similarity feature map, the same-stage query feature map and the same-stage support category feature map to serve as input of a next-stage relationship measurement module until similarity relationship scores between each-stage feature map of the query sample and each-stage feature map of the support category are calculated;
and S33, carrying out weighted summation on similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories to obtain similarity relation scores between the query sample and each support category, and determining a prediction label of the query sample according to the highest similarity relation score.
4. The method for automatically modulating and identifying a small sample signal according to claim 3,
The v-th level similarity feature map expression is as follows:
the first-level similarity feature map expression is as follows:
In the formula, the three terms denote, respectively, the v-th level support sample feature map, the v-th level query sample feature map, and the similarity feature map output by the (v-1)-th level relation module; fθ comprises the multi-level embedding function and the multi-level relation metric module.
5. The method for automatically modulating and identifying a small sample signal according to claim 3,
The similarity score expression for the query sample and each support category is as follows:
Wherein,
and the level-v similarity feature map relation score is as follows:
wherein αv is a fully-connected layer, C represents the number of support categories, and the weighting term is a scalar attention weight.
6. An automatic small sample signal modulation and identification device, comprising:
The task extraction module is used for extracting C-Way K-shot tasks in the source domain training set to obtain a support set and a query set;
The feature map generation module is used for constructing a signal class reasoning model, respectively inputting the samples of the support set and the samples of the query set into a multi-stage embedding function of the signal class reasoning model to obtain feature maps of all stages of the support sample and feature maps of all stages of the query sample, and respectively averaging the feature maps of all stages of the support sample of each class to obtain feature maps of all stages of each support class;
The multi-level relation measurement module is used for calculating similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories through the multi-level relation measurement module of the signal category reasoning model, carrying out weighted summation on the similarity relation scores of each level to obtain a similarity score between each support category corresponding to the query sample, and further determining a prediction label of the query sample;
the reconstruction module is used for reconstructing the feature map of the support sample through the decoder of the signal class reasoning model to obtain a reconstructed support sample;
the error calculation module is used for calculating to obtain classification errors and reconstruction errors based on the prediction labels and the reconstruction support samples;
the back propagation module is used for carrying out back propagation on the classification errors and the reconstruction errors and updating the signal class reasoning model parameters;
And the model training module is used for repeatedly carrying out iterative training on the signal category reasoning model, and carrying out signal recognition on the target domain data by utilizing the trained signal category reasoning model to obtain a signal recognition result.
7. The small sample signal automatic modulation and identification device according to claim 6, wherein the feature map generation module comprises:
the support feature map module is used for inputting each support sample in the support set into the multi-stage embedding function to obtain each stage feature map of the support sample;
the average value calculation module is used for calculating the average value of all levels of feature images of the support samples in the same category to obtain all levels of feature images of each support category;
And the query feature map module is used for sending each query sample in the query set into a multi-stage embedding function to obtain each stage of feature map of the query sample.
8. The small sample signal automatic modulation and identification device of claim 6, wherein the multi-stage relationship metric module comprises:
the first-level similarity score calculation module is used for splicing the first-level query sample feature images and the first-level support category feature images, and calculating by using a first-level relationship sub-module in the multi-level relationship measurement module to obtain a first-level similarity feature image and a first-level similarity relationship score;
The similarity score calculation module at each level is used for splicing the previous-level similarity feature map, the same-level query feature map and the same-level support category feature map to be used as the input of the relation measurement module at the next level until the similarity relation score between the feature map at each level of the query sample and the feature map at each level of the support category is calculated;
And the weighted summation module is used for weighted summation of similarity relation scores between each level of feature graphs of the query sample and each level of feature graphs of the support categories to obtain the similarity relation score between the query sample and each support category, and determining a prediction label of the query sample according to the highest similarity relation score.
9. The apparatus for automatically modulating and identifying a small sample signal according to claim 8,
The v-th level similarity feature map expression is as follows:
the first-level similarity feature map expression is as follows:
In the formula, the three terms denote, respectively, the v-th level support sample feature map, the v-th level query sample feature map, and the similarity feature map output by the (v-1)-th level relation module; fθ comprises the multi-level embedding function and the multi-level relation metric module.
10. The apparatus for automatically modulating and identifying a small sample signal according to claim 8,
The similarity score expression for the query sample and each support category is as follows:
Wherein,
and the level-v similarity feature map relation score is as follows:
wherein αv is a fully-connected layer, C is a constant, and the weighting term is a scalar attention weight.
CN202411241727.9A 2024-09-05 2024-09-05 A method and device for automatic modulation recognition of small sample signals Pending CN119167146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411241727.9A CN119167146A (en) 2024-09-05 2024-09-05 A method and device for automatic modulation recognition of small sample signals


Publications (1)

Publication Number Publication Date
CN119167146A true CN119167146A (en) 2024-12-20

Family

ID=93886585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411241727.9A Pending CN119167146A (en) 2024-09-05 2024-09-05 A method and device for automatic modulation recognition of small sample signals

Country Status (1)

Country Link
CN (1) CN119167146A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119996133A (en) * 2025-02-26 2025-05-13 中国人民解放军军事航天部队航天工程大学 A small sample modulation recognition method based on diffusion model and attention mechanism

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116561559A (en) * 2023-05-08 2023-08-08 中国人民解放军空军工程大学 Small sample signal modulation recognition method based on multi-level relational metric network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO MA: "《Tackling Few-Shot Challenges in Automatic Modulation Recognition: A Multi-Level Comparative Relation Network Combining Class Reconstruction Strategy》", 《SENSORS》, vol. 24, no. 13, 8 July 2024 (2024-07-08), pages 1 *
庞伊琼: "《基于元学习的小样本调制识别算法》", 《空军工程大学学报》, vol. 23, no. 5, 31 October 2022 (2022-10-31) *


Similar Documents

Publication Publication Date Title
CN111860982B (en) VMD-FCM-GRU-based wind power plant short-term wind power prediction method
US11700156B1 (en) Intelligent data and knowledge-driven method for modulation recognition
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN113780242B (en) A cross-scenario underwater acoustic target classification method based on model transfer learning
CN112966667B (en) One-dimensional range image noise reduction convolutional neural network recognition method for sea surface targets
CN115565019B (en) A ground object classification method for single-channel high-resolution SAR images based on deep self-supervised generative adversarial model
CN112749633A (en) Separate and reconstructed individual radiation source identification method
CN114299305A (en) Salient object detection algorithm for aggregating dense and attention multi-scale features
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
CN114764577A (en) Lightweight modulation recognition model based on deep neural network and method thereof
CN105678343A (en) Adaptive-weighted-group-sparse-representation-based diagnosis method for noise abnormity of hydroelectric generating set
CN113987910A (en) Method and device for identifying load of residents by coupling neural network and dynamic time planning
CN114912486A (en) Modulation mode intelligent identification method based on lightweight network
CN114970601A (en) Power equipment partial discharge type identification method, equipment and storage medium
CN120451188B (en) Weak supervision cell nucleus segmentation method based on wavelet differential convolution and region expansion
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
CN115984650A (en) A Recognition Method of Synthetic Aperture Radar Image Based on Deep Learning
CN119128721A (en) Gas classification and recognition method based on graph neural network based on edge labeling framework
CN119167146A (en) A method and device for automatic modulation recognition of small sample signals
CN116340776B (en) A method and device for recognizing electricity usage behavior patterns based on noisy learning
CN116243248B (en) Multi-component interference signal recognition method based on multi-label classification network
CN119001408A (en) Battery simulator circuit damage prediction method based on multi-time sequence feature convolution
CN114612684B (en) Salient object detection algorithm based on efficient multi-scale context exploration network
CN112529035B (en) Intelligent identification method for identifying individual types of different radio stations
CN112434716B (en) Underwater target data amplification method and system based on condition countermeasure neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination