CN120165706B - A multi-step decoding decision accelerated decoding method, device and storage medium - Google Patents
A multi-step decoding decision accelerated decoding method, device and storage medium
- Publication number
- CN120165706B (application CN202510640584.7A)
- Authority
- CN
- China
- Prior art keywords
- decoding
- neural network
- iteration
- learning algorithm
- code
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2948—Iterative decoding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/25—Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM]
- H03M13/251—Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM] with block coding
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Error Detection And Correction (AREA)
Abstract
The invention relates to the technical field of communications and discloses a multi-step decoding decision accelerated decoding method, device, and storage medium. For linear block code decoding based on a neural network learning algorithm, the method calculates the component-wise addition of multiple iterative decoding results as the basis for learning-rate selection, and realizes optimal adjustment of the neural network learning algorithm weights by setting the number of iterative-decoding-result components and the addition coefficients for different code lengths, code rates, and channel conditions, thereby improving the training learning rate.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, and a storage medium for multi-step decoding decision acceleration decoding.
Background
The rapid development of communications requires support for high-speed, low-latency, high-reliability transmission, for example in autonomous driving. Transmitting information as correctly as possible has always been a goal of communication technology. Channel error-correction coding is an indispensable anti-interference technique, and linear block codes are widely used to ensure correct transmission of information in communication systems with low complexity requirements. Neural network learning algorithms have demonstrated strong classification and fitting ability in applications such as speech, image, and natural language processing. Combining a neural network learning algorithm with linear block code decoding has been shown to improve performance over the original decoding algorithms; in particular, some linear block codes for which soft-decision iterative decoding was previously difficult have obtained good decoding performance when decoded with a neural network learning algorithm.
The existing technology focuses on constructing encoding and decoding systems with neural networks, basically performing decoding in the conventional manner, and lacks targeted study of improving the efficiency of decision processing in the decoding process. Moreover, for linear block code decoding that adopts a neural network learning algorithm, the training convergence speed becomes very slow as the code length increases, which makes initial model training and subsequent enhanced-training applications very time-costly.
Disclosure of Invention
In order to solve the above problems, the present invention provides a multi-step decoding decision acceleration decoding method, apparatus and storage medium, which can improve training learning rate and is applicable to a wide range of linear block codes.
The technical scheme adopted by the invention is as follows:
A multi-step decoding decision accelerated decoding method comprising:
For linear block code decoding based on a neural network learning algorithm, calculating the component-wise addition of multiple iterative decoding results as the basis for learning-rate selection;
For different code lengths, code rates, and channel conditions, realizing optimal adjustment of the neural network learning algorithm weights by setting the number of iterative-decoding-result components and the addition coefficients, thereby improving the learning rate.
Further, the method comprises the following steps:
Step 1: set the initial step size of the neural network learning algorithm weights;
Step 2: compute the output of the linear block code decoder based on the neural network learning algorithm after the t-th iteration, and judge whether decoding succeeded; if so, end; otherwise, execute the next step;
Step 3: compute the cross-entropy loss function of the t-th iteration, where the reference is the transmitted binary block codeword;
Step 4: based on the cross-entropy loss function, compute the accumulated value of the t-th iteration;
Step 5: perform step-size adjustment based on the accumulated value to obtain the updated step size;
Step 6: adjust the neural network learning algorithm weights based on the updated step size, execute the next iteration, and go to Step 2.
Further, the neural network learning algorithm weights include:
wherein the weights comprise the weight of the j-th variable at the t-th iteration and the weight between each check node and each variable node at the t-th iteration; t is a positive integer, and the check set referred to is the set of checks in which the variable node participates.
Further, in step 2, computing the output of the linear block code decoder based on the neural network learning algorithm after the t-th iteration comprises:
combining, under the decoding arithmetic operation, the log-likelihood function of each symbol with the variable values computed between each check node and variable node at the t-th iteration, taken over the check set in which the variable node participates.
Further, in step 3, computing the cross-entropy loss function of the t-th iteration comprises:
comparing the decoder output against the transmitted binary block code, whose code length is N and information bit length is N−M (N and M being positive integers), bit by bit over the codeword.
Further, in step 4, computing the accumulated value of the t-th iteration from the cross-entropy loss function comprises:
weighting the loss terms by the accumulation influence factors of the t-th iteration, whose number is set for each iteration.
Further, in step 5, performing step-size adjustment based on the accumulated value to obtain the updated step size comprises:
using the leading terms of the accumulated value of the t-th iteration together with the current step size, where the step sizes of the different iteration counts are indexed by iteration count;
setting a positive deviation value and a minimum step size;
and applying, according to which of three conditions the accumulated values satisfy, the corresponding one of three step-size update rules.
Further, adjusting the neural network learning algorithm weights based on the updated step size comprises applying the weight-update rule with the updated step size.
A computer device comprising a memory storing a computer program and a processor implementing the multi-step decoding decision accelerated decoding method described above when executing the computer program.
A computer readable storage medium storing a computer program which when executed by a processor implements the multi-step decoding decision accelerated decoding method described above.
The invention has the beneficial effects that:
The invention uses the component-wise addition of multiple iterative decoding results as the basis for learning-rate selection and, by setting the number of iterative-decoding-result components and selecting the addition coefficients, realizes optimal adjustment of the neural network weights under different code lengths, code rates, and channel conditions, thereby improving the training learning rate. The invention is applicable to a wide range of linear block codes, including low-density parity-check (LDPC) codes.
Drawings
Fig. 1 is a bipartite graph (Tanner graph) of a linear block code according to embodiment 1 of the present invention.
Fig. 2 is a flow chart of a multi-step decoding decision acceleration decoding method according to embodiment 1 of the present invention.
Detailed Description
Specific embodiments of the present invention will now be described in order to provide a clearer understanding of the technical features, objects and effects of the present invention. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Example 1
For linear block code decoding that employs a neural network learning algorithm, the training convergence speed becomes very slow as the code length increases, which makes initial model training and subsequent enhanced-training applications very time-costly.
The linear block code decoding method based on the neural network learning algorithm comprises the following steps:
Take a binary block code with code length N and information bit length N−M as an example, where N and M are positive integers. An arbitrary codeword over the binary field is transmitted in bipolar form. Assume the codeword sent by the transmitting end, after transmission mapping and binary phase shift keying (BPSK) modulation, passes through a channel with noise interference and finally reaches the receiving end. The receiving end demodulates it, outputs the decision signal, and sends it to the channel decoder; this decision signal is the received signal (or input signal) of the channel decoder. The main task of the decoder is to perform error-correction decoding on the received sequence, filtering out channel errors and recovering the transmitted codeword from the received signal with the smallest possible decoding error probability.
Let the conditional probability distribution function of the channel output be given; the log-likelihood ratio (LLR) function of each symbol can then be calculated as the logarithm of the ratio of the conditional probabilities of the two possible transmitted bit values.
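As an illustration of the LLR computation, the sketch below assumes BPSK mapping 0→+1, 1→−1 over an AWGN channel with known noise variance; both the mapping convention and the channel model are assumptions for illustration, not taken verbatim from the patent.

```python
def bpsk_modulate(codeword):
    """Map bits {0, 1} to bipolar symbols {+1.0, -1.0} (mapping convention assumed)."""
    return [1.0 - 2.0 * c for c in codeword]

def llr_awgn(received, noise_var):
    """Per-symbol LLR log P(y|c=0)/P(y|c=1) for BPSK over AWGN: 2*y/sigma^2."""
    return [2.0 * y / noise_var for y in received]
```

With this sign convention a positive LLR indicates the bit is more likely 0, which is the usual input format for iterative soft decoders.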
A binary block code is specified by its check matrix H, where H_ij is the element in the i-th row and j-th column of H. Fig. 1 is its bipartite-graph representation: the codeword bits are represented as a set of variable nodes and the parity checks as a set of check nodes, and a variable node is connected to a check node by an edge only when the corresponding entry of H equals 1. The check set of a variable node denotes the checks in which it participates, with a subset that excludes a given check; the local set of a check node denotes the symbols it constrains, with a subset that excludes a given symbol. In the past, bipartite graphs were used to describe a specific class of linear block codes, namely LDPC codes, but they also describe linear block codes in general; in particular, neural-network decoding of BCH codes on bipartite graphs has obtained good results, so the bipartite-graph representation is quite general.
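The check sets and local sets described above can be built directly from the check matrix H. The sketch below is a minimal illustration (function and variable names are illustrative, not from the patent):

```python
def tanner_graph(H):
    """Build the bipartite (Tanner) graph adjacency from a parity-check matrix H.

    Returns (var_to_checks, check_to_vars): for each variable node, the list of
    check nodes it participates in, and for each check node, the list of
    variable nodes it constrains.
    """
    m, n = len(H), len(H[0])
    var_to_checks = [[i for i in range(m) if H[i][j]] for j in range(n)]
    check_to_vars = [[j for j in range(n) if H[i][j]] for i in range(m)]
    return var_to_checks, check_to_vars
```

For example, a 2×4 check matrix yields one adjacency list per variable node and one per check node, which is the data a message-passing decoder iterates over.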
The iteration of the linear block code decoding method based on the neural network learning algorithm is as follows:
For a positive integer t, the t-th iteration is as follows: the messages exchanged between check nodes and variable nodes are scaled by the coefficients that require training for neural network decoding; an intermediate quantity is defined through the inverse of the node-update function, and substituting it back yields the output of the iteration.
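The patent's exact iteration formulas are not reproduced on this page. The following is therefore a hedged sketch of one weighted belief-propagation iteration in the general style of neural BP decoders: the tanh/atanh check-node rule, the default weight values, and the sigmoid output are all assumptions for illustration.

```python
import math

def weighted_bp_iteration(llr, var_to_checks, check_to_vars, w_var, w_edge, msgs):
    """One weighted belief-propagation iteration over a Tanner graph (sketch).

    llr:    channel LLR per variable node (log P(0)/P(1) convention assumed)
    w_var:  trainable weight per variable node (assumed)
    w_edge: trainable weight per (check, variable) edge, default 1.0 (assumed)
    msgs:   check-to-variable messages from the previous iteration
    """
    new_msgs = {}
    for c, vars_in_check in enumerate(check_to_vars):
        for v in vars_in_check:
            prod = 1.0
            for v2 in vars_in_check:
                if v2 == v:
                    continue
                # extrinsic variable-to-check message from v2, excluding check c
                t = 0.5 * (llr[v2] + sum(msgs.get((c2, v2), 0.0)
                                         for c2 in var_to_checks[v2] if c2 != c))
                prod *= math.tanh(t)
            prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)  # keep atanh finite
            new_msgs[(c, v)] = 2.0 * math.atanh(prod)
    out = []
    for v in range(len(llr)):
        s = w_var[v] * llr[v] + sum(w_edge.get((c, v), 1.0) * new_msgs[(c, v)]
                                    for c in var_to_checks[v])
        # sigmoid of the combined LLR; under the sign convention above this
        # approximates the probability that the bit equals 0
        out.append(1.0 / (1.0 + math.exp(-s)))
    return out, new_msgs
```

The trainable quantities here are `w_var` and `w_edge`, standing in for the coefficients the patent says require training.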
Based on the above, in order to solve the problem that the convergence of coefficient training in the neural-network-based linear block code decoding method slows down as the code length increases, this embodiment provides a multi-step decoding decision accelerated decoding method, which uses the component-wise addition of multiple iterative decoding results as the basis for learning-rate selection and, for different code lengths, code rates, and channel conditions, sets the number of iterative-decoding-result components and the addition coefficients to realize optimal adjustment of the neural network learning algorithm weights, thereby improving the learning rate.
Specifically, as shown in fig. 2, the method of this embodiment may be implemented by the following steps:
Step 1: set the initial step size of the neural network learning algorithm weights;
Step 2: compute the output of the linear block code decoder based on the neural network learning algorithm after the t-th iteration, and judge whether decoding succeeded; if so, end; otherwise, execute the next step;
Step 3: compute the cross-entropy loss function of the t-th iteration, where the reference is the transmitted binary block codeword;
Step 4: based on the cross-entropy loss function, compute the accumulated value of the t-th iteration;
Step 5: perform step-size adjustment based on the accumulated value to obtain the updated step size;
Step 6: adjust the neural network learning algorithm weights based on the updated step size, execute the next iteration, and go to Step 2.
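The six steps above can be sketched as a generic training loop. Every callable below is a placeholder for the corresponding operation described in the patent; all names and signatures are illustrative assumptions.

```python
def multi_step_training_loop(decode, loss_fn, accumulate, adjust_step,
                             update_weights, weights, alpha0, max_iters=100):
    """Skeleton of the six-step procedure (structure only, details assumed)."""
    alpha = alpha0                               # Step 1: initial step size
    losses = []
    for t in range(1, max_iters + 1):
        output, success = decode(weights)        # Step 2: decode and test success
        if success:
            return weights, t
        losses.append(loss_fn(output))           # Step 3: cross-entropy loss
        acc = accumulate(losses)                 # Step 4: accumulated value
        alpha = adjust_step(alpha, acc)          # Step 5: step-size adjustment
        weights = update_weights(weights, alpha) # Step 6: weight update
    return weights, max_iters
```

The loop terminates early as soon as the decode step reports success, matching the "if so, end" branch of Step 2.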
In this embodiment, the neural network learning algorithm weights include:
wherein the weights comprise the weight of the j-th variable at the t-th iteration and the weight between each check node and each variable node at the t-th iteration; t is a positive integer, and the check set referred to is the set of checks in which the variable node participates.
It should be noted that the initial step sizes of the neural network learning algorithm weights may be the same or different; this embodiment is described taking identical initial step sizes as an example.
Preferably, in step 2, computing the output of the linear block code decoder based on the neural network learning algorithm after the t-th iteration comprises:
combining, under the decoding arithmetic operation, the log-likelihood function of each symbol with the variable values computed between each check node and variable node at the t-th iteration, taken over the check set in which the variable node participates.
Preferably, in step 3, computing the cross-entropy loss function of the t-th iteration comprises:
comparing the decoder output against the transmitted binary block code, whose code length is N and information bit length is N−M (N and M being positive integers), bit by bit over the codeword.
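A minimal sketch of a binary cross-entropy loss of this kind, assuming soft decoder outputs in (0, 1) and averaging over the N code bits (the patent's exact normalization is an assumption here):

```python
import math

def cross_entropy_loss(output, codeword, eps=1e-12):
    """Binary cross-entropy between soft outputs o_j and transmitted bits c_j."""
    total = 0.0
    for o, c in zip(output, codeword):
        o = min(max(o, eps), 1.0 - eps)  # clamp to keep log() finite
        total += c * math.log(o) + (1.0 - c) * math.log(1.0 - o)
    return -total / len(codeword)
```

A maximally uncertain output (all 0.5) gives a loss of ln 2 per bit, while outputs matching the codeword drive the loss toward zero.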
Preferably, in step 4, computing the accumulated value of the t-th iteration from the cross-entropy loss function comprises:
weighting the loss terms by the accumulation influence factors of the t-th iteration, whose number is set for each iteration; the factors may be the same or different.
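One plausible form of such an accumulated value is a weighted sum of the most recent losses; both the number of terms and the coefficient values in the sketch below are assumptions, standing in for the patent's accumulation influence factors:

```python
def accumulated_value(losses, coeffs):
    """Weighted sum of the most recent losses.

    coeffs plays the role of the accumulation influence factors; its length is
    the number of iterative-decoding-result components taken into account.
    """
    k = min(len(coeffs), len(losses))
    recent = losses[-k:]                     # only the last k loss values
    return sum(b * l for b, l in zip(coeffs[:k], recent))
```

Tuning the number of components and the coefficients per code length, code rate, and channel condition is exactly the degree of freedom the method exposes.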
Preferably, in step 5, performing step-size adjustment based on the accumulated value to obtain the updated step size comprises:
using the leading terms of the accumulated value of the t-th iteration together with the current step size, where the step sizes of the different iteration counts are indexed by iteration count;
setting a positive deviation value and a minimum step size, with preferred values as given;
and applying, according to which of three conditions the accumulated values satisfy, the corresponding one of three step-size update rules.
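A hedged sketch of a three-branch step-size rule of this shape: the deviation threshold, the minimum step size, and the shrink/grow factors below are assumed values, not taken from the patent.

```python
def adjust_step(alpha, acc_prev, acc_curr, delta=0.1, alpha_min=1e-4,
                shrink=0.5, grow=1.1):
    """Three-branch step-size adjustment based on successive accumulated values."""
    if acc_curr > acc_prev + delta:        # accumulated loss rising: shrink step
        alpha = max(alpha * shrink, alpha_min)
    elif acc_curr < acc_prev - delta:      # accumulated loss falling: grow step
        alpha = alpha * grow
    # within the deviation band: keep the current step size unchanged
    return alpha
```

The positive deviation value `delta` creates a dead band that avoids oscillating the step size on small loss fluctuations, while `alpha_min` enforces the minimum value.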
Preferably, adjusting the neural network learning algorithm weights based on the updated step size comprises applying the weight-update rule with the updated step size.
In summary, the method of this embodiment uses the component-wise addition of multiple iterative decoding results as the basis for learning-rate selection, and sets the number of iterative-decoding-result components and the addition coefficients to realize optimal adjustment of the neural network weights under different code lengths, code rates, and channel conditions, thereby improving the training learning rate.
Example 2
This example is based on example 1:
The present embodiment provides a computer device including a memory storing a computer program and a processor implementing the multi-step decoding decision accelerated decoding method of embodiment 1 when the computer program is executed. Wherein the computer program may be in source code form, object code form, executable file or some intermediate form, etc.
Example 3
This example is based on example 1:
The present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-step decoding decision accelerated decoding method of embodiment 1. The computer program may be in source code form, object code form, an executable file, or some intermediate form. The storage medium includes any entity or device capable of carrying computer program code, a recording medium, computer memory, read-only memory (ROM), random-access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media. It should be noted that the content of the storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, the storage medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510640584.7A CN120165706B (en) | 2025-05-19 | 2025-05-19 | A multi-step decoding decision accelerated decoding method, device and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120165706A CN120165706A (en) | 2025-06-17 |
| CN120165706B true CN120165706B (en) | 2025-07-22 |
Family
ID=96007151
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510640584.7A Active CN120165706B (en) | 2025-05-19 | 2025-05-19 | A multi-step decoding decision accelerated decoding method, device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120165706B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113872610A (en) * | 2021-10-08 | 2021-12-31 | 华侨大学 | LDPC code neural network training and decoding method and system |
| CN115378443A (en) * | 2022-08-11 | 2022-11-22 | 西安工业大学 | Low-precision SC (standard code) decoding algorithm for deep neural network polar code |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180357530A1 (en) * | 2017-06-13 | 2018-12-13 | Ramot At Tel-Aviv University Ltd. | Deep learning decoding of error correcting codes |
| CN110741553B (en) * | 2017-06-22 | 2023-11-03 | 瑞典爱立信有限公司 | Neural network for forward error correction decoding |
| WO2019213947A1 (en) * | 2018-05-11 | 2019-11-14 | Qualcomm Incorporated | Improved iterative decoder for ldpc codes with weights and biases |
| CN118367945B (en) * | 2024-06-20 | 2024-10-18 | 南京信息工程大学 | LDPC code minimum sum decoding method and device based on neural network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |