
CN120165706B - A multi-step decoding decision accelerated decoding method, device and storage medium - Google Patents

A multi-step decoding decision accelerated decoding method, device and storage medium

Info

Publication number
CN120165706B
CN120165706B
Authority
CN
China
Prior art keywords
decoding
neural network
iteration
learning algorithm
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510640584.7A
Other languages
Chinese (zh)
Other versions
CN120165706A (en)
Inventor
饶志宏
任祥维
魏兴雲
徐锐
何健辉
吴治霖
朱永川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 30 Research Institute
Original Assignee
CETC 30 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 30 Research Institute filed Critical CETC 30 Research Institute
Priority to CN202510640584.7A priority Critical patent/CN120165706B/en
Publication of CN120165706A publication Critical patent/CN120165706A/en
Application granted granted Critical
Publication of CN120165706B publication Critical patent/CN120165706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29 Combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2948 Iterative decoding
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/25 Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM]
    • H03M13/251 Error detection or forward error correction by signal space coding with block coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention relates to the field of communications technologies and discloses a multi-step decoding decision accelerated decoding method, device and storage medium. For linear block code decoding based on a neural network learning algorithm, the method computes the component addition (a weighted combination) of the results of multiple decoding iterations and uses it as the basis for selecting the learning rate. By setting the number of iterative-decoding-result components and the addition coefficients for different code lengths, code rates and channel conditions, the weights of the neural network learning algorithm are optimally adjusted, thereby improving the training learning rate.

Description

Multi-step decoding decision accelerated decoding method, device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a multi-step decoding decision accelerated decoding method, device and storage medium.
Background
The rapid development of communications requires support for high-speed, low-latency, high-reliability transmission in applications such as autonomous driving. Transmitting information as correctly as possible has always been the goal of communication technology. Channel error-correction coding is an indispensable anti-interference technique, and linear block codes are widely used to ensure correct information transmission in communication systems that demand low complexity. Neural network learning algorithms have demonstrated strong classification and fitting ability in applications such as speech, image and natural language processing. Combining a neural network learning algorithm with linear block code decoding has been shown to improve performance over the original decoding algorithms; in particular, some linear block codes for which soft-decision iterative decoding was previously difficult have achieved good decoding performance when decoded with a neural network learning algorithm.
Existing work focuses on constructing encoding and decoding systems with neural networks and basically follows conventional decoding procedures; it lacks targeted study of how to make the decision processing in decoding more efficient, and no dedicated technique has emerged for this problem. Moreover, for linear block code decoding with a neural network learning algorithm, training convergence becomes very slow as the block length increases, which makes initial model training and subsequent retraining very costly in time.
Disclosure of Invention
To solve the above problems, the present invention provides a multi-step decoding decision accelerated decoding method, device and storage medium, which improve the training learning rate and are applicable to a wide range of linear block codes.
The technical scheme adopted by the invention is as follows:
A multi-step decoding decision accelerated decoding method comprising:
For linear block code decoding based on a neural network learning algorithm, computing the component addition of the results of multiple decoding iterations as the basis for selecting the learning rate;
Based on different code lengths, code rates and channel conditions, achieving optimal adjustment of the neural network learning algorithm weights by setting the number of iterative-decoding-result components and the addition coefficients, thereby improving the learning rate.
Further, the method comprises the following steps:
Step 1: set the initial step size γ^(0) of the neural network learning algorithm weights;
Step 2: compute the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm, and judge whether decoding succeeded; if so, end; otherwise execute the next step;
Step 3: compute the cross-entropy loss function L(o^(t), c) of the t-th iteration, where c is a codeword of the binary block code;
Step 4: based on the cross-entropy loss function L(o^(t), c), compute the accumulated value S^(t) of the t-th iteration;
Step 5: adjust the step size based on the accumulated value S^(t) to obtain the updated step size γ^(t);
Step 6: adjust the neural network learning algorithm weights based on the updated step size γ^(t), execute the next iteration and go to Step 2.
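Stated compactly, and in notation assumed here for concreteness (L for the cross-entropy loss, S for the accumulated value, γ for the step size; K and the coefficients α are the component count and addition coefficients set by the method, defined further below):

```latex
S^{(t)} = \sum_{k=1}^{K} \alpha_k^{(t)}\, L\!\left(o^{(t-k+1)}, c\right),
\qquad
\gamma^{(t)} = f\!\left(\gamma^{(t-1)},\, S^{(t)};\, \delta,\, \varepsilon\right)
```

Here f denotes the three-case step-size selection rule of Step 5; the weighted sum S^(t) is the "component addition" of the iterative decoding results, with the α_k^(t) as the addition coefficients.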
Further, the neural network learning algorithm weights include: w_j^(t), the weight of the j-th variable node at the t-th iteration; w_{i,j}^(t), the weight between check node c_i and variable node v_j at the t-th iteration; and w_{i',j}^(t), the weight between check node c_{i'} and variable node v_j at the t-th iteration, where i' is a positive integer and c_{i'} takes values in the check set M(v_j) in which variable node v_j participates.
Further, in Step 2, the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm is computed by combining the weighted channel log-likelihood with the weighted messages from the participating checks, where ⊞ denotes the combining arithmetic operation, ℓ is the log-likelihood function, x_{i,j}^(t) is the variable value computed between check node c_i and variable node v_j at the t-th iteration, and M(v_j) is the check set in which variable node v_j participates.
Further, in Step 3, the cross-entropy loss function of the t-th iteration is computed in the standard binary form
L(o^(t), c) = −(1/N) Σ_{j=1}^{N} [ c_j log o_j^(t) + (1 − c_j) log(1 − o_j^(t)) ],
where c = (c_1, …, c_N) is a codeword of the binary block code with code length N and information bit length N − M (N and M positive integers), and c_j is the j-th code bit.
Further, in Step 4, the accumulated value S^(t) of the t-th iteration is computed from the cross-entropy loss function by the component addition of the losses of the most recent iterations, where α^(t) is the accumulation influence factor of the t-th iteration and α_k^(t) is the k-th accumulation influence factor of the t-th iteration.
Further, in Step 5, the step size is adjusted based on the accumulated value S^(t) to obtain the updated step size γ^(t), comprising: taking the first p terms of the accumulated value of the t-th iteration together with the current step size, where γ with a subscript denotes the step size at that iteration count; setting a positive deviation value δ and a minimum value ε; and selecting the updated step size according to three cases, determined by comparing the accumulated value against δ and ε.
Further, the neural network learning algorithm weights are adjusted based on the updated step size γ^(t).
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the above multi-step decoding decision accelerated decoding method when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the above multi-step decoding decision accelerated decoding method.
The invention has the beneficial effects that:
The invention uses the component addition of the results of multiple decoding iterations as the basis for learning rate selection, and achieves optimal adjustment of the neural network weights under different code lengths, code rates and channel conditions by setting the number of iterative-decoding-result components and the addition coefficients, thereby improving the training learning rate. The invention is applicable to a wide range of linear block codes, including low-density parity-check (LDPC) codes.
Drawings
Fig. 1 is a bipartite-graph (Tanner graph) representation of a linear block code according to embodiment 1 of the present invention.
Fig. 2 is a flow chart of a multi-step decoding decision acceleration decoding method according to embodiment 1 of the present invention.
Detailed Description
Specific embodiments of the present invention will now be described in order to provide a clearer understanding of the technical features, objects and effects of the present invention. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Example 1
Because the training of linear block code decoders based on neural network learning algorithms converges more and more slowly as the block length increases, initial model training and subsequent retraining become very costly in time.
The linear block code decoding method based on the neural network learning algorithm comprises the following steps:
Take a binary block code C as an example, where N is the code length, N − M is the information bit length, and N and M are positive integers. An arbitrary codeword c = (c_1, …, c_N), with c_j in the binary field GF(2), is transmitted in the form x = (x_1, …, x_N), where x_j = 1 − 2c_j. Assume the codeword sent by the transmitting end is c; after transmission mapping and binary phase shift keying (BPSK) modulation, the signal passes through a channel with noise interference and finally reaches the receiving end. The receiving end demodulates it and outputs the decision signal y = (y_1, …, y_N), which is sent to the channel decoder; y is thus the received signal (input signal) of the channel decoder. The main task of the decoder is to perform error-correction decoding on the received sequence y so as to filter out channel errors and recover the transmitted codeword from the received signal with the smallest possible decoding error probability.
Let p(y_v | c_v) denote the conditional probability distribution function of the channel output; the log-likelihood ratio (LLR) of each symbol can then be calculated as
ℓ_v = log [ p(y_v | c_v = 0) / p(y_v | c_v = 1) ], v = 1, …, N.
Let H be the M × N check matrix of the binary block code C, where H_ij is the element in the i-th row and j-th column. FIG. 1 is its bipartite-graph (Tanner graph) representation: the codeword bits are represented as a set of variable nodes v_1, …, v_N, the parity checks as a set of check nodes c_1, …, c_M, and node c_i is connected to node v_j by an edge only when H_ij = 1. Let the set M(v_j) denote the check set in which variable v_j participates, M(v_j)\c_i its subset that excludes c_i, N(c_i) the local set of symbols constrained by check node c_i, and N(c_i)\v_j its subset that excludes v_j. Historically, bipartite graphs were used to describe a specific class of linear block codes, namely LDPC codes, but they also describe linear block codes in general; in particular, neural network decoding of BCH codes on the bipartite graph has obtained very good results, so the bipartite-graph representation generalizes well.
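As a concrete illustration, the following sketch builds the Tanner-graph adjacency sets M(v_j) and N(c_i) from a parity-check matrix and computes the channel LLRs; the AWGN channel, the (7,4) Hamming matrix and all function names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def tanner_sets(H):
    """Adjacency sets of the Tanner graph: M_v[j] lists the checks in which
    variable v_j participates, N_c[i] the variables constrained by check c_i."""
    M_v = [np.flatnonzero(H[:, j]) for j in range(H.shape[1])]
    N_c = [np.flatnonzero(H[i, :]) for i in range(H.shape[0])]
    return M_v, N_c

def channel_llr(y, sigma2):
    """LLR l_v = log p(y_v|c_v=0)/p(y_v|c_v=1) for BPSK (x = 1 - 2c) over an
    AWGN channel with noise variance sigma2, which simplifies to 2*y_v/sigma2."""
    return 2.0 * np.asarray(y, dtype=float) / sigma2

# Illustrative (7,4) Hamming parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
M_v, N_c = tanner_sets(H)
llr = channel_llr(y=[0.9, -1.1, 0.8, 1.2, -0.7, 1.0, 0.9], sigma2=0.5)
```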
The iteration of the linear block code decoding method based on the neural network learning algorithm proceeds as follows: for a positive integer t, the t-th iteration updates the message x_{i,j}^(t) on each edge (c_i, v_j) of the Tanner graph from the weighted channel log-likelihoods and the messages of the previous iteration, where the weights w are the coefficients that neural network decoding needs to train and σ^{-1} is the inverse of the function σ; substituting the definition of σ then gives the iteration in its trained, weighted form.
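The iteration equations appear as display formulas in the original; a widely used neural belief-propagation form consistent with the quantities defined above (channel LLRs ℓ, trainable weights w, edge messages x, and σ taken to be the sigmoid) is, as an assumed reconstruction rather than the patent's exact formulas:

```latex
x_{i,j}^{(t)} = 2\tanh^{-1}\!\Biggl(\prod_{v_{j'} \in N(c_i)\setminus v_j}
  \tanh\!\Bigl(\tfrac{1}{2}\bigl(w_{j'}^{(t)}\,\ell_{j'}
  + \textstyle\sum_{c_{i'} \in M(v_{j'})\setminus c_i} w_{i',j'}^{(t)}\, x_{i',j'}^{(t-1)}\bigr)\Bigr)\Biggr)

o_j^{(t)} = \sigma\!\Bigl(-\bigl(w_j^{(t)}\,\ell_j
  + \textstyle\sum_{c_i \in M(v_j)} w_{i,j}^{(t)}\, x_{i,j}^{(t)}\bigr)\Bigr),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
```

With the LLR convention ℓ_v = log p(y_v | c_v = 0) / p(y_v | c_v = 1), the minus sign makes o_j^(t) an estimate of the probability that bit c_j equals 1, so the hard decision is ĉ_j = 1 when o_j^(t) > 1/2.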
Based on the above, in order to solve the problem that the convergence of coefficient training slows down as the code length increases in the linear block code decoding method based on the neural network learning algorithm, this embodiment provides a multi-step decoding decision accelerated decoding method: the component addition of the results of multiple decoding iterations is used as the basis for selecting the learning rate, and, for different code lengths, code rates and channel conditions, the number of iterative-decoding-result components and the addition coefficients are set so as to optimally adjust the neural network learning algorithm weights and thereby improve the learning rate.
Specifically, as shown in fig. 2, the method of this embodiment may be implemented by the following steps:
Step 1: set the initial step size γ^(0) of the neural network learning algorithm weights;
Step 2: compute the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm, and judge whether decoding succeeded; if so, end; otherwise execute the next step;
Step 3: compute the cross-entropy loss function L(o^(t), c) of the t-th iteration, where c is a codeword of the binary block code;
Step 4: based on the cross-entropy loss function L(o^(t), c), compute the accumulated value S^(t) of the t-th iteration;
Step 5: adjust the step size based on the accumulated value S^(t) to obtain the updated step size γ^(t);
Step 6: adjust the neural network learning algorithm weights based on the updated step size γ^(t), execute the next iteration and go to Step 2 (see the training-loop sketch after this list).
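A minimal end-to-end sketch of Steps 1–6 under stated assumptions: `tanner_sets` is from the earlier sketch; `update_messages` and `backprop` are placeholders (the message pass follows the neural-BP form above, and gradients are assumed to come from backpropagation through the unrolled decoder); `decoder_output`, `cross_entropy`, `accumulated_value` and `adjust_step` are sketched after the corresponding steps below; all names and default parameters are illustrative:

```python
import numpy as np

def train_decode(llr, c, H, weights, gamma0, K=3, delta=0.1, eps=1e-4, T=50):
    """Steps 1-6: iterate neural decoding, accumulate recent losses into S^(t),
    adapt the step size gamma^(t), and update the trainable weights."""
    M_v, _ = tanner_sets(H)                             # Tanner-graph adjacency
    gamma, losses, S = gamma0, [], np.inf               # Step 1: initial step size
    msgs = np.zeros(H.shape)                            # edge messages x^(0)
    for t in range(1, T + 1):
        msgs = update_messages(llr, weights, msgs, H)   # neural-BP pass (form above)
        o = decoder_output(llr, weights, msgs, M_v)     # Step 2: soft outputs o^(t)
        c_hat = (o > 0.5).astype(int)                   # hard decision per bit
        if not np.any(H @ c_hat % 2):                   # decoding success: zero syndrome
            return c_hat, weights
        losses.append(cross_entropy(o, c))              # Step 3: loss L(o^(t), c)
        S_prev, S = S, accumulated_value(losses, K)     # Step 4: accumulated value
        gamma = adjust_step(gamma, S, S_prev, delta, eps)  # Step 5: updated step size
        grads = backprop(llr, c, weights)               # Step 6: gradients (autodiff assumed)
        weights = {k: weights[k] - gamma * grads[k] for k in weights}
    return c_hat, weights
```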
In this embodiment, the neural network learning algorithm weights include: w_j^(t), the weight of the j-th variable node at the t-th iteration; w_{i,j}^(t), the weight between check node c_i and variable node v_j at the t-th iteration; and w_{i',j}^(t), the weight between check node c_{i'} and variable node v_j at the t-th iteration, where i' is a positive integer and c_{i'} takes values in the check set M(v_j) in which variable node v_j participates.
It should be noted that the initial step sizes of the neural network learning algorithm weights may be the same or different; this embodiment takes the same initial step size as an example.
Preferably, in step 2, the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm is computed by combining the weighted channel log-likelihood with the weighted messages from the participating checks (as shown in the sketch below), where ⊞ denotes the combining arithmetic operation, ℓ is the log-likelihood function, x_{i,j}^(t) is the variable value computed between check node c_i and variable node v_j at the t-th iteration, and M(v_j) is the check set in which variable node v_j participates.
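A sketch of this output computation, assuming the sigmoid marginalization of the neural-BP form above; the sign convention, the `weights` dictionary layout and the `msgs` array layout are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decoder_output(llr, weights, msgs, M_v):
    """Per-bit output o_j^(t): sigmoid of minus the weighted channel LLR plus
    the weighted incoming check-to-variable messages, so that o_j estimates
    P(c_j = 1) under the LLR convention l = log p(y|0)/p(y|1)."""
    N = len(llr)
    o = np.empty(N)
    for j in range(N):
        s = weights["w_var"][j] * llr[j]
        for i in M_v[j]:                            # checks in which v_j participates
            s += weights["w_edge"][i, j] * msgs[i, j]
        o[j] = sigmoid(-s)
    return o
```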
Preferably, in step 3, the cross-entropy loss function of the t-th iteration is computed in the standard binary form
L(o^(t), c) = −(1/N) Σ_{j=1}^{N} [ c_j log o_j^(t) + (1 − c_j) log(1 − o_j^(t)) ],
where C is the binary block code with code length N and information bit length N − M, N and M are positive integers, and c_j is the j-th code bit.
Preferably, in step 4, the accumulated value S^(t) of the t-th iteration is computed from the cross-entropy loss function by the component addition of the losses of the most recent iterations (as sketched below), where α^(t) is the accumulation influence factor of the t-th iteration and α_k^(t) is the k-th accumulation influence factor of the t-th iteration; the factors may be the same or different.
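A sketch of Steps 3–4; the uniform default weighting over the last K losses is an assumption (the patent leaves the influence factors free to be equal or different):

```python
import numpy as np

def cross_entropy(o, c):
    """Step 3: binary cross-entropy L(o^(t), c) between soft outputs and codeword."""
    o = np.clip(np.asarray(o, dtype=float), 1e-12, 1.0 - 1e-12)  # guard log(0)
    c = np.asarray(c, dtype=float)
    return float(-np.mean(c * np.log(o) + (1.0 - c) * np.log(1.0 - o)))

def accumulated_value(losses, K, alphas=None):
    """Step 4: component addition S^(t) of the last K iteration losses."""
    window = losses[-K:]                        # most recent components
    if alphas is None:                          # uniform coefficients (assumption)
        alphas = [1.0 / len(window)] * len(window)
    return sum(a * l for a, l in zip(alphas, window))
```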
Preferably, in step 5, the step size is adjusted based on the accumulated value S^(t) to obtain the updated step size γ^(t), comprising: taking the first p terms of the accumulated value of the t-th iteration together with the current step size, where γ with a subscript denotes the step size at that iteration count; setting a positive deviation value δ and a minimum value ε; and selecting the updated step size according to three cases, determined by comparing the accumulated value against δ and ε, as sketched below.
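The three cases are given as formulas in the patent; the rule below is one plausible instantiation stated as an assumption, not the patented formula: grow the step while the accumulated loss keeps falling by more than δ, shrink it when the loss rebounds, and leave it unchanged once the loss drops below the minimum ε (the growth and shrink factors are also illustrative):

```python
def adjust_step(gamma, S_curr, S_prev, delta=0.1, eps=1e-4, up=1.1, down=0.5):
    """Step 5 (assumed form): three-case step-size selection from the
    accumulated values, the positive deviation delta and the minimum eps."""
    if S_curr < eps:                  # case 3: near convergence -> keep the step
        return gamma
    if S_prev - S_curr > delta:       # case 1: loss falling fast -> larger step
        return gamma * up
    if S_curr - S_prev > delta:       # case 2: loss rebounding -> smaller step
        return gamma * down
    return gamma                      # otherwise unchanged
```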
Preferably, the neural network learning algorithm weights are adjusted based on the updated step size γ^(t), as sketched below.
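A sketch of the weight adjustment, assuming a plain gradient step with the adapted step size (the patent's exact update formula may differ):

```python
def update_weights(weights, grads, gamma):
    """Step 6 (assumed form): move each trainable weight against its loss
    gradient with the adapted step size gamma^(t)."""
    return {name: w - gamma * grads[name] for name, w in weights.items()}
```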
In summary, the method of this embodiment uses the component addition of the results of multiple decoding iterations as the basis for learning rate selection, and sets the number of iterative-decoding-result components and the addition coefficients so as to optimally adjust the neural network weights under different code lengths, code rates and channel conditions, thereby improving the training learning rate.
Example 2
This example is based on example 1:
The present embodiment provides a computer device comprising a memory and a processor; the memory stores a computer program, and the processor implements the multi-step decoding decision accelerated decoding method of Embodiment 1 when executing the computer program. The computer program may be in source code form, object code form, an executable file, some intermediate form, or the like.
Example 3
This example is based on example 1:
The present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-step decoding decision accelerated decoding method of Embodiment 1. The computer program may be in source code form, object code form, an executable file, some intermediate form, or the like. The storage medium includes any entity or device capable of carrying computer program code, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media. It should be noted that the content encompassed by the storage medium may be appropriately expanded or limited according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the storage medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, since some steps may be performed in another order or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.

Claims (8)

1. A multi-step decoding decision accelerated decoding method, characterized by comprising: for linear block code decoding based on a neural network learning algorithm, computing the component addition of the results of multiple decoding iterations as the basis for selecting the learning rate; and, based on different code lengths, code rates and channel conditions, achieving optimal adjustment of the neural network learning algorithm weights by setting the number of iterative-decoding-result components and the addition coefficients, thereby improving the learning rate; the multi-step decoding decision accelerated decoding method comprising the following steps:
Step 1: set the initial step size γ^(0) of the neural network learning algorithm weights;
Step 2: compute the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm, and judge whether decoding succeeded; if so, end; otherwise execute the next step;
Step 3: compute the cross-entropy loss function L(o^(t), c) of the t-th iteration, where c is a codeword of the binary block code;
Step 4: based on the cross-entropy loss function, compute the accumulated value S^(t) of the t-th iteration;
Step 5: adjust the step size based on the accumulated value S^(t) to obtain the updated step size γ^(t);
Step 6: adjust the neural network learning algorithm weights based on the updated step size γ^(t), execute the next iteration and go to Step 2;
wherein, in Step 5, adjusting the step size based on the accumulated value to obtain the updated step size comprises: taking the first p terms of the accumulated value of the t-th iteration together with the current step size, where γ with a subscript denotes the step size at that iteration count; setting a positive deviation value δ and a minimum value ε; and selecting the updated step size according to three cases, determined by comparing the accumulated value against δ and ε.
2. The multi-step decoding decision accelerated decoding method according to claim 1, characterized in that the neural network learning algorithm weights include: w_j^(t), the weight of the j-th variable node at the t-th iteration; w_{i,j}^(t), the weight between check node c_i and variable node v_j at the t-th iteration; and w_{i',j}^(t), the weight between check node c_{i'} and variable node v_j at the t-th iteration, where i' is a positive integer and c_{i'} takes values in the check set M(v_j) in which variable node v_j participates.
3. The multi-step decoding decision accelerated decoding method according to claim 2, characterized in that, in step 2, computing the output o^(t) after the t-th iteration of the linear block code decoding based on the neural network learning algorithm comprises: combining the weighted channel log-likelihood with the weighted messages from the participating checks, where ⊞ is the combining arithmetic operation, ℓ is the log-likelihood function, x_{i,j}^(t) is the variable value computed between check node c_i and variable node v_j at the t-th iteration, and M(v_j) is the check set in which variable node v_j participates.
4. The multi-step decoding decision accelerated decoding method according to claim 3, characterized in that, in step 3, computing the cross-entropy loss function of the t-th iteration comprises: L(o^(t), c) = −(1/N) Σ_{j=1}^{N} [ c_j log o_j^(t) + (1 − c_j) log(1 − o_j^(t)) ], where C is a binary block code with code length N and information bit length N − M, N and M are positive integers, and c_j is the j-th code bit.
5. The multi-step decoding decision accelerated decoding method according to claim 4, characterized in that, in step 4, computing the accumulated value S^(t) of the t-th iteration based on the cross-entropy loss function comprises: the component addition of the losses of the most recent iterations, where α^(t) is the accumulation influence factor of the t-th iteration and α_k^(t) is the k-th accumulation influence factor of the t-th iteration.
6. The multi-step decoding decision accelerated decoding method according to claim 1, characterized in that, in step 6, the neural network learning algorithm weights are adjusted based on the updated step size γ^(t).
7. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the multi-step decoding decision accelerated decoding method according to any one of claims 1-6 when executing the computer program.
8. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the multi-step decoding decision accelerated decoding method according to any one of claims 1-6 is implemented.
CN202510640584.7A 2025-05-19 2025-05-19 A multi-step decoding decision accelerated decoding method, device and storage medium Active CN120165706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510640584.7A CN120165706B (en) 2025-05-19 2025-05-19 A multi-step decoding decision accelerated decoding method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510640584.7A CN120165706B (en) 2025-05-19 2025-05-19 A multi-step decoding decision accelerated decoding method, device and storage medium

Publications (2)

Publication Number Publication Date
CN120165706A CN120165706A (en) 2025-06-17
CN120165706B true CN120165706B (en) 2025-07-22

Family

ID=96007151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510640584.7A Active CN120165706B (en) 2025-05-19 2025-05-19 A multi-step decoding decision accelerated decoding method, device and storage medium

Country Status (1)

Country Link
CN (1) CN120165706B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113872610A (en) * 2021-10-08 2021-12-31 华侨大学 LDPC code neural network training and decoding method and system
CN115378443A (en) * 2022-08-11 2022-11-22 西安工业大学 Low-precision SC (standard code) decoding algorithm for deep neural network polar code

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180357530A1 (en) * 2017-06-13 2018-12-13 Ramot At Tel-Aviv University Ltd. Deep learning decoding of error correcting codes
CN110741553B (en) * 2017-06-22 2023-11-03 瑞典爱立信有限公司 Neural network for forward error correction decoding
WO2019213947A1 (en) * 2018-05-11 2019-11-14 Qualcomm Incorporated Improved iterative decoder for ldpc codes with weights and biases
CN118367945B (en) * 2024-06-20 2024-10-18 南京信息工程大学 LDPC code minimum sum decoding method and device based on neural network


Also Published As

Publication number Publication date
CN120165706A (en) 2025-06-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant