US20100058149A1 - Decoder of error correction codes - Google Patents
- Publication number: US20100058149A1
- Authority
- US
- United States
- Prior art keywords
- symbols
- parity
- group
- codeword
- symbol
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H03M13/1191—Codes on graphs other than LDPC codes
- H03M13/1105—Decoding of codes on graphs, e.g. low-density parity check [LDPC] codes
- H03M13/1108—Hard decision decoding, e.g. bit flipping, modified or weighted bit flipping
- H03M13/13—Linear codes
- H03M13/3784—Decoding methods or techniques for soft-output decoding of block codes
- H03M13/451—Soft decoding using a set of candidate code words, e.g. ordered statistics decoding [OSD]
- H03M13/458—Soft decoding by updating bit probabilities or hard decisions in an iterative fashion for convergence to a final decoding result
- H03M13/616—Matrix operations, especially for generator matrices or check matrices, e.g. column or row permutations
- H04L1/0045—Arrangements at the receiver end for forward error control
- H04L1/0057—Block codes
- H03M13/6362—Error control coding in combination with rate matching by puncturing
Definitions
- the invention relates to the field of communications and, more specifically, to systems dedicated to decoding of error correction codes such as linear block codes.
- Coding and decoding of data by the use of parity-check codes and/or block codes are techniques frequently used in data transmission systems in order to ensure the reliability of data transmission.
- By coding, data is pre-processed in such a way as to allow for reliable data recovery by decoding.
- Decoding is the process of reconstructing the transmitted data from codewords which have been coded and transmitted through unreliable media. Many different coding/decoding approaches are available depending on system and transmission channel considerations.
- FIG. 1 is a schematic representation of a communications system comprising a transmitter and a receiver
- FIG. 2 shows a parity-check matrix H and a Tanner graph representation of the parity-check matrix H
- FIG. 3 shows a parity-check matrix HD and a configuration graph defined by the parity-check matrix HD
- FIG. 4 is a flow diagram illustrating a method of decoding
- FIG. 5 is a flow diagram illustrating a method of decoding
- FIG. 6 is a schematic representation of a decoder
- FIG. 7 is a schematic representation of a decoder
- FIG. 8 is a graph showing, for a first code, the word error rate of a decoder output plotted versus the energy per bit to noise power spectral density ratio.
- FIG. 1 shows an embodiment of a communications system comprising a transmitter 100 and a receiver 200 .
- the transmitter 100 comprises an encoder 101 and may comprise further data processing units such as a modulator 102 , a puncturing device (not shown) etc.
- the receiver 200 may comprise a demodulator 201 , further data processing units such as e.g. a de-puncturing device (not shown) and a decoder 202 .
- An output signal 103 of the transmitter 100 is transmitted via a channel 300 to form an input signal 204 of the receiver 200 .
- FIG. 1 provides a schematic and strongly simplified representation of the transmitter 100 and the receiver 200 of the communications system.
- the transmitter 100 and the receiver 200 are equipped with additional data processing units (not shown) which are typically adapted to the specific field of application, the type of the channel 300 and other considerations, such as requirements or constraints.
- the data communications system may be a radio communications system, in particular a mobile radio communications system, in which the channel 300 is represented by a radio link.
- Examples of such communications systems are digital video broadcasting-satellite (DVB-S) systems such as e.g. the DVB-S2 system, wireless local area network (WLAN) systems such as e.g. IEEE 802.11n, or wireless metropolitan area network (WMAN) systems such as e.g. stipulated in IEEE 802.16.
- embodiments may comprise communications systems in which the channel 300 is established by a wired electrical or an optical link.
- the encoder 101 provides for a forward error correction (FEC) in the communications system.
- Block codes providing forward error correction (FEC) include e.g. Bose-Chaudhuri-Hocquenghem (BCH) codes and Low Density Parity-Check (LDPC) codes.
- Any linear block code can be described by a parity-check matrix H.
- c is a codeword of the linear block code represented by the parity-check matrix H if and only if equation (1) is valid:
- H·c^T = 0.  (1)
- Superscript T refers to the transpose of the indexed quantity.
- Codeword generation out of the dataword d may be written as
- c = d·G,  (2)
- where one or more generator matrices G may be computed from the parity-check matrix H. If binary data are used (i.e. symbols are represented by bits), the operations in equations (1) and (2) are carried out in the binary field GF(2).
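Equations (1) and (2) can be sketched in a few lines of Python. The small matrices H and G below are illustrative choices made for this sketch (H happens to match the small H_D example used later in the text); they are not taken from the patent figures:

```python
# Illustrative parity-check matrix H = [A | I] (n = 5, m = 2, k = 3)
H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]

# Matching systematic generator matrix G = [I | A^T], so G * H^T = 0 over GF(2)
G = [[1, 0, 0, 1, 0],
     [0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1]]

def is_codeword(H, c):
    # Equation (1): c is a codeword iff H * c^T = 0 over GF(2),
    # i.e. every row (parity-check) of H is satisfied by c.
    return all(sum(h * cj for h, cj in zip(row, c)) % 2 == 0 for row in H)

def encode(G, d):
    # Equation (2): codeword generation c = d * G over GF(2).
    n = len(G[0])
    return [sum(di * G[i][j] for i, di in enumerate(d)) % 2 for j in range(n)]
```

For the dataword d = [1, 0, 1] this encoder yields the codeword [1, 0, 1, 1, 1], which satisfies both parity-checks of H.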
- the codeword c is fed into the modulator 102 . Different modulation and/or puncturing schemes may be applied therein. After modulation, optional puncturing and adding of supplementary data, the codeword c is sent out by the transmitter 100 as a data block.
- the output of the modulator 102 may be subjected to further data processing in order to prepare the signal 103 for channel transmission.
- further data processing is not depicted in FIG. 1 .
- further data processing may comprise digital-to-analog conversion, up-conversion of the signal to a radio frequency band, amplification and radiating the amplified signal via a transmit antenna (not shown).
- the input signal 204 is obtained e.g. by a receive antenna (not shown) or a physical channel 300 and, in one embodiment, may be subjected to signal processing not depicted in FIG. 1 , such as e.g. filtering, down-conversion to an intermediate frequency (IF) band or baseband, analog-to-digital conversion and so on.
- the demodulator 201 may demodulate and optionally de-puncture the received signal 204 and output a received vector r.
- the decoder 202 decodes the received vector r to yield the decoded word x.
- the parity-check matrix H can be used to generate a graph which also represents a block code, a so-called Tanner graph.
- FIG. 2 illustrates a parity-check matrix H of an LDPC code and the corresponding Tanner graph (for simplicity, the parity-check matrix H shown here is much smaller than the parity-check matrices of practically used LDPC codes).
- a Tanner graph is a bipartite graph.
- the nodes of this graph can be divided into two classes.
- One class is termed check nodes (C-nodes, drawn as squares) and the other class is termed variable or bit nodes (V-nodes, drawn as circles).
- the C-nodes correspond to the rows of the parity-check matrix H (e.g. C1 corresponds to row 1 of H) and the V-nodes correspond to the columns of the parity-check matrix H (e.g. V1 corresponds to column 1). A C-node is connected to the V-node Vv if and only if the row of H corresponding to that C-node has a 1 at column v, and vice versa.
- the degree of a node in the Tanner graph is the number of lines connected to this node (e.g. V-node V 4 has degree 2 , C-node C 1 has degree 3 ).
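The Tanner graph connections and node degrees follow directly from H. The sketch below uses a small illustrative matrix (the matrix of FIG. 2 is not reproduced in the text); the function names are my own:

```python
H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]

def tanner_edges(H):
    # C-node p is connected to V-node v iff H[p][v] == 1.
    return {(p, v) for p, row in enumerate(H) for v, h in enumerate(row) if h == 1}

def degrees(H):
    # Degree of a node = number of lines connected to it:
    # C-node degrees are the row weights, V-node degrees the column weights.
    c_deg = [sum(row) for row in H]
    v_deg = [sum(row[v] for row in H) for v in range(len(H[0]))]
    return c_deg, v_deg
```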
- LDPC codes are decoded using belief propagation (BP) algorithms.
- BP algorithms are also known as sum-product algorithms or message passing algorithms. These algorithms operate via passing messages through the Tanner graph.
- The configuration graph CG = (V, E) is defined as follows: V is the set of nodes of the configuration graph CG
- and E is the set of lines of the configuration graph CG.
- Every row p of HD represents a parity-check.
- Each parity-check p defines an index set J p of codeword symbols participating in parity-check p.
- the index set J p is defined by the elements of row p of the parity-check matrix HD which are unequal to zero.
- Each index set J p of codeword symbols associated with a parity-check p defines a plurality of symbol groups v pi indexed by i, whereby each symbol group v pi is a symbol pattern which satisfies the parity-check p at index set J p of codeword symbols.
- a symbol group v pi is referred to as a node v pi .
- the set E of lines of the configuration graph CG is built according to the following rule: two nodes v_ji and v_kl are interconnected if at least one symbol of node v_ji and one symbol of node v_kl, located at a common position of the index sets J_j and J_k, are different.
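This interconnection rule can be expressed as a small predicate. In the sketch below (names and representation are my own), a node is represented as a dict mapping codeword positions (its index set J_p) to symbol values:

```python
def connected(node_a, node_b):
    # Two nodes of the configuration graph CG are interconnected iff their
    # symbol patterns differ at at least one common position of their index sets.
    common = node_a.keys() & node_b.keys()
    return any(node_a[j] != node_b[j] for j in common)
```

Nodes with disjoint index sets, or nodes that agree at every shared position, are not connected and may therefore appear together in a vertex packing.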
- a set of nodes wherein no node of the set is connected to any other node of the set is referred to as a vertex packing.
- the construction principle explained above ensures that any vertex packing with m elements (i.e. with m nodes) represents a codeword.
- a vertex packing with m elements is termed m-packing in the following.
- any m-packing is a (valid) codeword.
- the configuration graph CG is a representation of a rule teaching how to construct a codeword from nodes v pi (i.e. symbol groups) associated with index sets J p of codeword symbol positions.
- Such composition of an m-packing (codeword) may be carried out step by step by picking up one v pi node after the other.
- decoding, and in particular maximum likelihood (ML) decoding, can be performed by finding the maximum-weighted vertex packing; this task is known as the maximum-weighted vertex packing problem (VPP).
- binary block codes are considered in the following.
- the elements of the parity-check matrix and the symbols are bits, i.e. 0 or 1.
- a parity-check p can only be satisfied with an even number of participating bits set to 1.
- a node (bit group) v_pi always has an even number of bits of value 1. If, for example, row p of parity-check matrix HD has z_p entries of value 1, the set of nodes associated with parity-check p is given by the set of all z_p-tuples having an even number of bits of value 1.
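Enumerating this node set is straightforward; a sketch (the function name is my own):

```python
from itertools import product

def nodes_of_check(z_p):
    # All z_p-tuples of bits with an even number of ones: the node set of a
    # parity-check in which z_p bits participate. There are 2**(z_p - 1) such nodes.
    return [bits for bits in product((0, 1), repeat=z_p) if sum(bits) % 2 == 0]
```

For z_p = 3 this yields the four even-weight patterns (0,0,0), (1,1,0), (1,0,1) and (0,1,1).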
- H_D = [ 1 1 0 1 0 ; 0 1 1 0 1 ], i.e. a 2×5 binary matrix with rows (1 1 0 1 0) and (0 1 1 0 1).
- The parity-check matrix underlying the configuration graph CG need not be a matrix of the form HD as defined above. It could also be a matrix having a triangular sub-matrix D or any other adequate parity-check matrix H representing the block code used.
- y_i being the log-likelihood ratios calculated from the received data samples r_i, e.g. by y_i = ln( P(c_i = 0 | r_i) / P(c_i = 1 | r_i) ).
- C denotes the set of (valid) codewords.
- The solution of the maximum-weighted vertex packing problem (VPP) may require high computational expense.
- In the following, simpler algorithms for vertex packing decoding are provided.
- Because of the underlying structure of the configuration graph CG, in one embodiment, it is possible to perform decoding by using only a partial structure of the configuration graph CG rather than the whole configuration graph CG.
- This approach is based on finding the maximum node (i.e. symbol group) of a parity-check p in the configuration graph CG depending on the maximum nodes already decided in previous parity-checks and on reliability information of the received data.
- only the “next maximum possible node v pi ” may be of interest.
- An embodiment of a decoding process is illustrated in FIG. 4 .
- the maximum node of parity-check p which is not connected to any node in P, is computed at 403 .
- the maximum node may be calculated based on weights associated with the nodes of parity-check p. More specifically, the maximum node of parity-check p may be the node v pi having maximum weight w i .
- Another possible variant of the algorithm could be to generate codewords by not only selecting the maximum nodes of a parity-check, but also by selecting those with the second highest cost or any other nodes, using e.g. local search techniques.
- After the first maximum node has been added at 404, packing P contains one node.
- The symbols (e.g. bits) of the added maximum node may be fixed at 405. If the symbols are fixed, the algorithm may be configured to never reconsider its choice in one embodiment. Such an algorithm is often referred to as a greedy algorithm. It is to be noted that the algorithm may also be devised to reconsider choices, i.e. to change one or more symbols of a maximum node which has already been added to packing P.
- This new parity-check is then started at 402 and the process described above is repeated for this new parity-check.
- the maximum node of the current parity-check is obtained amongst those nodes whose symbols coincide with the already fixed symbols, because the maximum node of the current parity-check p is obtained amongst those nodes which are not connected in the configuration graph CG to any other node in packing P.
- the maximum node of the current parity-check p is selected based on the already selected node(s) and on reliability information on received data.
- This new maximum node is then added to P at 404 and optionally, the symbols thereof are fixed at 405 . This process is repeated until at 407 , it is found that all parity-checks have been processed.
- packing P is a vertex packing with m elements (i.e. is an m-packing) and thus represents a codeword. This codeword is then output at 408 .
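The loop of FIG. 4 can be sketched directly: for each parity-check, take the maximum-weight node compatible with packing P, add it and fix its symbols. The sketch below is a brute-force instance (it enumerates all nodes of each check, which is only practical for small row weights); names and the per-bit weighting are my own choices, the weighting being consistent with the cost function of equation (7):

```python
from itertools import product

def node_weight(node, y):
    # Per-bit weight (-2*b + 1) * y_j; a node is a dict {position: bit}.
    return sum((-2 * b + 1) * y[j] for j, b in node.items())

def greedy_vpp_decode(H, y):
    # FIG. 4 sketch: for each parity-check (402) take the maximum-weight node
    # compatible with packing P (403), add it (404) and fix its symbols (405);
    # after all checks (407) the m-packing is a codeword (408). Assumes every
    # check keeps at least one compatible node, as holds e.g. for matrices
    # of the form [D I] processed in row order.
    fixed = {}
    for row in H:
        J = [j for j, h in enumerate(row) if h == 1]
        candidates = [dict(zip(J, bits))
                      for bits in product((0, 1), repeat=len(J))
                      if sum(bits) % 2 == 0]
        compatible = [v for v in candidates
                      if all(v[j] == fixed[j] for j in v.keys() & fixed.keys())]
        best = max(compatible, key=lambda v: node_weight(v, y))
        fixed.update(best)
    return [fixed.get(j, 0) for j in range(len(y))]
```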
- A detailed example of a decoding algorithm using the decoding scheme of FIG. 4 is shown in FIG. 5.
- a binary representation of received data and code is used for ease of explanation.
- In the following, block B1 of the algorithm is described.
- the hard decision x_j of all bits participating in parity-check p is calculated from reliability information, e.g. log-likelihood ratios y or soft output values ỹ of a previous decoder, at 506.
- This hard decision x_j of all bits participating in parity-check p may be considered to represent the maximum node if this node satisfies the parity-check p and is not interconnected to any other node in packing P.
- the number O of “ones” in the node under consideration is counted. If this number is even, the node satisfies the parity-check p and therefore, the bits thereof may be fixed.
- this scheme is repeatedly applied to get the proper maximum nodes of the other parity-checks in such a way that the set of nodes always remains a valid vertex packing.
- this bit may be calculated according to 511 as the participating bit of parity-check p which is still free (not yet fixed) and has minimum absolute reliability, i.e. j = argmin over j ∈ FreeBits ∩ J_p of |y_j|.  (6)
- the m-packing yielded by this decoding scheme is a (valid) codeword of local maximum cost.
- the above algorithm relies on the following concept.
- the part of the reliability information associated with a first parity-check p is translated into a first hard-decided group of symbols.
- it is checked whether or not the first hard-decided group of symbols satisfies the first parity-check p.
- this check may e.g. simply be performed by evaluating whether or not the number of ones of the hard-decided group of bits is even or odd.
- the first hard-decided group of symbols is selected as a node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed). If, however, the first parity-check is not satisfied, one or more symbols of the first hard-decided group of symbols are changed based on the reliability information such that the first parity-check p is then satisfied. The first hard-decided group of symbols including the changed symbol is then selected as a node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed).
- the first group of symbols (first node) relating to the first parity-check is found.
- the part of the reliability information associated with the second parity-check is translated into a second hard-decided group of symbols. It is checked whether or not a second hard-decided group of symbols, which is compatible with the selected first group of symbols, satisfies the second parity-check. If the second parity-check is satisfied, this compatible second hard-decided group of symbols is selected as a second node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed).
- the second parity-check is not satisfied, one or more “free” (i.e. not yet fixed) symbols of the compatible second hard-decided group of symbols are changed based on the reliability information such that the second parity-check is then satisfied.
- This second hard-decided group of symbols including the changed symbol is then selected as a second node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed).
- the codeword may then be composed group-by-group out of these groups of symbols.
- the groups of symbols may be associated with a plurality of parity-checks rather than exactly one parity-check.
- this group may be arranged by combining two groups of symbols each associated with one of these parity-checks.
- Another possibility is to redefine the configuration graph CG in a way that a row of the configuration graph CG is associated with a plurality of parity-checks.
- the set of indices J_p associated with this "combinatorial" parity-check p may be given by the set union of the sets of indices relating to the individual parity-checks participating in the "combinatorial" parity-check p.
- groups of symbols as referred herein (as well as nodes of the CG) may relate to different numbers of parity-checks.
- As an example, let the reliability values, e.g. the log-likelihood ratios y_i of the received data signal, be
- y = [−5, −1, 1, −5, −6].
- For the first parity-check, the reliability values of the corresponding received data are −5, −1, −5.
- The hard decision of these reliability values is 1, 1, 1. Because the number of ones in the hard decision is odd, one bit should be flipped; according to the above algorithm this is the bit with the lowest absolute reliability value, here the bit at position 1 (i.e. the bit with reliability −1), which is flipped to 0.
- For the second parity-check, the reliability values of the corresponding received data are −1, 1, −6.
- The hard decision of these values, with the bit at position 1 fixed to zero from the first step, is 0, 0, 1. Again the number of ones in the hard decision is odd, so the bit which has the smallest absolute reliability value but is not fixed by previous steps is flipped. Here this is the bit at position 2. The resulting hard decision after flipping is 0, 1, 1, which corresponds to node v11 in the configuration graph CG.
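The two-step trace above can be reproduced with a short sketch of block B1 (hard decision plus one flip of the least reliable free bit per failed check). H_D and y are taken from the example; the function name and list representation are my own:

```python
def decode_b1(H, y, order=None):
    # Block B1 sketch (FIG. 5): hard-decide all bits (506); for each
    # parity-check count the ones, and if the count is odd flip the free bit
    # of smallest absolute reliability (equation (6)), then fix the bits of
    # the check. Assumes each failing check still has a free bit, as is
    # guaranteed e.g. by matrices of the form [D I] processed in row order.
    x = [1 if yj < 0 else 0 for yj in y]      # hard decisions
    free = set(range(len(y)))                 # the "FreeBits" index set
    for p in (order if order is not None else range(len(H))):
        J = [j for j, h in enumerate(H[p]) if h == 1]
        if sum(x[j] for j in J) % 2 == 1:     # parity-check p fails
            j_flip = min((j for j in J if j in free), key=lambda j: abs(y[j]))
            x[j_flip] ^= 1                    # flip least reliable free bit
        free -= set(J)                        # fix the bits of this node
    return x
```

With H_D = [[1,1,0,1,0],[0,1,1,0,1]] and y = [−5,−1,1,−5,−6] this reproduces the trace: bit 1 is flipped in the first check, bit 2 in the second, giving the codeword [1, 0, 1, 1, 1].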
- the algorithm relies on making a locally optimum choice (of the maximum node) at each stage of iteration with the hope of finding the global optimum (i.e. the ML decoding result). Several variants are possible; for instance, the degree of locality may be varied.
- the computation of the maximum node only relies on reliability information (e.g. log-likelihood ratios or soft output values) associated with the node of the current parity-check.
- reliability information used for computing a maximum node may also consider the “potential” of this selection in view of subsequent parity-checks, i.e. reliability information associated with parity-checks to be done later.
- the algorithm may reconsider choices made in the (near) past, resulting e.g. in that bits of an added maximum node may not necessarily be irrevocably fixed for the next parity-check.
- the outputted m-packing P of the algorithm described above is dependent on the order of parity-checks used in the iteration process.
- the algorithm may be repeated a number of times, every time starting from another parity-check.
- the algorithm may be started m times, i.e. from each parity-check and/or using different processing sequences of the parity-checks. This leads to a list of m codewords. Amongst these codewords, the codeword with the maximum cost may be chosen as a decoding result.
- a parity-check matrix of the form H_D = [D I], with I the m×m identity matrix, guarantees that an arbitrary order of parity-checks p may be used in the repeatedly performed algorithm.
- Block B 2 of FIG. 5 illustrates a flow chart showing a specific example of an extended (i.e. repeatedly performed) algorithm as outlined above.
- the steps in block B 2 relate to the “outer” algorithm relying on repeated executions of the “inner” algorithm of block B 1 , and a selection of the decoded codeword x as the codeword having maximum cost of the codewords x yielded by the inner algorithm.
- an incremental processing order of parity-checks is used.
- the cost of a codeword x may be computed at 531 on the basis of the log-likelihood ratio vector y according to
- Cost(x) = Σ_j (−2·x_j + 1)·y_j.  (7)
- Reliability information other than log-likelihood ratios may also be used as vector y in equation (7).
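Equation (7) and the outer block-B2 loop can be sketched as follows. The inner decoder is passed in as a callback; the restart schedule shown (one cyclic order per starting parity-check) is one concrete reading of "starting from another parity-check", and the names are my own:

```python
def cost(x, y):
    # Equation (7): Cost(x) = sum_j (-2*x_j + 1) * y_j.
    return sum((-2 * xj + 1) * yj for xj, yj in zip(x, y))

def decode_b2(H, y, decode_inner):
    # Outer algorithm (block B2) sketch: run the inner decoder once per
    # cyclic processing order of the m parity-checks and keep the candidate
    # codeword of maximum cost. decode_inner(H, y, order) is any inner
    # decoder, e.g. the block-B1 routine.
    m = len(H)
    best = None
    for s in range(m):
        order = [(s + t) % m for t in range(m)]
        x = decode_inner(H, y, order)
        if best is None or cost(x, y) > cost(best, y):
            best = x
    return best
```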
- The time complexity of the (inner) algorithm depicted in FIG. 4 or block B1 in FIG. 5 is O(m).
- This low complexity makes the extended algorithm especially interesting for high rate codes.
- FIG. 6 is a schematic representation of a decoder 2020 which may be used as a decoder 202 in FIG. 1 according to one embodiment.
- the decoder 2020 may comprise a block D 1 which is configured to perform e.g. calculations relating to the (inner) algorithm as depicted in FIG. 4 or FIG. 5 (steps in block B 1 ), and may optionally be equipped with a block D 2 which is configured to perform e.g. calculations relating to the extended algorithm such as e.g. depicted in block B 2 of FIG. 5 .
- Block D 1 may comprise a first memory 2021 , a free bits register 2022 , an absolute minimum calculation unit 2023 , a first codeword register 2024 and a bit flipping unit 2025 .
- Block D 2 may comprise a second memory 2026 , a cost calculation and comparison unit 2027 and a second codeword register 2028 .
- the first codeword register 2024 is connected to the cost calculation and comparison unit 2027 via data link 2029 .
- Data processing in decoder 2020 may be performed as described above, e.g. according to the decoding method exemplified in FIGS. 4 or 5 . Reliability information of the received symbols is stored in the first memory 2021 .
- Soft output values ỹ of a previous decoder or log-likelihood ratios y or both may e.g. be used.
- Free bits register 2022 may hold the aforementioned set of indices “FreeBits” indicating those bits of packing P which currently are not yet fixed.
- the absolute minimum calculation unit 2023 calculates the bit to be flipped according to equation (6). In other words, if a bit in a node has to be flipped, the absolute minimum calculation unit 2023 calculates this bit, namely the bit having the index j. This index j (i.e. bit position) is reported to the bit flipping unit 2025 .
- the bit flipping unit 2025 accesses first codeword register 2024 and flips the bit with index j (being a bit of the current node added to packing P).
- first codeword register 2024 stores the m-packing P, i.e. codeword x.
- Codeword x forms the output of the (inner) algorithm B 1 for one run.
- the absolute minimum calculation unit 2023 , the free bits register 2022 and the bit flipping unit 2025 form a selection unit configured to select groups of bits associated with a specific parity-check based on the reliability information obtained from first memory 2021 .
- the first codeword register 2024 is then used to compose the codeword x out of such selected groups of bits.
- m codewords x are passed to the cost calculation and comparison unit 2027 of block D 2 .
- the cost calculation and comparison unit 2027 performs a calculation of equation (7) for each codeword x by using e.g. log-likelihood ratios y as stored in the second memory 2026 .
- the codeword x having maximum cost is passed to the second codeword register 2028 as maximum codeword x̂.
- FIG. 7 is a schematic representation of a decoder 2050 according to one embodiment.
- the decoder 2050 may comprise a first decoder 2051 , a second decoder 2052 and a maximum unit 2053 .
- the second decoder 2052 may be represented by block D 1 of the decoder 2020 of FIG. 6
- the maximum unit 2053 may be represented by block D 2 of decoder 2020 of FIG. 6 .
- the first decoder 2051 may be a soft output decoder providing soft output values ỹ.
- a belief propagation (BP) decoder could be used.
- the first decoder 2051 may perform a first decoding process on the basis of received log-likelihood ratios y.
- the soft output values ⁇ tilde over (y) ⁇ of the first decoder 2051 are then used as input values to the second decoder 2052 .
- the second decoder 2052 may be a VPP decoder. The performance of the second decoder 2052 , which is optionally followed by the maximum unit 2053 , is improved by the pre-decoding performed in the first decoder 2051 .
- the codewords x outputted by the second decoder 2052 are of higher quality than hard decisions of the soft output values ⁇ tilde over (y) ⁇ of the first decoder 2051 .
- the maximum unit 2053 may use the (original) log-likelihood ratio values y rather than the soft output values ⁇ tilde over (y) ⁇ .
- FIG. 8 shows simulation results for an example block code.
- The decoding performance is provided in terms of word error rate (WER) and is plotted versus the energy per bit to noise power spectral density ratio Eb/N0 in units of dB.
- The block code 96.3.963 used in the simulation of FIG. 8 is disclosed in D. MacKay, Encyclopedia of Sparse Graph Codes (hypertext archive), available online at http://www.inference.phy.cam.ac.uk/mackay/codes/data.html.
Description
- The invention relates to the field of communications and, more specifically, to systems dedicated to decoding of error correction codes such as linear block codes.
- Coding and decoding of data using parity-check codes and/or block codes are techniques frequently used in data transmission systems in order to ensure reliable data transmission. By coding, data is pre-processed in such a way as to allow for reliable data recovery by decoding. Decoding is the process of reconstructing transmitted data which has been coded into codewords and transmitted through unreliable media. Many different coding/decoding approaches are available depending on system and transmission channel considerations.
- Aspects of the invention are made more evident by way of example in the following detailed description of embodiments when read in conjunction with the attached drawing figures, wherein
- FIG. 1 is a schematic representation of a communications system comprising a transmitter and a receiver;
- FIG. 2 shows a parity-check matrix H and a Tanner graph representation of the parity-check matrix H;
- FIG. 3 shows a parity-check matrix HD and a configuration graph defined by the parity-check matrix HD;
- FIG. 4 is a flow diagram illustrating a method of decoding;
- FIG. 5 is a flow diagram illustrating a method of decoding;
- FIG. 6 is a schematic representation of a decoder;
- FIG. 7 is a schematic representation of a decoder; and
- FIG. 8 is a graph showing for a first code the word error ratio of a decoder output plotted versus the energy per bit to noise power spectral density ratio.
- In the following description and claims, the terms “coupled” and “connected”, along with derivatives, may be used. It should be understood that these terms may indicate that two elements co-operate or interact with each other regardless of whether or not they are in direct physical or electrical contact with each other. Further, although binary data symbols (i.e. bits) and binary codes are used for the purpose of explanation throughout the following description, higher order data symbols and non-binary codes may equally be applied and such alternatives are contemplated as falling within the scope of the present invention.
- FIG. 1 shows an embodiment of a communications system comprising a transmitter 100 and a receiver 200. The transmitter 100 comprises an encoder 101 and may comprise further data processing units such as a modulator 102, a puncturing device (not shown), etc. The receiver 200 may comprise a demodulator 201, further data processing units such as e.g. a de-puncturing device (not shown), and a decoder 202. An output signal 103 of the transmitter 100 is transmitted via a channel 300 to form an input signal 204 of the receiver 200. - It is to be noted that
FIG. 1 provides a schematic and strongly simplified representation of the transmitter 100 and the receiver 200 of the communications system. Usually, the transmitter 100 and the receiver 200 are equipped with additional data processing units (not shown) which are typically adapted to the specific field of application, the type of the channel 300 and other considerations, such as requirements or constraints. More specifically, the data communications system may be a radio communications system, in particular a mobile radio communications system, in which the channel 300 is represented by a radio link. Examples for such communications systems are digital video broadcasting-satellite (DVB-S) systems such as e.g. the DVB-S2 system, wireless local area network (WLAN) systems such as e.g. WLAN 802.11n, or wireless metropolitan area network (WMAN) systems such as e.g. stipulated in IEEE 802.16. Moreover, embodiments may comprise communications systems in which the channel 300 is established by a wired electrical or an optical link. - In
FIG. 1, data symbols d=(d1, . . . ,dk) to be transmitted to the receiver 200 are fed at input 104 into the encoder 101 of the transmitter 100. In one embodiment, the encoder 101 provides for a forward error correction (FEC) in the communications system. In one embodiment, the encoder 101 may use a linear block code for encoding the data bits di, i=1, . . . ,k. For instance, BCH (Bose-Chaudhuri-Hocquenghem) codes, terminated Turbo Codes or LDPC (Low Density Parity-Check) codes may be used. LDPC codes are also known as Gallager codes and are proposed for many future communications systems for FEC as an alternative or complement to Turbo Codes. - Any linear block code can be described by a parity-check matrix H. The parity-check matrix H is of format m×n, i.e. has m rows and n columns. All codewords c=(c1, . . . ,cn) must fulfill the equation
- Hc^T = 0.   (1)
- Thus, c is a codeword of the linear block code represented by the parity-check matrix H if and only if equation (1) is valid. Superscript T refers to the transpose of the indexed quantity.
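In the binary case, equation (1) amounts to requiring that every parity-check sees an even number of ones. A minimal Python sketch (the variable names are ours, and the matrix is the small 2×5 example used later in connection with FIG. 3, not a practical code):

```python
# Sketch of the parity check Hc^T = 0 over GF(2) (binary field).
# Example matrix of the form HD = [D I]: ones at positions {0,1,3} and {1,2,4}.
H = [
    [1, 1, 0, 1, 0],  # parity-check p=0, index set J0 = {0, 1, 3}
    [0, 1, 1, 0, 1],  # parity-check p=1, index set J1 = {1, 2, 4}
]

def is_codeword(H, c):
    """True iff H c^T = 0 in GF(2), i.e. every row sees an even number of ones."""
    return all(sum(h * cj for h, cj in zip(row, c)) % 2 == 0 for row in H)

print(is_codeword(H, [1, 0, 1, 1, 1]))  # True: a valid codeword of this code
print(is_codeword(H, [1, 1, 1, 1, 1]))  # False: row 0 checks 1+1+1, an odd sum
```

For LDPC codes the matrix H is sparse, so each such check only touches a few bits.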
- Codeword generation out of the dataword d may be written as
-
c=dG. (2) - G is a generator matrix of the linear block code and is of k×n format with k=n−m (with k, n, m being integers). As known in the art, one or more generator matrices G may be computed from the parity-check matrix H. If binary data are used (i.e. symbols are represented by bits), operations in equations (1) and (2) are carried out in the binary field GF(2).
- The codeword c is fed into the
modulator 102. Different modulation and/or puncturing schemes may be applied therein. After modulation, optional puncturing and adding of supplementary data, the codeword c is sent out by thetransmitter 100 as a data block. - In one embodiment, the output of the
modulator 102 may be subjected to further data processing in order to prepare thesignal 103 for channel transmission. Such further data processing is not depicted inFIG. 1 . By way of example, further data processing may comprise digital-to-analog conversion, up-conversion of the signal to a radio frequency band, amplification and radiating the amplified signal via a transmit antenna (not shown). - At the
receiver 200, theinput signal 204 is obtained e.g. by a receive antenna (not shown) or aphysical channel 300 and, in one embodiment, may be subjected to signal processing not depicted inFIG. 1 , such as e.g. filtering, down-conversion to an intermediate frequency (IF) band or baseband, analog-to-digital conversion and so on. Thedemodulator 201 may demodulate and optionally de-puncture the receivedsignal 204 and outputs a received vector r. The n elements ri, i=1, . . . ,n of the received vector r may be real-valued. - The
decoder 202 decodes the received vector r to yield the decoded word x. The n elements xi, i=1, . . . ,n of the decoded word x are estimates of the n elements ci, i=1, . . . ,n of codeword c. - The parity-check matrix H can be used to generate a graph which also represents a block code, a so-called Tanner graph.
FIG. 2 illustrates a parity-check matrix H of an LDPC code and the corresponding Tanner graph (for simplicity, the parity-check matrix H shown here is much smaller than the parity-check matrices of practically used LDPC codes). - A Tanner graph is a bipartite graph. The nodes of this graph can be divided into two classes. One class is termed check nodes (C-nodes, drawn as squares) and the other class is termed variable or bit nodes (V-nodes, drawn as circles). The C-nodes correspond to the rows of the parity-check matrix H (e.g. C1 corresponds to
row 1 of H) and the V-nodes correspond to the columns of the parity-check matrix H (e.g. V1 corresponds to column 1). If the corresponding row of a C-node in the H matrix has a 1 at the column v it means that this C-node is connected to the V-node Vv and vice versa. The degree of a node in the Tanner graph is the number of lines connected to this node (e.g. V-node V4 hasdegree 2, C-node C1 has degree 3). - In many cases LDPC codes are decoded using belief propagation (BP) algorithms. BP algorithms are also known as sum-product algorithms or message passing algorithms. These algorithms operate via passing messages through the Tanner graph.
- In the following, a new representation of the decoding problem for linear block codes is presented. The new representation is based on the construction of a configuration graph CG=(V,E). This configuration graph CG will then be used for the construction of a new decoding scheme. V is the set of nodes of the configuration graph CG and E is the set of lines of the configuration graph CG.
- Every linear block code can be represented in the form HD=[D I], where I is the m×m identity matrix and D is an m×(n−m) matrix. Every row p of HD represents a parity-check. Each parity-check p defines an index set Jp of codeword symbols participating in parity-check p. Thus, the index set Jp is defined by the elements of row p of the parity-check matrix HD which are unequal to zero. Each index set Jp of codeword symbols associated with a parity-check p defines a plurality of symbol groups vpi indexed by i, whereby each symbol group vpi is a symbol pattern which satisfies the parity-check p at the index set Jp of codeword symbols.
- In the following, a symbol group vpi is referred to as a node vpi. V is defined as the set of all nodes, V={vpi}, for all parity-checks p and all symbol patterns i satisfying parity-check p. The set E of lines of the configuration graph CG is built according to the following rule: two nodes vji and vkl are interconnected if at least one symbol of node vji and one symbol of node vkl, located at a common position of the index sets Jj and Jk, are different.
- A set of nodes wherein no node of the set is connected to any other node of the set is referred to as a vertex packing. The construction principle explained above ensures that any vertex packing with m elements (i.e. with m nodes) represents a codeword. A vertex packing with m elements is termed m-packing in the following. Thus, any m-packing is a (valid) codeword.
- Based on a configuration graph CG, new decoding schemes are available. According to the above, a configuration graph CG may be illustrated by m rows. Each row is associated with a specific parity-check p, i.e. with a row of the parity-check matrix HD. Each row p of the configuration graph CG is represented by the set of nodes vpi associated with parity-check p. Nodes which are incompatible (i.e. which cannot simultaneously be elements of an m-packing, i.e. a codeword) are interconnected. On the other hand, nodes vpi with p=0, . . . ,m−1 which are not interconnected can always be composed to form a codeword. Thus, the configuration graph CG is a representation of a rule teaching how to construct a codeword from nodes vpi (i.e. symbol groups) associated with index sets Jp of codeword symbol positions. Such composition of an m-packing (codeword) may be carried out step by step by picking up one vpi node after the other.
- By assigning weights to the nodes vpi of the configuration graph CG according to the received symbols, decoding and in particular maximum likelihood (ML) decoding can be performed by finding the maximum-weighted vertex packing. This problem will be referred to as maximum-weighted vertex packing problem (VPP). As will be described later, optimum ML decoders and sub-optimum ML decoders using the decoding scheme described above are feasible.
- Generally, methods of decoding described herein allow for decoding of binary linear block codes as well as higher order linear block codes. For the sake of simplicity and without loss of generality, binary block codes are considered in the following. In this embodiment, the elements of the parity-check matrix and the symbols are bits, i.e. 0 or 1. A parity-check p can only be satisfied with an even number of participating bits set to 1. Thus, a node (bit group) vpi always has an even number of
value 1 bits. If, for example, row p of parity-check matrix HD has a number of zp entries of value 1, the set of nodes associated with parity-check p is given by the set of all zp-tuples satisfying the condition to have an even number of bits of value 1. - An example of a configuration graph CG associated with parity-check matrix
- HD = [1 1 0 1 0; 0 1 1 0 1]
- is illustrated in
FIG. 3 . Index set J0 is given by the bits at 0, 1, 3 in row p=0 of matrix HD having thepositions value 1. Index set J1 is given by the bits at 1, 2, 4 in row p=1 of matrix HD having thepositions value 1. Here, e.g. node v12 represents bit group (101) associated with the index set J1 of setting the 1 and 4 of the codeword to 1 and thebits bit 2 of the codeword to 0. Because parity-check p=0 and parity-check p=1 have thebit 1 atposition 1 in common, node v12 is connected to the nodes v00 and v02, i.e. bit groups (000) and (101). These nodes v00 and V02 correspond to combinations of bits satisfying parity-check p=0 but assuming the bit atposition 1 is 0, i.e. are incompatible with node v12. In this example, every set consisting of two nodes vpi, p=0,1 and i=0, . . . ,3 which are not connected corresponds to a codeword. - It is to be noted that the parity-check matrix underlying the configuration graph CG need not be a matrix of the form HD as defined above. It could also be a matrix HD having a triangle sub-matrix D or any other adequate parity-check matrix H representing the block code used.
- Based on the configuration graph CG various decoding algorithms are available. Considering ML decoding, the most likely codeword xML maximizes the expression (assuming a binary linear code and a binary modulation scheme)
-
- x_ML = argmax_{x ∈ C} Σ_{i=1, . . . ,n} (−2x_i + 1) y_i   (3)
-
- Here, C denotes the set of (valid) codewords.
- Let
-
- c(P) = Σ_{v_i ∈ P} w_i   (5)
- In one embodiment, the solution of the maximum-weighted vertex packing problem (VPP) may require high computational expenses. In the following embodiment, more simple algorithms for vertex packing decoding are provided.
- Because of the underlying structure of the configuration graph CG, in one embodiment, it is possible to perform decoding by using only a partial structure of the configuration graph CG rather than the whole configuration graph CG. This approach uses the concept to find the maximum node (i.e. symbol group) of a parity-check p in the configuration graph CG depending on maximum nodes already decided in previous parity-checks and reliability information of received data. In particular, when using a greedy algorithm for decoding, in one embodiment, only the “next maximum possible node vpi” may be of interest. An embodiment of a decoding process is illustrated in
FIG. 4 . - The process starts with a packing P={ } having no elements (nodes). A specific parity-check p=StartPC is selected for the first iteration, e.g. p=0, at 401. Then, the parity-check p is started at 402.
- The maximum node of parity-check p, which is not connected to any node in P, is computed at 403. In the first iteration, P={ } and therefore, any node associated with p=StartPC is possible. The maximum node may be calculated based on weights associated with the nodes of parity-check p. More specifically, the maximum node of parity-check p may be the node vpi having maximum weight wi. The weights wi may be calculated from any reliability information on received data such as e.g. log-likelihood ratios y=(y1, . . . ,yn) (see equation (4)) or soft output values {tilde over (y)} of a previous decoder, as will be explained in more detail later. Another possible variant of the algorithm could be to generate codewords by not only selecting the maximum nodes of a parity-check, but also by selecting those with the second highest cost or any other nodes, using e.g. local search techniques.
- The maximum node is added to packing P at 404. In the first iteration, packing P now contains one node.
- The symbols (e.g. bits) of the maximum added node may be fixed at 405. If the symbols are fixed, the algorithm may be configured to never reconsider its choice in one embodiment. Such an algorithm is often referred to as a greedy algorithm. It is to be noted that the algorithm may also be devised to reconsider choices, i.e. to change one or more symbols of a maximum node which has already been added to packing P.
- After the first iteration, a new parity-check p is selected at 406, e.g. p=1. This new parity-check is then started at 402 and the process described above is repeated for this new parity-check. Note that the maximum node of the current parity-check is obtained amongst those nodes whose symbols coincide with the already fixed symbols, because the maximum node of the current parity-check p is obtained amongst those nodes which are not connected in the configuration graph CG to any other node in packing P. Thus, the maximum node of the current parity-check p is selected based on the already selected node(s) and on reliability information on received data.
- This new maximum node is then added to P at 404 and optionally, the symbols thereof are fixed at 405. This process is repeated until at 407, it is found that all parity-checks have been processed. When all parity-checks have been processed according to the above scheme, packing P is a vertex packing with m elements (i.e. is an m-packing) and thus represents a codeword. This codeword is then output at 408.
- A detailed example of a decoding algorithm using the decoding scheme of
FIG. 4 is shown in FIG. 5. Here, a binary representation of received data and code is used for ease of explanation. First, only block B1 of the algorithm is described. - For the sake of brevity, only some of the steps shown in
FIG. 5 are discussed in detail in the following. After initialization at 501 and 502, without loss of generality, in the first iteration step i=0, p is set to StartPC, e.g. p=0 (act 503). All bits participating in parity-check p (i.e. the bits defined by Jp) are derived at 504, 505. The hard decision xj of all bits participating in parity-check p is calculated from reliability information, e.g. log-likelihood ratios y or soft output values {tilde over (y)} of a previous decoder, at 506. This hard decision xj of all bits participating in parity-check p may be considered to represent the maximum node if this node satisfies the parity-check p and is not interconnected to any other node in packing P. At 507, 508, 509 and 510, the number O of “ones” in the node under consideration is counted. If this number is even, the node satisfies the parity-check p and therefore the bits thereof may be fixed. A set of indices termed FreeBits indicates those bits of the packing P which currently are not yet fixed. In the first iteration step i=p=0, FreeBits={1, . . . ,n}, i.e. no bits are fixed. Therefore, if the number of ones of the hard decision is even, the hard decision satisfies the parity-check p and may thus be considered to already represent the maximum node (bit group) of this parity-check p. Otherwise, if the number of ones of the hard decision is odd, the bit with the minimum absolute value (minimum reliability) is identified and its sign is flipped (at 511, 512); the resulting hard decision may then be considered to represent the maximum node of the parity-check p. In both cases, the bits of this maximum node are then fixed by updating the set FreeBits at 513. The set FreeBits is updated by FreeBits=FreeBits\Jp, i.e. the bits at the positions indicated by Jp are excluded. As mentioned before, Jp is the set of indices of bits participating in the current parity-check p. In the first iteration step i=p=0, the bits at the bit positions J0 are fixed.
- In the following iteration steps, this scheme is repeatedly applied to get the proper maximum nodes of the other parity-checks in such a way that the set of nodes always remains a valid vertex packing. To do so, in the next iteration step i=i+1 (act 514), the hard decision of the bits of a new parity-check p+1 is taken, considering the already fixed bits. If the number of ones of this hard decision (including the already fixed bits, if any) is even, this hard decision satisfies the new parity-check p+1 and may thus be considered to already represent the maximum node (bit group) related to this parity-check. However, if the number of ones of this hard decision (including the already fixed bits, if any) is odd, in order to maintain an even number of ones, this time only that bit is flipped which has the lowest reliability (e.g. the lowest absolute value of the log-likelihood ratio or soft output value) amongst those bits which are not already fixed by another parity-check. The index j of this bit may be calculated according to 511 by
- j = argmin_{l ∈ FreeBits ∩ Jp} | y_l |   (6)
- It is to be noted that it is also possible to flip more than one bit. As an example, in order to maintain an even number of ones, the three, five, etc. bits having the lowest reliability amongst those bits which are not already fixed by another parity-check may be flipped. Decoding according to block B1 of the algorithm may be accomplished if all parity-checks are processed, i.e. i=m−1 (i.e. p=m−1), see 515. The m-packing yielded by this decoding scheme is a (valid) codeword of local maximum cost.
- In more general terms, the above algorithm relies on the following concept. First, the part of the reliability information associated with a first parity-check p is translated into a first hard-decided group of symbols. Then, it is checked whether or not the first hard-decided group of symbols satisfies the first parity-check p. As explained above, in case the symbols are bits, this check may e.g. simply be performed by evaluating whether or not the number of ones of the hard-decided group of bits is even or odd. If the first parity-check is satisfied, the first hard-decided group of symbols is selected as a node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed). If, however, the first parity-check is not satisfied, one or more symbols of the first hard-decided group of symbols are changed based on the reliability information such that the first parity-check p is then satisfied. The first hard-decided group of symbols including the changed symbol is then selected as a node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed). That way, the first group of symbols (first node) relating to the first parity-check is found. In the next iteration step, the part of the reliability information associated with the second parity-check is translated into a second hard-decided group of symbols. It is checked whether or not a second hard-decided group of symbols, which is compatible with the selected first group of symbols, satisfies the second parity-check. 
If the second parity-check is satisfied, this compatible second hard-decided group of symbols is selected as a second node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed). However, if the second parity-check is not satisfied, one or more “free” (i.e. not yet fixed) symbols of the compatible second hard-decided group of symbols are changed based on the reliability information such that the second parity-check is then satisfied. This second hard-decided group of symbols including the changed symbol is then selected as a second node of packing P and its symbols may be fixed (or may only be fixed after being verified and possibly changed at a later time when one or more following parity-checks have been processed). The codeword may then be composed group-by-group out of these groups of symbols.
- Several variants may be applied to this process. The groups of symbols (i.e. nodes of CG) may be associated with a plurality of parity-checks rather than exactly one parity-check. By way of example, if one group of symbols is associated with e.g. two parity-checks, this group may be arranged by combining two groups of symbols each associated with one of these parity-checks. Another possibility is to redefine the configuration graph CG in a way that a row of the configuration graph CG is associated with a plurality of parity-checks. The set of indices Jp associated with this “combinatorial” parity-check p may be given by the set union of the sets of indices relating to the individual parity-checks participating to the “combinatorial” parity-check p. As a result, groups of symbols as referred herein (as well as nodes of the CG) may relate to different numbers of parity-checks.
- In the following, a decoding example for the small code shown in
FIG. 3 is presented by way of example. It is assumed that the reliability values (e.g. the log-likelihood ratios yi of the received data signal) have been calculated at the receiver 200 as y=[−5,−1,1,−5,−6]. It is further assumed that the algorithm starts at parity-check p=0. The bits at positions 0, 1 and 3 participate in parity-check p=0. The reliability values of the corresponding received data are −5, −1, −5. The hard decision of these reliability values is 1, 1, 1. Because the number of ones in the hard decision is odd, one bit should be flipped, according to the above algorithm the one with the lowest absolute reliability value, here the bit at position 1 (i.e. having index 1). This leads to the hard decision 1, 0, 1, which corresponds to node v02 in the configuration graph CG of FIG. 3. So, packing P={v02}. The following bits are fixed in the resulting codeword: [1, 0, x, 1, x]. Here, x refers to a bit which is not yet fixed. Thus, FreeBits={2, 4}.
1, 2 and 4 are participating to parity-check p=1.positions Bit position 1 is common to parity-checks p=0 and p=1. The reliability values of the corresponding received data are −1, 1, −6. The hard decision of these values using bit atposition 1 be fixed to zero from the first step is 0, 0, 1. Again the number of ones in the hard decision is odd, so those bit which has the smallest absolute reliability value but is not fixed by previous steps is to be flipped. Here this is bit atposition 2. So the resulting hard decision after flipping is 0, 1, 1, which corresponds to node v11 in the configuration graph CG. Thus, P={v02, v11} and the resulting codeword is [1,0,1,1,1]. - Many variants of this algorithm are possible. The algorithm relies on making a locally optimum choice (of the maximum node) at each stage of iteration with the hope of finding the global optimum (i.e. ML decoding). For instance, the degree of locality may be varied. In the above example, the computation of the maximum node only relies on reliability information (e.g. log-likelihood ratios or soft output values) associated with the node of the current parity-check. However, reliability information used for computing a maximum node may also consider the “potential” of this selection in view of subsequent parity-checks, i.e. reliability information associated with parity-checks to be done later. Further, as already mentioned above, the algorithm may reconsider choices made in the (near) past, resulting e.g. in that bits of an added maximum node may not necessarily be irrevocably fixed for the next parity-check.
- Typically, the outputted m-packing P of the algorithm described above is dependent on the order of parity-checks used in the iteration process. Thus, the algorithm may be repeated a number of times, every time starting from another parity-check. For instance, the algorithm may be started m times, i.e. from each parity-check and/or using different processing sequences of the parity-checks. This leads to a list of m codewords. Amongst these codewords, the codeword with the maximum cost may be chosen as a decoding result. It is to be noted that, inter alia, a parity-check matrix of the form HD=[D I], with I is the m×m identity matrix, guarantees that an arbitrary order of parity checks p in the repeatedly performed algorithm may be used. On the other hand, a parity-check matrix of the form HD=[D T], with T is a triangular matrix may be used and may have advantages, e.g. in terms of storage memory needed, but may require special processing orders for the discussed algorithms.
- Block B2 of
FIG. 5 illustrates a flow chart showing a specific example of an extended (i.e. repeatedly performed) algorithm as outlined above. The steps in block B2 relate to the “outer” algorithm relying on repeated executions of the “inner” algorithm of block B1, and a selection of the decoded codeword x as the codeword having maximum cost amongst the codewords x yielded by the inner algorithm. Subsequent iterations of the outer algorithm are initiated by updating StartPC=StartPC+1, see act 530. Here, as a simple example, an incremental processing order of parity-checks is used. The cost of a codeword x may be computed at 531 on the basis of the log-likelihood ratio vector y according to
- Cost(x) = Σ_j (−2x_j + 1) y_j.   (7)
- At 532, it is checked whether or not the cost of the current codeword x is greater than the maximum cost “MaxCost” of all codewords computed so far. If yes, codeword x and thus MaxCost are updated at 533.
- As the time complexity of the (inner) algorithm depicted in FIG. 4 or block B1 in FIG. 5 is O(m), starting the algorithm m times results in a time complexity of O(m²) for the extended (i.e. repeatedly performed) algorithm. This low complexity makes the extended algorithm especially interesting for high-rate codes. -
FIG. 6 is a schematic representation of a decoder 2020 which may be used as a decoder 202 in FIG. 1 according to one embodiment. The decoder 2020 may comprise a block D1 which is configured to perform e.g. calculations relating to the (inner) algorithm as depicted in FIG. 4 or FIG. 5 (steps in block B1), and may optionally be equipped with a block D2 which is configured to perform e.g. calculations relating to the extended algorithm such as e.g. depicted in block B2 of FIG. 5.
- Block D1 may comprise a first memory 2021, a free bits register 2022, an absolute minimum calculation unit 2023, a first codeword register 2024 and a bit flipping unit 2025. Block D2 may comprise a second memory 2026, a cost calculation and comparison unit 2027 and a second codeword register 2028. The first codeword register 2024 is connected to the cost calculation and comparison unit 2027 via data link 2029. Data processing in decoder 2020 may be performed as described above, e.g. according to the decoding method exemplified in FIGS. 4 or 5. Reliability information of the received symbols is stored in the first memory 2021. Soft output values ỹ of a previous decoder or log-likelihood ratios y or both may e.g. be used. Free bits register 2022 may hold the aforementioned set of indices “FreeBits” indicating those bits of packing P which are not yet fixed. The absolute minimum calculation unit 2023, in each iteration step, determines the bit to be flipped according to equation (6). In other words, if a bit in a node has to be flipped, the absolute minimum calculation unit 2023 identifies this bit, namely the bit having the index j. This index j (i.e. bit position) is reported to the bit flipping unit 2025. The bit flipping unit 2025 accesses the first codeword register 2024 and flips the bit with index j (being a bit of the current node added to packing P). When all parity-checks are processed, the first codeword register 2024 stores the m-packing P, i.e. codeword x. Codeword x forms the output of the (inner) algorithm B1 for one run. Thus, the absolute minimum calculation unit 2023, the free bits register 2022 and the bit flipping unit 2025 form a selection unit configured to select groups of bits associated with a specific parity-check based on the reliability information obtained from the first memory 2021. The first codeword register 2024 is then used to compose the codeword x out of such selected groups of bits.
- When this (inner) algorithm is repeated e.g. m times, m codewords x are passed to the cost calculation and comparison unit 2027 of block D2. The cost calculation and comparison unit 2027 performs the calculation of equation (7) for each codeword x, using e.g. the log-likelihood ratios y stored in the second memory 2026. The codeword x having maximum cost is passed to the second codeword register 2028 as maximum codeword x̂. -
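The interplay of the free bits register 2022, the absolute minimum calculation unit 2023 and the bit flipping unit 2025 can be sketched as follows. This is a hedged reconstruction, not the patent's exact pseudocode: the selection rule of equation (6) is assumed to pick, among the still-free bits of a violated parity-check, the one with the smallest absolute reliability |y_j|; the function `inner_run`, the matrix H and all values below are illustrative.

```python
def inner_run(H, y, order):
    """One run of an inner algorithm in the spirit of block B1 (sketch):
    process the parity checks in the given order; if a check is violated
    by the current hard decisions, flip the still-free bit with the
    smallest |y_j| (the least reliable one), then fix the check's bits."""
    n = len(y)
    x = [1 if yj < 0 else 0 for yj in y]   # hard decisions from the LLRs
    free = set(range(n))                    # the "FreeBits" index set
    for p in order:
        idx = [j for j in range(n) if H[p][j]]
        if sum(x[j] for j in idx) % 2 == 1:           # parity violated
            free_idx = [j for j in idx if j in free]
            if free_idx:
                j = min(free_idx, key=lambda j: abs(y[j]))
                x[j] ^= 1                              # flip least reliable bit
        free -= set(idx)                               # fix this check's bits
    return x

# Toy H = [D I] and LLRs (illustrative only):
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]
y = [0.5, -1.0, 0.3, 2.0, -0.2, 0.8, 1.5]
x = inner_run(H, y, order=[0, 1, 2])
assert all(sum(H[p][j] * x[j] for j in range(7)) % 2 == 0 for p in range(3))
```

With a matrix of the form [D I], the dedicated parity column of each check stays free until that check is processed, so every check can be satisfied and the run always returns a valid codeword.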
FIG. 7 is a schematic representation of a decoder 2050 according to one embodiment. The decoder 2050 may comprise a first decoder 2051, a second decoder 2052 and a maximum unit 2053. Note that the second decoder 2052 may be represented by block D1 of the decoder 2020 of FIG. 6 and the maximum unit 2053 may be represented by block D2 of decoder 2020 of FIG. 6.
- The first decoder 2051 may be a soft output decoder providing soft output values ỹ. For example, a belief propagation (BP) decoder could be used. The first decoder 2051 may perform a first decoding process on the basis of received log-likelihood ratios y. The soft output values ỹ of the first decoder 2051 are then used as input values to the second decoder 2052. The second decoder 2052 may be a VPP decoder. The performance of the second decoder 2052, which is optionally followed by the maximum unit 2053, is improved by the pre-decoding performed in the first decoder 2051. That is, the codewords x outputted by the second decoder 2052 are of higher quality than hard decisions of the soft output values ỹ of the first decoder 2051. Note that for the evaluation of the maximum codeword x̂, the maximum unit 2053 may use the (original) log-likelihood ratio values y rather than the soft output values ỹ. -
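The cascade of FIG. 7 can be wired up structurally as follows. This is a hedged sketch under stated assumptions: `cascade_decode`, the stub components and all values are illustrative, not the patent's implementation; the only detail taken from the text is that the final maximum-cost selection uses the original LLRs y rather than the soft outputs ỹ.

```python
def cost(x, y):
    """Equation (7), evaluated on the ORIGINAL LLRs y."""
    return sum((-2 * xj + 1) * yj for xj, yj in zip(x, y))

def cascade_decode(H, y, soft_decoder, inner_run):
    # First decoder: pre-decode to soft outputs y~ ...
    y_tilde = soft_decoder(y)
    # ... second decoder: run the inner algorithm once per starting
    # parity check on the soft outputs y~ ...
    m = len(H)
    orders = [list(range(s, m)) + list(range(s)) for s in range(m)]
    candidates = [inner_run(H, y_tilde, order) for order in orders]
    # ... maximum unit: score the candidates with the original y.
    return max(candidates, key=lambda x: cost(x, y))

# Stub components just to make the wiring runnable (NOT real decoders):
identity_soft = lambda y: y
hard_inner = lambda H, y, order: [1 if yj < 0 else 0 for yj in y]

x_hat = cascade_decode([[1, 0, 1], [0, 1, 1]], [0.5, -1.0, 0.3],
                       identity_soft, hard_inner)
```

In the arrangement of FIG. 7, `soft_decoder` would be the BP decoder 2051 and `inner_run` the VPP algorithm of block B1, with the maximum unit 2053 performing the final `max` selection.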
FIG. 8 shows simulation results for an example block code. The decoding performance is given in terms of word error rate (WER) and is plotted versus the energy per bit to noise power spectral density ratio E_b/N_0 in units of dB. The block code 96.3.963 used in the simulation of FIG. 8 is disclosed in D. MacKay, Encyclopedia of Sparse Graph Codes (hypertext archive), available online at: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html. By way of example, code 96.3.963 is represented by a parity-check matrix H having n=96 columns and m=48 rows. These and all other codes disclosed in this document may be used in encoder 101 for encoding and are incorporated in this document by reference. The simulations have been performed with the decoder 2050 of FIG. 7, i.e. a cascade of a BP decoder 2051 and a VPP decoder 2052 followed by the maximum unit 2053. The BP decoder 2051 was run with at most 50 iterations, and the VPP decoder 2052 used the algorithm shown in FIG. 5. In FIG. 8, the performance of the BP decoder 2051 alone (line 96.3.963_BP—50) is compared to the performance of the combined BP-VPP decoder 2050 (line 96.3.963_VPPPSnew). For the code 96.3.963, a performance improvement in terms of WER of up to 0.6 dB is obtained, see FIG. 8. Thus, VPP post-decoding may significantly improve the decoding performance of a soft output decoder.
- While a particular feature or aspect of an embodiment of the invention may have been disclosed with respect to only one of several implementations, such feature or aspect may be combined with one or more other features or aspects of the other implementations as may be desired and advantageous for any given or particular application.
Claims (23)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/199,414 US8190977B2 (en) | 2008-08-27 | 2008-08-27 | Decoder of error correction codes |
| DE102009035921.4A DE102009035921B4 (en) | 2008-08-27 | 2009-08-03 | Decoder for error correction codes |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/199,414 US8190977B2 (en) | 2008-08-27 | 2008-08-27 | Decoder of error correction codes |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20100058149A1 true US20100058149A1 (en) | 2010-03-04 |
| US8190977B2 US8190977B2 (en) | 2012-05-29 |
Family
ID=41727092
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/199,414 Active 2031-03-30 US8190977B2 (en) | 2008-08-27 | 2008-08-27 | Decoder of error correction codes |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US8190977B2 (en) |
| DE (1) | DE102009035921B4 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8495479B1 (en) * | 2010-11-22 | 2013-07-23 | Marvell International Ltd. | Defect detection and correction via monitoring of syndromes and bit flips in decoder |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100703271B1 (en) | 2004-11-23 | 2007-04-03 | 삼성전자주식회사 | Low density parity check code decoding method and apparatus using integrated node processing |
- 2008-08-27 US US12/199,414 patent/US8190977B2/en active Active
- 2009-08-03 DE DE102009035921.4A patent/DE102009035921B4/en not_active Expired - Fee Related
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6581179B1 (en) * | 1996-06-25 | 2003-06-17 | Ericsson Inc. | Methods for generating side information in the presence of time-selective fading |
| US20070033509A1 (en) * | 1999-12-24 | 2007-02-08 | Gupta Alok K | Method and apparatus for concatenated channel coding with variable code rate and coding gain in a data transmission system |
| US7293223B2 (en) * | 1999-12-24 | 2007-11-06 | Alok Kumar Gupta | Method and apparatus for concatenated channel coding with variable code rate and coding gain in a data transmission system |
| US20050210358A1 (en) * | 2002-05-31 | 2005-09-22 | Koninklijke Phillips Electronics N.V. | Soft decoding of linear block codes |
| US20090132894A1 (en) * | 2007-11-19 | 2009-05-21 | Seagate Technology Llc | Soft Output Bit Threshold Error Correction |
| US20090132897A1 (en) * | 2007-11-19 | 2009-05-21 | Seagate Technology Llc | Reduced State Soft Output Processing |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2508435A (en) * | 2012-12-03 | 2014-06-04 | Canon Kk | Decoding of data received from one source by several receivers |
| GB2508435B (en) * | 2012-12-03 | 2015-05-27 | Canon Kk | Method and device for improving decoding of data received from one source by several receivers |
| US9281842B2 (en) | 2012-12-03 | 2016-03-08 | Canon Kabushiki Kaisha | Method and device for improving decoding of data received from one source by several receivers |
| US10762426B2 (en) * | 2016-08-12 | 2020-09-01 | Beijing Deephi Intelligent Technology Co., Ltd. | Multi-iteration compression for deep neural networks |
| CN109586731A (en) * | 2017-09-29 | 2019-04-05 | 奈奎斯特半导体有限公司 | System and method for decoding and error code |
| US20200036395A1 (en) * | 2018-07-27 | 2020-01-30 | Nyquist Semiconductor Limited | Systems and methods for decoding error correcting codes with self-generated llr |
| US10715182B2 (en) * | 2018-07-27 | 2020-07-14 | Innogrit Technologies Co., Ltd. | Systems and methods for decoding error correcting codes with self-generated LLR |
| WO2023205345A3 (en) * | 2022-04-21 | 2023-11-30 | Twist Bioscience Corporation | Codecs for dna data storage |
Also Published As
| Publication number | Publication date |
|---|---|
| US8190977B2 (en) | 2012-05-29 |
| DE102009035921B4 (en) | 2018-03-08 |
| DE102009035921A1 (en) | 2010-04-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8438459B2 (en) | Apparatus and method for decoding using channel code | |
| US7992066B2 (en) | Method of encoding and decoding using low density parity check matrix | |
| CN1866751B (en) | Construction method and device for low density parity codes | |
| US8010869B2 (en) | Method and device for controlling the decoding of a LDPC encoded codeword, in particular for DVB-S2 LDPC encoded codewords | |
| US7500172B2 (en) | AMP (accelerated message passing) decoder adapted for LDPC (low density parity check) codes | |
| JP4820368B2 (en) | Encoding and decoding method using LDPC code | |
| US20240063826A1 (en) | Method and apparatus for channel encoding and decoding in communication or broadcasting system | |
| US9432052B2 (en) | Puncture-aware low density parity check (LDPC) decoding | |
| US8468430B2 (en) | Product code decoding method and device | |
| US10637510B2 (en) | Methods and devices for error correcting codes decoding | |
| US20110010602A1 (en) | Method and apparatus for performing decoding using ldpc code | |
| EP3293885B1 (en) | Check node processing for syndrome computation in the decoding of non-binary codes, in particular non-binary ldpc codes | |
| JP4598085B2 (en) | Check matrix generation method | |
| US10374632B2 (en) | Low density parity check coded modulation for optical communications | |
| US20160142074A1 (en) | Structure and decoder architecture of a class of low-density parity-check code | |
| WO2008016117A1 (en) | Inspection matrix generation method, encoding method, communication device, communication system, and encoder | |
| US8190977B2 (en) | Decoder of error correction codes | |
| US7752524B2 (en) | Method and device for decoding DVB-S2 LDPC encoded codewords | |
| CN100583650C (en) | Encoding and decoding method and device using LDPC code | |
| KR20150076583A (en) | Method and apparatus for decoding of nonbinary parity-check codes in broadcasting and communication systems | |
| JP4832447B2 (en) | Decoding apparatus and method using channel code | |
| CN101595644B (en) | Apparatus and method for decoding using channel code | |
| US20080270877A1 (en) | Method of Encoding and Decoding Using Low Density Parity Check Code | |
| JP5523064B2 (en) | Decoding apparatus and method | |
| Zolotarev et al. | Usage of divergence within concatenated multithreshold decoding convolutional codes |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: INFINEON TECHNOLOGIES AG,GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LUNGLMAYR, MICHAEL;BERKMANN, JENS;REEL/FRAME:021451/0446 Effective date: 20080827 Owner name: INFINEON TECHNOLOGIES AG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LUNGLMAYR, MICHAEL;BERKMANN, JENS;REEL/FRAME:021451/0446 Effective date: 20080827 |
|
| AS | Assignment |
Owner name: INTEL MOBILE COMMUNICATIONS TECHNOLOGY GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INFINEON TECHNOLOGIES AG;REEL/FRAME:027548/0623 Effective date: 20110131 |
|
| AS | Assignment |
Owner name: INTEL MOBILE COMMUNICATIONS GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTEL MOBILE COMMUNICATIONS TECHNOLOGY GMBH;REEL/FRAME:027556/0709 Effective date: 20111031 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: INTEL DEUTSCHLAND GMBH, GERMANY Free format text: CHANGE OF NAME;ASSIGNOR:INTEL MOBILE COMMUNICATIONS GMBH;REEL/FRAME:037057/0061 Effective date: 20150507 |
|
| FPAY | Fee payment |
Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
| AS | Assignment |
Owner name: INTEL CORPORATION, CALIFORNIA Free format text: CONFIRMATORY ASSIGNMENT EFFECTIVE AS OF JANUARY 1, 2018;ASSIGNOR:INTEL DEUTSCHLAND GMBH;REEL/FRAME:053477/0001 Effective date: 20200615 |
|
| AS | Assignment |
Owner name: APPLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTEL CORPORATION;REEL/FRAME:053518/0586 Effective date: 20191130 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |