
HK1165111A1 - Decoding method - Google Patents


Info

Publication number
HK1165111A1
Authority
HK
Hong Kong
Prior art keywords
branchwords
block
encoder
received
state
Prior art date
Application number
HK12105209.2A
Other languages
Chinese (zh)
Other versions
HK1165111B (en)
Inventor
黃沛昌
黄沛昌
多布里卡.瓦西奇
Original Assignee
联想创新有限公司(香港)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2008906238A external-priority patent/AU2008906238A0/en
Application filed by 联想创新有限公司(香港) filed Critical 联想创新有限公司(香港)
Publication of HK1165111A1 publication Critical patent/HK1165111A1/en
Publication of HK1165111B publication Critical patent/HK1165111B/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/413Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors tail biting Viterbi decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/4161Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management
    • H03M13/4169Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management using traceback

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

The decoding method comprises storing, successively performing, determining, first performing, second performing, and outputting. The storing stores the N received branchwords in memory. The successively performing performs Viterbi updates on a sequence of branchwords. The determining determines a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric. The first performing performs a Viterbi traceback procedure from the first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords. The second performing performs a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords. The outputting outputs a derived tail-biting path, if the second and third encoder states are identical.

Description

Decoding method
Technical Field
The present invention relates to decoding branchwords generated by a convolutional encoder using a tail-biting convolutional code.
Background
When an information signal is communicated from a transmitter to a receiver over a communication channel, it may be corrupted by channel noise. Channel coding techniques may be employed to protect the communicated information from such corruption. In general, coding mitigates the effects of channel noise by introducing redundancy into the communicated information; this redundancy reduces the likelihood that noise will corrupt the information.
A convolutional code is a channel code for mitigating the effects of channel noise in information transmission. Convolutional codes are well known in the art and are used as standards for certain types of communication systems. One type of convolutional code is known in the art as a tail-biting convolutional code.
Frames or blocks of information are encoded and communicated in a block-wise fashion using a tail-biting convolutional code. The term "tail-biting" means that the encoder starts and ends with the same encoder state. The decoder knows that the encoder starts and ends with the same state, but does not know the value (or identity) of the state.
In the art, a maximum likelihood decoder for a convolutional code is called a Viterbi decoder. As is well known, a Viterbi decoder decodes a sequence of received symbols by finding the most likely uncorrupted symbol sequence, given the actually received (and possibly corrupted) sequence. A maximum likelihood decoder for a tail-biting convolutional code employs Viterbi decoding, but places great demands on computational resources; conversely, if the computational effort is minimized, the accuracy of the Viterbi decoding is reduced.
Disclosure of Invention
The present invention is directed to solving one or more of the problems set forth above, or at least improving upon such problems.
In one embodiment of the present invention, there is provided a decoding method of decoding N received branchwords generated by a convolutional encoder using a tail-biting convolutional code, the decoding method including: storing the N received branchwords in a memory; performing Viterbi updates successively on a sequence of branchwords, the sequence comprising a first block comprising S successive branchwords of the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T successive branchwords of the N received branchwords, where S and T are less than N, and the Viterbi updates generate updated path metrics; determining, from the best path metric, a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence; performing a Viterbi traceback procedure a first time from the first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; performing a Viterbi traceback procedure a second time from the second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting the derived tail-biting path if the second encoder state and the third encoder state are the same.
In another embodiment of the present invention, if the second encoder state and the third encoder state are not the same, the method further comprises: replacing the second encoder state with the third encoder state; repeating the second performing of the Viterbi traceback procedure; and outputting the derived tail-biting path.
Conveniently, the sequence of branchwords on which the Viterbi updates are successively performed is formed by a logical circular reading of the N received branchwords stored in the memory.
In yet another embodiment of the present invention, S is equal to T.
Conveniently, the first block comprises S consecutive branchwords taken from the end of the second block of N received branchwords.
Further, the third block comprises T consecutive branchwords taken from the beginning of the second block of N received branchwords.
Another aspect of the present invention provides a decoding apparatus for decoding N received branchwords generated by a convolutional encoder using a tail-biting convolutional code, the decoding apparatus comprising: a memory storing the N received branchwords; and a data processing unit comprising: a sequential processing unit that successively performs Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S successive ones of the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T successive ones of the N received branchwords, where S and T are less than N, and the Viterbi updates generate updated path metrics; a determination unit that determines, from the best path metric, a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence; a first execution unit that performs a Viterbi traceback procedure starting from the first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; a second execution unit that performs a Viterbi traceback procedure starting from the second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and an output unit that outputs the derived tail-biting path if the second encoder state and the third encoder state are the same.
Drawings
Other characteristics and advantages of the invention will appear from the description of a non-limiting example with reference to the following drawings, in which:
fig. 1 depicts a prior art convolutional encoder.
Fig. 2 illustrates a single state-transition trellis section reflecting the operation of the encoder shown in fig. 1.
Fig. 3 shows a state transition trellis, which illustrates the operation of the encoder of fig. 1, assuming a particular starting state and information bits for encoding.
Fig. 4 illustrates an exemplary radio receiver system including a digital signal processor for decoding a received branchword generated by the encoder shown in fig. 1.
Fig. 5 illustrates the manner in which the blocks of received branchwords are stored in a memory device forming part of the radio receiver shown in fig. 4.
Fig. 6 is a flow chart illustrating a sequence of operations performed by the digital signal processor forming part of the radio receiver shown in fig. 4 during decoding of a block of received branchwords generated by the encoder shown in fig. 1.
Detailed Description
For clarity of the following description, the same reference numerals will be used to refer to the same features and steps in the figures showing the prior art and in the figures showing the invention.
Fig. 1 depicts an exemplary convolutional encoder having a rate of 1/2, i.e., the encoder generates two output bits (a 2-bit branchword) for each information bit to be encoded. Encoder 10 includes two single-bit memory cells 12 and 14, and two adder circuits 16 and 18. Memory cell 12 and adder circuits 16 and 18 receive the sequence s(i) of information bits to be encoded. Memory cell 12 provides its contents to memory cell 14 each time a new information bit is received. The encoder may be considered to comprise "upstream" and "downstream" paths, each path including an adder circuit and connections to the information bit stream and to one or both of the memory cells 12 and 14.
The output of the "upstream" path of the encoder (i.e., the path including adder circuit 16) provides the first bit of the generated branchword. This output is generated by adding the current bit and the two previous bits: if the resulting sum is odd, adder 16 outputs a logic 1; if the sum is even, adder 16 outputs a logic 0. The output of the "downstream" path of the encoder (i.e., the path including adder circuit 18) provides the second bit of the branchword. This output is generated by adding the current bit and the bit two positions before it: if the resulting sum is odd, adder 18 outputs a logic 1; if the sum is even, adder 18 outputs a logic 0. Since at most three bits are used to determine each output branchword, the constraint length of the encoder is 3, and its memory is 2. The more output bits per input bit, and the longer the constraint length, the more powerful the encoding; i.e., the more robust the code is against channel noise.
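The two adder paths described above can be sketched in Python. This is an illustrative model, not the patent's implementation: the function name `encode` and the representation of the state as the pair (m12, m14) are assumptions, and the order of the two bits within each branchword (upstream bit first, following the text) may differ from the convention used in the figures.

```python
def encode(bits, state=(0, 0)):
    """Model of the rate-1/2 encoder of fig. 1.

    `state` holds the contents of memory cells (12, 14).  Each input
    bit yields a 2-bit branchword: the upstream bit from adder 16
    (current bit + both stored bits, mod 2) and the downstream bit
    from adder 18 (current bit + the bit two positions back, mod 2).
    Returns the branchwords and the sequence of visited state words.
    """
    out, states = [], ["%d%d" % state]
    for b in bits:
        m12, m14 = state
        upper = (b + m12 + m14) % 2      # adder 16, "upstream" path
        lower = (b + m14) % 2            # adder 18, "downstream" path
        out.append((upper, lower))
        state = (b, m12)                 # shift: input -> cell 12 -> cell 14
        states.append("%d%d" % state)
    return out, states
```

Encoding the bit sequence 101100 of fig. 3 from start state 00 reproduces the state sequence 00, 10, 01, 10, 11, 01, 00 given in the text.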
It should be understood that the encoder shown in fig. 1 is merely exemplary, and that in practical embodiments of the present invention the encoder may use a greater number of memory cells and adder circuits to generate a greater number of output bits per branchword.
The operation of the convolutional encoder shown in fig. 1 may be represented by a trellis diagram such as that shown in fig. 2. The trellis describes how the state of the encoder changes from one information bit time to the next. The encoder state is the contents of the encoder memory cells at any one time, read as a state "word". On the left and right sides of the trellis section are the allowable states of the encoder: 00, 01, 10 and 11. The states on the left side represent the current state of the encoder; the states on the right side represent the next state.
For example, if both previous bits are 0 (i.e., the contents of memory cells 12 and 14 are both 0), then regardless of the value of the current bit the encoder is in state 00 (the trellis node at the top left of the trellis). If the current bit is 1, then upon arrival of the next bit the encoder transitions to state 10: the bit in memory cell 14 is replaced by the bit (0) from memory cell 12, and the bit in memory cell 12 is replaced by the current bit (1). This transition is indicated by the diagonal line starting from the current state 00 at the top left of the trellis and extending down to the next state 10, the second state from the bottom on the right of the trellis. Associated with this state transition is a representation (in parentheses) of the encoder's output branchword, in this case 11.
If the current bit is a 0 instead of a 1, then upon arrival of the next bit the encoder remains in state 00 (as indicated by the horizontal line across the top of the trellis section). The trellis diagram represents all allowable transitions of the encoder states. For example, according to the diagram of fig. 2, the encoder cannot transition from state 00 to state 11 (there is no line connecting 00 on the left with 11 on the right); this follows from the fact that the shift register changes by only one bit position at a time. A plurality of trellis sections of the type shown in fig. 2 are linked together to form a trellis representing a series of encoder state transitions over time. The trellis shown in fig. 3 represents the encoding of the information bit sequence 101100 … by an encoder with start state 00. This trellis comprises six separate trellis sections of the type shown in fig. 2. In the example of fig. 3, the input bit stream causes the state sequence shown by the solid line: 00, 10, 01, 10, 11, 01, 00 …. The discrete time i is shown at the top of the trellis. The encoder outputs the branchwords shown in parentheses: 11, 01, 00, 10, 10, 11 …. Each state transition across a trellis section shown by a solid line is the allowed transition corresponding to the given current state and the information bit being encoded; the other allowed transitions are shown in dashed lines.
As shown in fig. 3, for any given state in the trellis at a particular time, there are two possible previous states from which a transition into the given state may occur. This is clear from fig. 2 or fig. 3: each state on the right side of a trellis section is connected to two states on the left side by two transition paths. Furthermore, given a particular start state, any information bit stream to be encoded results in a unique path through the trellis. These two facts provide the basis for applying Viterbi decoding to the branchwords generated by a convolutional encoder.
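The allowed transitions of the trellis section can be tabulated directly from the shift-register behavior. In this minimal sketch the state word is encoded as the integer (m12 << 1) | m14, an assumed convention; the names `next_state` and `predecessors` are illustrative.

```python
K = 2                      # encoder memory bits; constraint length is K + 1 = 3
STATES = range(1 << K)     # states 00, 01, 10, 11 as integers 0..3

def next_state(s, bit):
    # The shift register drops m14 and takes the new input bit into
    # cell 12, so only transitions of this shifted form are allowed.
    return (bit << 1) | (s >> 1)

# Every state has exactly two predecessors, and some pairs of states
# are not connected at all (e.g. 00 cannot transition to 11).
predecessors = {s: [p for p in STATES for b in (0, 1) if next_state(p, b) == s]
                for s in STATES}
```

This table is exactly what the traceback procedure later consults: given a state and a one-bit survivor decision, it yields the preceding state.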
The codewords generated by the exemplary encoder of fig. 1 are transmitted to a decoder over a communication channel. The decoder operates to determine the sequence of information bits encoded by the encoder, based on the branchwords it receives. Assuming an ideal communication channel and a known encoder start state, this is relatively straightforward: using a trellis describing the encoder's state transitions and the known start state, the decoder uses the received branchwords to determine the sequence of state transitions made by the encoder, and from those transitions the bit sequence that caused them.
In reality, however, an ideal communication channel is generally not available, so a realistic decoder must cope with the fact that some received branchwords contain bit errors: for example, the encoder generates the branchword 00 but the decoder receives 01. The decoder may therefore misinterpret the sequence of states traversed by the encoder. Moreover, in contrast to conventional Viterbi decoding, in which the start and end states of the encoder are known to be zero, a tail-biting Viterbi decoder does not know the start and end states of the encoder; it knows only that, ideally, they are the same. With incomplete knowledge of the encoder's start state and subsequent state sequence, the decoder may make errors in determining the encoded information bits.
The problem of channel errors is mitigated by using a Viterbi decoder, as is well known in the art. Given received branchwords that may contain bit errors, the Viterbi decoder selects the most likely path through the encoding trellis, and can do so from any of a number of starting states (since the decoder does not know the start state). The selection of the most likely path is made incrementally, one received branchword at a time. The result of applying the Viterbi technique to each successively received branchword is a maintained path metric reflecting the likelihood that the path associated with the metric is the path taken by the encoder.
As part of determining the best estimate of the path taken by the encoder, a decision vector is generated that records, for each state at a given discrete time, which of the two possible paths into that state is the preferred one. Paths not selected as preferred are said to be "pruned"; a pruned path has no effect on the final decoding of the branchwords. In a practical environment, the channel symbols are corrupted by noise and interference. To provide more information to the Viterbi decoder, soft received branchwords, which are real numbers, are used to compute the branch and path metrics on which path selection is based. Hereinafter, the term "branchword" refers to such a soft branchword.
At most two paths lead into any state, so, as is known in the art, the decision as to which path is kept and which is pruned can be represented by a single bit. In the exemplary encoder of figs. 1 and 2 there are four states at each discrete time, so at each time a four-bit decision vector is determined and stored in memory. Once the Viterbi technique has been applied to the received branchwords, the stored decision vectors provide the basis for a prior art Viterbi traceback procedure; it is this traceback procedure that decodes the received branchwords. Further details of conventional Viterbi decoding are described in Clark and Cain, Error-Correction Coding for Digital Communications, Chapter 6 (1981), the entire contents of which are incorporated herein by reference.
Fig. 4 shows an exemplary embodiment of a Viterbi decoder 20 forming part of a radio receiver system. The decoder 20 is connected to an antenna 22 and radio receiver circuitry 24; the radio receiver circuitry 24 receives the analog radio signal x(t) and provides digital branchwords c(i) to the decoder 20 at discrete times i.
The decoder 20 includes a digital signal processor (DSP) 26 connected to a read-only memory (ROM) 28 and a random access memory (RAM) 30. The RAM 30 stores a buffer for the N received branchwords of the present invention, as well as the results of the Viterbi updates.
The decoder 20 operates to decode branchwords received from a radio communication channel. These branchwords are generated by an encoder employing a tail-biting convolutional code, such as the encoder described above with reference to figs. 1 and 2. Because the channel is noisy, the branchwords are communicated non-ideally; that is, a branchword may contain one or more bit errors. The decoding operation performed by decoder 20 attempts to extract the communicated information from these branchwords.
Decoder 20 employs prior art Viterbi decoding techniques to decode a block of N received branchwords generated by a convolutional encoder using a tail-biting convolutional code. However, decoder 20 does so by successively performing Viterbi updates over a sequence of branchwords that is longer than the N received branchwords generated by the convolutional encoder. The sequence of branchwords over which the Viterbi updates are successively performed is constructed by adding one block of branchwords to the beginning of the N received branchwords and another block of branchwords to the end of the N received branchwords.
Preferably, this is done in the manner shown in fig. 5. As shown in that figure, the sequence of branchwords may be formed by a logical circular reading of the N received branchwords stored in RAM 30. A first block comprising S consecutive branchwords may be read from the end of the block of N received branchwords stored in RAM 30; similarly, a block of T consecutive branchwords may be read from the beginning of the block of N received branchwords. By first reading the first block 40 (comprising S consecutive ones of the N received branchwords), then reading the second block 42 (formed of the N received branchwords themselves), and finally reading the third block 44 (comprising T consecutive branchwords from the beginning of the second block), the sequence of branchwords over which the Viterbi updates are successively performed can be formed in a computationally convenient manner. Each Viterbi update generates path metrics and the decision vectors based on those metrics, as described above.
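The circular formation of the S + N + T sequence of fig. 5 can be sketched as follows. The helper name `extended_sequence` is an assumption for illustration; Python list slicing stands in for the logical circular read of the buffer in RAM 30.

```python
def extended_sequence(branchwords, S, T):
    """Build the S + N + T sequence of fig. 5: the last S branchwords
    are logically read before the stored block, and the first T after
    it, emulating a circular read of the N-branchword buffer."""
    assert 0 < S < len(branchwords) and 0 < T < len(branchwords)
    return branchwords[-S:] + branchwords + branchwords[:T]
```

For example, a block of six branchwords with S = T = 2 yields the read order (last two, all six, first two), with no copying of the stored block required in a real circular-addressing implementation.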
In performing Viterbi decoding, the decoder 20 applies the following rule. If branch metrics are accumulated along the paths through the trellis of fig. 3, then whenever two paths merge into one state, only the most likely one (the best, or survivor, path) need be retained, since the currently better path remains better for all possible extensions of the two paths: for any given extension, both paths extend through the same branch metrics. This process is described by an add-compare-select (ACS) recursion, which determines, for each step through the trellis, the path into each state having the best path metric.
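A minimal sketch of one such ACS step for the four-state trellis of fig. 2 follows, assuming a correlation branch metric over soft branchwords (bit 0 mapped to +1, bit 1 to -1). The function names and the choice of metric are assumptions for illustration, not the patent's specification.

```python
STATES = range(4)                       # state encoded as (m12 << 1) | m14
predecessors = {s: [(s & 1) << 1, ((s & 1) << 1) | 1] for s in STATES}

def branch_metric(p, b, soft_bw):
    """Correlation of the soft received pair with the branchword the
    fig. 1 encoder would emit on the transition from state p with input b."""
    m12, m14 = p >> 1, p & 1
    expected = ((b + m12 + m14) % 2, (b + m14) % 2)   # adders 16 and 18
    return sum(r * (1 - 2 * e) for r, e in zip(soft_bw, expected))

def viterbi_update(metrics, soft_bw):
    """One add-compare-select step over all four states."""
    new_metrics, decision = [], []
    for s in STATES:
        b = s >> 1                      # the input bit that drives the trellis into s
        scored = [(metrics[p] + branch_metric(p, b, soft_bw), i)
                  for i, p in enumerate(predecessors[s])]
        m, i = max(scored)              # compare-select: keep the survivor
        new_metrics.append(m)
        decision.append(i)              # one decision bit per state
    return new_metrics, decision
```

Each call consumes one soft branchword and returns the updated path metrics together with the four-bit decision vector that the later traceback consults.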
Thus, as shown in fig. 6, the decoder 20 successively performs Viterbi updates on the sequence of N + S + T branchwords read from RAM 30 in the manner shown in fig. 5. The Viterbi updates generate path metrics that are updated for each branchword until the end of the sequence of N + S + T branchwords is reached.
At this point, the decoder 20 determines, from the best path metric, the first encoder state most likely to have generated the final branchword in the sequence. A Viterbi traceback procedure is then performed from this first encoder state to determine the second encoder state at the start of the third block 44 of branchwords. Starting from this second encoder state at the end of the second block 42 of branchwords, a second Viterbi traceback procedure is performed back to the start of the second block 42 to determine the third encoder state.
If the second encoder state and the third encoder state (i.e., the end state and the start state of the Viterbi traceback procedure performed over the second block 42 of branchwords) are found to be the same, the decoder 20 has found the best tail-biting path.
If the second encoder state and the third encoder state are found to differ, the decoder 20 may optionally repeat the Viterbi traceback procedure over the second block 42 of branchwords, replacing the second encoder state with the third encoder state. The derived tail-biting path is then output. It has been found that further iterations of the Viterbi traceback procedure are typically not required.
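The two-stage traceback and tail-biting check described above can be sketched as follows. The names `traceback` and `tail_biting_decode` are hypothetical, the decision vectors are assumed to be stored one per trellis step (most recent last), and the single repeat mirrors the optional repetition described in the text.

```python
STATES = range(4)                        # state of the fig. 1 encoder, (m12 << 1) | m14
predecessors = {s: [(s & 1) << 1, ((s & 1) << 1) | 1] for s in STATES}

def traceback(decisions, end_state, steps):
    """Trace `steps` trellis sections back from `end_state` through the
    stored decision vectors.  Returns the state at the start of the
    traced span and the decoded input bits for that span."""
    s, bits = end_state, []
    for t in range(len(decisions) - 1, len(decisions) - 1 - steps, -1):
        bits.append(s >> 1)                   # the input bit that led into s
        s = predecessors[s][decisions[t][s]]  # follow the survivor decision
    return s, bits[::-1]

def tail_biting_decode(decisions, first_state, N, T):
    """Two-stage traceback: T steps over the third block to find the
    second state, then N steps over the second block to find the third
    state; if the two differ, repeat the second traceback once."""
    second, _ = traceback(decisions, first_state, T)
    third, bits = traceback(decisions[:len(decisions) - T], second, N)
    if third != second:                       # not yet tail-biting: one repeat
        third, bits = traceback(decisions[:len(decisions) - T], third, N)
    return bits
```

When the N-step traceback returns to the state it started from, the decoded bits form the derived tail-biting path.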
Suitably, the values of S and T are the same, i.e. the first and third blocks are made up of the same number of branchwords which form a subset of the N received branchwords stored in the RAM 30. However, in other embodiments of the invention, the first and third blocks of branchwords may comprise different numbers of branchwords.
The above-described method of decoding N received branchwords generated by a convolutional encoder using a tail-biting convolutional code advantageously provides more reliable path metrics for use during the Viterbi traceback procedure, by lengthening the sequence of branchwords over which the Viterbi updates are performed. It has been found that, using this method, the best tail-biting path can be found by performing the traceback procedure only once (or at most twice) over the second block of N received branchwords. Furthermore, the manner of forming the sequence of branchwords shown in fig. 5 is computationally very cheap, so the improved accuracy of the above method is achieved with minimal additional computational resources.
Although in the above embodiments the invention has been implemented primarily using digital signal processing, in other embodiments the invention may be implemented primarily in hardware using hardware components such as application specific integrated circuits. The present invention can also be implemented primarily using computer software or a combination of hardware and software.
While the invention has been described with reference to exemplary embodiments thereof, the invention is not limited to these embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
This application is based on and claims priority from australian provisional patent application No.2008906238 filed on 2.12.2008, the entire disclosure of which is incorporated herein by reference.
Industrial applicability
According to the present invention, a method of decoding tail-biting convolutional codes using viterbi decoding can be provided that minimizes the requirements on storage and computational resources and optimizes the accuracy of such decoding.

Claims (6)

1. A decoding method of decoding N received branchwords generated by a convolutional encoder using a tail-biting convolutional code, the decoding method comprising:
storing the N received branchwords in a memory;
performing viterbi updates successively on a sequence of branchwords, the sequence comprising a first block comprising S successive branchwords added to the beginning of the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T successive branchwords added to the end of the N received branchwords, where S and T are less than N, and the viterbi updates generate updated path metrics;
determining, from a best path metric, a first encoder state at the end of the third block most likely to have generated a final branchword in the sequence;
performing a viterbi traceback procedure a first time from the first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords;
performing a viterbi traceback procedure a second time from the second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and
outputting the derived tail-biting path if the second encoder state and the third encoder state are the same.
2. The decoding method of claim 1, wherein if the second encoder state and the third encoder state are not the same, the method further comprises:
replacing the second encoder state with the third encoder state;
repeating the second performing of the Viterbi traceback procedure; and
outputting the derived tail-biting path.
3. The decoding method of claim 1, wherein the sequence of branchwords on which the Viterbi updates are successively performed is formed by a logical circular reading of the N received branchwords stored in the memory.
4. The decoding method of claim 1, wherein S is equal to T.
5. The decoding method of claim 1, wherein the first block comprises S consecutive branchwords taken from the end of the second block of N received branchwords.
6. The decoding method of claim 1, wherein the third block comprises T consecutive branchwords taken from the beginning of the second block of N received branchwords.
HK12105209.2A 2008-12-02 2009-10-09 Decoding method HK1165111B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2008906238A AU2008906238A0 (en) 2008-12-02 Viterbi decoder
AU2008906238 2008-12-02
PCT/JP2009/067947 WO2010064496A1 (en) 2008-12-02 2009-10-09 Decoding method and decoding device

Publications (2)

Publication Number Publication Date
HK1165111A1 true HK1165111A1 (en) 2012-09-28
HK1165111B HK1165111B (en) 2015-07-17


Also Published As

Publication number Publication date
WO2010064496A1 (en) 2010-06-10
EP2361458A4 (en) 2014-07-09
CN102282771A (en) 2011-12-14
EP2361458A1 (en) 2011-08-31
JP2012510735A (en) 2012-05-10
CN102282771B (en) 2014-10-08
JP5370487B2 (en) 2013-12-18

Similar Documents

Publication Publication Date Title
US5802116A (en) Soft decision Viterbi decoding with large constraint lengths
US7765459B2 (en) Viterbi decoder and viterbi decoding method
EP0653715B1 (en) Integrated circuit comprising a coprocessor for Viterbi decoding
CN1808912B (en) Error correction decoder
JP3233847B2 (en) Viterbi decoding method and Viterbi decoding circuit
EP2339757B1 (en) Power-reduced preliminary decoded bits in viterbi decoder
US8009773B1 (en) Low complexity implementation of a Viterbi decoder with near optimal performance
US8489972B2 (en) Decoding method and decoding device
JPH09232972A (en) Viterbi decoder
JP5169771B2 (en) Decoder and decoding method
JP3823731B2 (en) Error correction decoder
CN102142848A (en) Decoding method and decoder of tail-biting convolutional code
US8055986B2 (en) Viterbi decoder and method thereof
JP5370487B2 (en) Decoding method and decoding apparatus
HK1165111B (en) Decoding method
JP3892471B2 (en) Decryption method
JP4226165B2 (en) Decoding device and decoding method
JP4295871B2 (en) Error correction decoder
JP3235333B2 (en) Viterbi decoding method and Viterbi decoding device
JP3337950B2 (en) Error correction decoding method and error correction decoding device
JP3120342B2 (en) Viterbi decoder
JP3720251B2 (en) Viterbi decoder
JPH08279765A (en) Decoding algorithm for convolutional code and trellis code and receiver using the same
KR0170199B1 (en) Viterbi decoder
JPH04329026A (en) viterbi decoder

Legal Events

Date Code Title Description
PC Patent ceased (i.e. patent has lapsed due to the failure to pay the renewal fee)

Effective date: 20181009