
HK1032698B - Apparatus and method for coding and decoding information bits - Google Patents

Info

Publication number
HK1032698B (application HK01103280.2A)
Authority
HK (Hong Kong)
Prior art keywords
bits, information bits, sequence, scrambling, metric
Other languages
Chinese (zh)
Other versions
HK1032698A1 (en)
Inventor
P. W. Dent
Original Assignee
Ericsson Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from US08/943,885, external-priority patent US6012160A
Application filed by Ericsson Inc.
Publication of HK1032698A1
Publication of HK1032698B
Description

Apparatus and method for coding and decoding information bits
Technical Field
The present invention relates to the transmission of digital data bits, and more particularly to a method of detecting errors in a first set of digital data bits using a second set of digital data bits.
Background
Transmitting digital data bits or symbols over a noisy channel creates the possibility of errors in the transmitted symbols. Errors in some symbols are more damaging than errors in others, and those symbols must therefore be protected with error detection and correction. In mobile radio transmission, data symbols may represent different portions of a digitized speech waveform. If errors in the more important data bits (primary bits) of the speech waveform go undetected, the waveform processing of the speech decoder can produce unpleasant artifacts: non-speech sounds generated in the decoded speech waveform by decoding errors. Errors in the less important speech bits (secondary bits) lead only to a tolerable increase in the background noise.
In order to take measures to mask such artifacts, errors in the more important primary data bits must first be detected. For example, British Patent No. 8119215 describes how an erroneous segment of a speech waveform can be replaced by an earlier received segment corresponding to a point approximately one or more pitch periods before the erroneous segment. A more sophisticated error-masking strategy, called artificial regeneration, is described in U.S. Patent No. 5,097,507.
U.S. Patent No. 5,493,584 discloses a method of determining a channel quality measure for a channel in a communication system. A bit prioritization process separates the discrete data into three streams. The first stream consists of data units critical to the information-carrying capability of the communication system. The second stream consists of data units that are important but not essential. Finally, the third stream consists of data units that are relatively less important. Using the encoded second stream and the first stream as inputs, a random bit modulator generates a stream of modulated second data units.
U.S. Patent No. 5,666,370 discloses an improvement to the error control coding scheme of low-bit-rate encoders that improves their performance in the presence of the transmission errors typical of a digital cellular channel. The scheme employs non-linear block codes tailored to fading channels in order to provide superior error protection for data at compressed half-rate speeds.
When processing input bits representing digitized speech generated by a speech digitizer, such as a residual-excited linear prediction (RELP) encoder, a vector sum excited linear prediction (VSELP) encoder, or an advanced multiband excitation (AMBE) encoder, some bits (primary bits) are more important than others (secondary bits) to protect, not only with error correction coding but also with an error detection process. When error correction coding fails, the resulting artifacts can be very disturbing to the listener. Artifacts can be prevented by detecting when errors occur in the primary bits and either muting the audio output or invoking sophisticated error-bridging techniques.
Various methods currently exist for error detection in decoded digital data. One of them, used in frame-based speech decoding, divides the bits in a frame into important and less important bits and protects the most important bits with a cyclic redundancy check (CRC) code for error detection. This process is described in the published standard for the European digital cellular system known as GSM.
In land-based cellular radiotelephone systems, where capacity is limited by mutual interference between co-channel users, adding CRC codes to all transmissions does not change the carrier-to-interference ratio (C/I) of the signal and therefore has no significant effect on capacity. In satellite communication systems, where capacity is limited by the satellite transmitter power available to combat thermal background noise, adding CRC codes to transmissions increases the number of bits or symbols that must be transmitted, and hence the required transmitter power. The error detection capability of the CRC code is thus obtained only at the cost of transmission power, and no net benefit is achieved, since increasing the transmit power even without a CRC already reduces the error rate. A method of improving error detection for the more important group of bits, such as the primary speech bits, without increasing the number of transmitted symbols would therefore be of great benefit. U.S. Patent No. 5,517,511 (Hardwick et al.) describes a method of detecting errors in a first set of bits by decoding a second set of bits protected with a Golay block code. However, the Golay code is available only at a specific code rate of 1/2, which is insufficient in many cases. A method using more flexible convolutional codes is therefore required.
Disclosure of Invention
The present invention overcomes the above-identified and other problems with an improved system for transmitting an encoded signal comprising first and second sets of data symbols. The digital data to be transmitted is divided into two sets of symbols, identified as the most significant bits (symbols) and the less significant bits (symbols). The most significant bits are error-correction coded using a convolutional code with high redundancy, such as a rate 1/4 tail-biting convolutional code, while the less significant data symbols are error-correction coded using a convolutional code with lower redundancy, such as a rate 1/2 tail-biting convolutional code. Tail-biting codes are used to avoid the overhead of transmitting tail bits, which would be as wasteful as transmitting CRC check bits. The encoded less significant bits are scrambled in dependence on the uncoded most significant bits, for example by adding a pseudo-random bit pattern that depends on the uncoded most significant bits, or by changing the order in which the bits are transmitted.
The receiver separates the scrambled and encoded less significant bits from the encoded most significant bits and decodes the most significant bits using an error correction decoder adapted to the highly redundant convolutional code, producing a first cumulative decoding metric. The encoded less significant bits are descrambled using the decoded most significant bits. The descrambled less significant bits are then decoded with an error correction decoder adapted to the lower-redundancy error correction code, producing a second cumulative metric. An error detector processes the first and second cumulative metrics to determine whether they lie in a region of acceptable decoding or in a region where the decoded data is rejected.
According to one aspect of the present invention, there is provided an apparatus for encoding information bits for transmission, comprising: a separator for classifying the information bits into a first set of information bits and a second set of information bits; a first convolutional encoder for encoding the first set of information bits to generate a third set of encoded information bits; a second convolutional encoder for encoding the second set of information bits to generate a fourth set of encoded information bits, wherein the ratio of the number of bits in the third set to the number in the first set is greater than the ratio of the number in the fourth set to the number in the second set; scrambling circuitry for scrambling the fourth set of encoded information bits in accordance with the first set of information bits, using a permutation, to generate a fifth set of scrambled encoded information bits; a transmitter for transmitting each of the fifth set of scrambled encoded information bits and each of the third set of encoded information bits in a predetermined transmission sequence; and wherein said scrambling circuitry comprises a pseudo-random sequence generator for generating a sequence of scrambling bits from an initial set of bits dependent on the first set of information bits.
According to another aspect of the present invention, there is provided an apparatus for receiving a signal and decoding coded and scrambled information bits therefrom to generate first and second sets of information bits, comprising: a first processor for successively hypothesizing, based on a plurality of stored metric values, first bit sequences containing the first set of information bits, thereby determining the first bit sequence whose accumulated metric indicates the highest likelihood; a first re-encoder for re-encoding the hypothesized first bit sequences to generate first expected signal samples corresponding to the first set of information bits; a first metric accumulator for comparing the first expected signal samples with corresponding received signal samples and for accumulating, based on the comparison, the accumulated metric associated with each first expected signal sample; descrambling circuitry for descrambling the received signal samples corresponding to the second set of information bits in accordance with the determined highest-likelihood first set of information bits; a second processor for successively hypothesizing, based on a plurality of stored metric values, second bit sequences containing the second set of information bits, thereby determining the second bit sequence having the second cumulative metric representing the highest likelihood; a second re-encoder for re-encoding the hypothesized second bit sequences to generate second expected sequences of signal samples corresponding to the second set of information bits; a second metric accumulator for comparing the descrambled received signal with the second expected sequences of signal samples and for accumulating a plurality of second accumulated metrics in dependence on the comparison; an error indicator circuit for processing the first cumulative metric representing the highest likelihood and the second cumulative metric representing the highest likelihood to produce an error indicator signal; and wherein the error indicator circuit additively combines the first and second cumulative metrics and compares the resulting sum to a threshold to generate the error indicator signal.
According to a further aspect of the present invention there is provided apparatus for receiving a signal and decoding therefrom coded and scrambled information bits to produce a first set of decoded information bits corrected for transmission errors plus an indicator signal warning of the likelihood of uncorrected errors and a second set of information bits corrected for some transmission errors, comprising: a first processor for successively assuming a first bit sequence comprising the first set of information bits according to a metric value to determine a first bit sequence having a first cumulative metric representing a highest likelihood; a first re-encoder for re-encoding the hypothesized first sequence of bits to generate a first desired sequence of signal samples corresponding to the first set of information bits; a first metric accumulator for comparing the first expected signal sample with a corresponding received signal sample and accumulating the accumulated metric according to similarity or difference; a second processor for successively assuming a second sequence comprising the second set of information bits based on the metric values to determine a second bit sequence having a second cumulative metric representing a highest likelihood; a second re-encoder for re-encoding the hypothesized second sequence of bits to generate a second desired sequence of signal samples corresponding to the second set of information bits; scrambling circuitry for scrambling said second desired signal samples corresponding to said second set of information bits with said determined highest likelihood first information bit sequence; a second metric accumulator for comparing the received signal samples with the scrambled expected signal samples and accumulating the second accumulated metric according to similarity or difference; an error indicator circuit for processing said first cumulative metric representing the highest likelihood and said second cumulative metric 
representing the highest likelihood to generate said indicator signal; and wherein the error indicator circuit additively combines the first and second cumulative metrics and compares the resulting sum to a threshold to produce the error indicator signal.
According to yet another aspect of the present invention, there is provided a method for encoding information bits, characterized by the steps of: classifying the information bits into a first set of bits and a second set of bits; selecting, from within the first and second sets of bits, a first number of contiguous bits comprising only bits of the first set; calculating a first number of coded bits from the first number of contiguous bits; selecting, from the first and second sets of bits, a second number of contiguous bits comprising only bits of the second set; calculating a second number of coded bits from the second number of contiguous bits; and scrambling the second number of coded bits in accordance with the first number of coded bits, using a permutation, to generate a third set of coded bits; and wherein said scrambling step further comprises: generating a scrambling mask based on the first set of bits; and applying the scrambling mask to the second number of coded bits.
According to another aspect of the present invention, there is provided a method for decoding a coded information signal comprising a first set of coded symbols corresponding to a first set of information symbols and a second set of scrambled and coded symbols corresponding to a second set of information symbols, characterized by the steps of: processing a first portion of an encoded information signal comprising a first set of encoded symbols to generate a plurality of hypotheses and associated likelihood metrics for the first set of information symbols, and re-encoding the first set of information symbols; comparing the re-encoded first set of information symbols with corresponding bits received from a receive buffer; re-encoding the second set of information symbols; scrambling said re-encoded second set of information symbols; processing a second portion of the encoded information signal comprising a second set of scrambled and encoded symbols using the plurality of hypotheses for the first set of information symbols to descramble and decode the scrambled and encoded symbols and calculate updated likelihood metrics; and when one of the hypotheses is expanded to include all of the first and second sets of information symbols, selecting one of the hypotheses with the highest likelihood metric and extracting the first and second sets of information symbols therefrom as the desired decoded symbols.
Drawings
For a more complete understanding of the present invention, reference is now made to the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram of transmitter coding in accordance with the present invention;
FIG. 2 illustrates one embodiment of a shift register;
FIG. 3 is a diagram of a receiver decoder according to the present invention;
FIG. 4 illustrates the application of an accept/reject threshold to the sum of inverse quantized quality indicators;
FIGS. 5a-5f show the relative probabilities of acceptance and rejection for various threshold levels;
FIG. 6 is another embodiment of a transmitter encoder; and
FIG. 7 is another embodiment of a receiver decoder.
Detailed Description
Referring now to the drawings, and more particularly to FIG. 1, there is shown a block diagram of a coding transmitter of the present invention. For purposes of the following discussion, encoding will be described with respect to voice data, with the most significant data bits consisting of primary voice data and the less significant data bits consisting of secondary voice data. However, it should be understood that the present invention can be used with any two sets of data bits where a user wishes to protect one set of data bits with the aid of a second set of data bits.
A data symbol group 10 of N1 primary bits and N2 secondary bits is input to a splitter 15, which splits the group into a first group of N1 primary bits 20 and a second group of N2 secondary bits 25. The primary bits 20 are encoded with a convolutional encoder 30; for example, convolutional encoder 30 comprises a rate 1/4 tail-biting convolutional encoder. The rate 1/4 tail-biting convolutional encoder 30 processes N contiguous bits (the constraint length) of the primary bits 20 to generate 4 output bits. The window of N contiguous bits is then shifted by one bit position before another 4 output bits are generated. This process continues until the window has moved entirely through the primary bits, ultimately producing four times the number of bits originally provided to encoder 30.
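The sliding-window, wrap-around encoding just described can be sketched as follows. This is a minimal illustration, not the patent's encoder: the four generator polynomials and the constraint length K=7 are assumptions chosen only to show the tail-biting mechanism and the 4x rate expansion.

```python
def tail_biting_encode(bits, generators, K=7):
    """Encode `bits` circularly: the window of K contiguous bits wraps
    around the end of the block, so no tail bits need be transmitted."""
    n = len(bits)
    out = []
    for i in range(n):
        # K contiguous input bits, wrapping around the block boundary
        window = [bits[(i + j) % n] for j in range(K)]
        for g in generators:  # one output bit per generator polynomial
            out.append(sum(b & ((g >> j) & 1) for j, b in enumerate(window)) % 2)
    return out

# Four illustrative generator polynomials (tap masks over a K=7 window)
GENS_RATE_1_4 = [0o133, 0o171, 0o145, 0o117]

primary = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # e.g. 12 primary bits
coded = tail_biting_encode(primary, GENS_RATE_1_4)
assert len(coded) == 4 * len(primary)             # rate 1/4: 4x expansion
```

Because the window wraps, the encoder's start and end states coincide, which is what lets the decoder begin at any point in the circular block.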
The secondary bits 25 are encoded with a second convolutional encoder 35. In one embodiment, second convolutional encoder 35 comprises a rate 1/2 convolutional encoder, which operates like the rate 1/4 convolutional encoder except that it generates only two output bits per shift from each selected group of N contiguous secondary data bits. Thus, only twice the number of secondary bits 25 originally provided is generated. In the preferred embodiment, the first convolutional encoder is a rate 1/5 tail-biting encoder and the second convolutional encoder is a rate 1/3 convolutional encoder. Further, the coded bits from the first (rate 1/5) convolutional encoder are divided by puncturing into two equal groups, each representing a rate 2/5 punctured code, while the coded bits from the second (rate 1/3) convolutional encoder are punctured or divided into two equal groups giving two rate 2/3 codes. One of the rate 2/5 encoded groups is then combined with one of the rate 2/3 encoded groups and transmitted by a first device, while the other rate 2/3 group is combined with the other rate 2/5 group and optionally transmitted by a second device, such as a diversity device. The receiver receives the transmission of the first device and the optional transmission of the second device and decodes the received signal.
The number of contiguous bits N selected by the encoders 30, 35 is referred to as the constraint length. The constraint lengths of the encoders 30, 35 need not be equal. Longer constraint lengths provide greater error correction capability but require more complex decoders. Error correction performance is affected more by the code rates of the convolutional encoders 30 and 35 (1/4 and 1/2, respectively). The code rate determines the amount of redundancy added by the encoding process: for example, the rate 1/4 encoder quadruples the number of transmitted bits, while the rate 1/2 encoder doubles it.
The uncoded primary bits 20 are also input to a scrambling mask generator 40. The scrambling mask generator 40 generates a scrambling mask having a number of bits equal to the number generated by the second convolutional encoder 35. The encoded secondary bits may be scrambled by exclusive-ORing them with the scrambling mask, which is calculated in a deterministic manner from the supplied primary bits 20. Although the scrambling mask may be generated in any number of ways, one technique is illustrated in FIG. 2. The scrambling mask generator 40 comprises a shift register 45 having N1 serial stages 50. The cascaded stages 50 are initially loaded with the N1 primary data bits. A feedback logic circuit 55 combines bits from selected stages to produce a feedback bit. Applying a clock pulse 60 to register 45 shifts the data bits in the register one position to the right, so that the feedback bit enters the leftmost stage 50a and the bit in the rightmost stage 50z falls out.
A memory 65 records the feedback bit after each of 2 × N2 clock pulses to generate a 2 × N2-bit scrambling mask. The memory 65 can alternatively record the bits dropped from register stage 50z, or any other function of the bits in register 45. The memory 65 need not record one bit for each clock pulse applied to the register 45: for example, 23 clock pulses may be applied to register 45 and the memory 65 may then record 8 bits selected from the register, the process continuing until at least 2 × N2 bits are recorded. Any method of generating a pseudo-random bit pattern from the primary bits may be used, so long as any change to the primary bit pattern changes approximately 50% of the bits of the generated scrambling mask.
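The shift-register mask generator of FIG. 2 can be sketched as a simple linear feedback shift register. The feedback tap positions below are illustrative assumptions; the patent only requires a deterministic pseudo-random function of the primary bits in which any seed change alters roughly half the mask.

```python
def scrambling_mask(primary_bits, length, taps=(0, 1, 5, 11)):
    """Generate `length` mask bits from a register seeded with the N1
    primary bits; on each clock the feedback bit enters on the left
    (stage 50a) and the rightmost bit (stage 50z) falls out."""
    reg = list(primary_bits)
    mask = []
    for _ in range(length):
        fb = 0
        for t in taps:          # feedback logic 55: XOR of selected stages
            fb ^= reg[t]
        reg = [fb] + reg[:-1]   # shift right; feedback enters stage 50a
        mask.append(fb)         # memory 65 records the feedback bit
    return mask

primary = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # N1 = 12 primary bits
mask = scrambling_mask(primary, 2 * 60)           # 2 x N2 = 120 mask bits
```

Seeding the register with even a slightly different primary bit pattern produces a different mask, which is the property the error detector later exploits.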
The resulting scrambling mask is added to the encoded secondary bits 36 at adder 70 using modulo-2 (exclusive-OR) addition. Modulo-2 addition on a bit-by-bit (symbol-by-symbol) basis ensures that a single transmission error in the scrambled and coded secondary bits remains a single error after descrambling at the receiver. Alternatively, using a permutation instead of bitwise addition can also prevent error propagation. In an implementation using bit permutation, the encoded secondary bits 36 are reordered in dependence on the primary bits 20. This can be accomplished, for example, by initializing a pseudo-random number generator with the primary bits 20 and treating its output as a sequence of bit indices specifying which encoded secondary bit should be transmitted at the next available bit position. After scrambling, the encoded primary bits 32 are interleaved with the encoded and scrambled secondary bits 36 at interleaver 75. Optionally, further scrambling (not shown) may be applied prior to transmission to provide additional privacy.
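Why bitwise modulo-2 scrambling cannot propagate channel errors can be seen in a few lines: XOR with the same mask is its own inverse, so one flipped bit on the channel is still exactly one flipped bit after descrambling. (The bit values below are arbitrary examples.)

```python
def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

coded_secondary = [1, 0, 0, 1, 1, 0, 1, 0]
mask            = [0, 1, 1, 0, 1, 0, 0, 1]   # derived from the primary bits

sent = xor_bits(coded_secondary, mask)        # adder 70
received = sent[:]
received[3] ^= 1                              # a single channel error

descrambled = xor_bits(received, mask)        # descrambler 100
errors = [i for i, (a, b) in enumerate(zip(descrambled, coded_secondary))
          if a != b]
assert errors == [3]                          # one error, in the same position
```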
Referring now to FIG. 3, a decoder for decoding a signal generated by the encoding apparatus of FIG. 1 is shown. The received signal, containing the interleaved encoded primary signal 32 and the scrambled and encoded secondary signal 36, is provided to a de-interleaver 80 for separation. The separated encoded primary signal 32 is processed by decoder 85 to recover the primary bits 20 along with a first decoding quality indicator, which is provided to error detector 90. Decoder 85 is configured to decode whatever encoding scheme is implemented by encoder 30 (FIG. 1).
The decoded primary bits 20 are provided to a scrambling mask generator 95, where the scrambling mask described above with respect to FIG. 1 is regenerated. The encoded secondary bits are descrambled at descrambler 100 by subtracting the regenerated scrambling mask from the scrambled and encoded secondary bits 36 on a bit-by-bit (symbol-by-symbol) basis. Alternatively, if bit permutation is used for scrambling, descrambling is accomplished by reordering the encoded secondary bits under control of the decoded primary bits. The decoder 105 then processes the descrambled encoded secondary bits to recover the secondary bits 25 along with a second decoding quality indicator for the decoding process. The decoding quality indicator is a cumulative path metric, such as the logarithm of the likelihood that the decoded data sequence is the correct sequence, and is provided to the error detector 90. Decoder 105 is configured to decode whatever encoding scheme is implemented by encoder 35 (FIG. 1). The error detector 90 uses the two quality indicators to determine whether the decoded primary bits are correct or erroneous.
The effect of the scrambling mask is to change the encoded secondary bits 36 from a valid encoded output of the encoder 35 into an invalid one until the scrambling is reversed at the receiver. To reverse the scrambling, however, the primary bits on which the scrambling depends must be known exactly. If the receiver makes an error in decoding the primary bits, the scrambling cannot be properly reversed and the encoded secondary bits cannot be properly descrambled. The incorrectly descrambled secondary bits then do not represent a valid encoded output of the encoder 35. The corresponding decoder 105 detects this, and its decoding quality indicator takes a value representing a low likelihood, providing an indication that an error may have occurred in decoding the primary bits.
Thus, the encoding/decoding systems of FIGS. 1 and 3 provide a means by which the decoding of the second set of bits (secondary bits) can help determine whether the earlier decoding of the first set of bits (primary bits) was successful. This is especially useful when the number of bits in the second set is much larger than in the first set, since the decision as to whether the encoded secondary bits represent a valid encoder output is then based on a much larger number of bits than if only the smaller number of primary bits were considered.
Error detector 90 uses the decoding quality indicators from both decoder 85 and decoder 105 to determine the likelihood of errors in the decoded primary bits and to provide an error indication for further processing. Further processing may include a speech decoder 110 (shown in dashed lines) for reproducing speech waveforms from the decoded primary and secondary bits. The speech decoder 110 uses the error indication to mask erroneous artifacts when the error detector 90 indicates a high likelihood of error in the decoded primary bits.
The following describes one method by which the error detector 90 can determine the likelihood of errors in the decoding of the primary bits 20; it should of course be understood that a wide variety of methods for processing likelihood metrics can be devised within the spirit of the invention. Decoders 85 and 105 estimate the most likely primary and secondary bit sequences present in the received signal using soft-decision maximum likelihood sequence estimation (MLSE), in which the received coded bits are not quantized to zero or one but are instead represented by soft values indicating the degree to which each coded bit resembles a one or a zero. The MLSE decoder hypothesizes all possible bit patterns of length equal to the constraint length of the error correction code and determines the coded bit pattern each would produce. Each coded bit pattern is then compared to the corresponding pattern of soft values. Where the polarities match, the magnitude of the soft value is subtracted from the likelihood metric for the corresponding hypothesis; otherwise, the magnitude of the soft value is added to the metric.
The hypothesized bit patterns are then extended by a new symbol, doubling the number of patterns, and at the same time the number of patterns is halved by selecting between pairs of patterns that differ only in their oldest bit position and retaining, from each pair, the pattern with the lower metric. When all information bits have been hypothesized, the lowest cumulative metric identifies the most likely symbol sequence, and that metric is the negative of the correlation between the received signal and the coded version of the sequence.
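The metric update rule described above can be sketched directly: when a hypothesized coded bit agrees in polarity with the received soft value, the soft value's magnitude is subtracted from the metric; otherwise it is added, so lower metrics mean higher likelihood. This is an illustrative fragment of the accumulation step only, not the patent's full MLSE decoder.

```python
def update_metric(metric, hypothesized_bits, soft_values):
    """hypothesized_bits are in {0, 1}; soft_values are signed reals where
    positive means 'looks like a one' and negative 'looks like a zero'."""
    for bit, soft in zip(hypothesized_bits, soft_values):
        polarity = 1 if soft >= 0 else 0
        if bit == polarity:
            metric -= abs(soft)   # polarities match: lower (better) metric
        else:
            metric += abs(soft)   # mismatch: penalize the hypothesis
    return metric

# Three agreements and one disagreement (last bit hypothesized 1, soft -0.4)
m = update_metric(0.0, [1, 0, 1, 1], [0.9, -0.7, 0.2, -0.4])
assert abs(m - (-0.9 - 0.7 - 0.2 + 0.4)) < 1e-9
```

Note that a perfectly matching hypothesis drives the metric to minus the sum of the soft-value magnitudes, i.e. minus the correlation with the received signal.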
When a tail-biting convolutional decoder is used, decoding can start at any point in the circular block, and the process continues until at least one complete decoding cycle has occurred. Since the tail-biting decoder does not start and end in a known state, the first and last decoded bits tend to exhibit a higher error rate. The decoder is therefore typically allowed to complete, for example, two full cycles, and the group of bits decoded in the middle of the two cycles is selected. The resulting cumulative metric, however, is accumulated over the two cycles rather than over the selected center bits alone. Therefore, when the tail-biting decoder is allowed to overrun, the negative metric used as the correlation value should be calculated as the difference between the accumulated metric just before the first of the finally selected output bits is decoded and the metric just after the last of the selected output bits is decoded. This correctly determines how much the metric has grown due to processing the selected output bits.
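The overrun bookkeeping amounts to subtracting two entries of the running metric. A minimal sketch, with invented metric values and bit indices purely for illustration:

```python
# cumulative[i] = accumulated metric just after decoding bit i (0-indexed)
cumulative = [0.0, -0.4, -0.9, -1.6, -2.1, -2.9, -3.3]

first, last = 2, 5   # indices bracketing the finally selected output bits

# Metric growth attributable to the selected bits only: the value just
# after their last bit minus the value just before their first bit.
metric_for_selected = cumulative[last] - cumulative[first - 1]
assert abs(metric_for_selected - (-2.5)) < 1e-9
```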
Other methods of determining quality indicators may be used. In one, the final decoded bit sequence is retrospectively correlated with the stored received soft values to obtain a correlation value, which is divided by the number of coded bits to derive an average correlation value. If the sequence is correct, the average correlation value will equal the average received signal amplitude. A separate accumulation therefore sums the squares of the received soft values, divides by their number, and takes the square root to derive the root mean square (RMS) average amplitude of the received signal.
The average correlation value is divided by the RMS average signal value to derive a decoding quality indicator lying between zero and one. A value of one indicates that there is no noise and no error is detected in the decoding. Values less than one reflect the amount of noise present in the received signal and the resulting likely number of decoding errors. An inverse quality indicator, equal to one minus the value determined above, can be more convenient. The inverse quality indication may be quantized into 20 bands of width 0.05, giving a quantized inverse quality indicator between 1 and 20, where 1 represents very reliable decoding and 20 very unreliable decoding.
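The computation of the two paragraphs above can be put in one short function: average correlation divided by RMS amplitude gives a quality value in [0, 1], and one minus that, quantized into 20 bands of width 0.05, gives the inverse indicator 1..20. This is an illustrative sketch; the function name and the clamping details are assumptions.

```python
import math

def inverse_quality_indicator(soft_values, reencoded_bits):
    """Quantized inverse quality: 1 = very reliable, 20 = very unreliable."""
    signs = [1 if b else -1 for b in reencoded_bits]
    corr = sum(s * v for s, v in zip(signs, soft_values)) / len(soft_values)
    rms = math.sqrt(sum(v * v for v in soft_values) / len(soft_values))
    quality = corr / rms            # 1.0 means noise-free decoding
    inverse = 1.0 - quality
    # quantize into 20 bands of width 0.05, clamped to 1..20
    return min(20, max(1, 1 + int(inverse / 0.05)))

# Noise-free case: soft values exactly match the re-encoded polarities
assert inverse_quality_indicator([0.5, -0.5, 0.5], [1, 0, 1]) == 1
```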
Both decoders 85 and 105 of FIG. 3 can output such inverse quality indicators to the error detector 90. The error detector 90 then decides, based on both quality indicators, whether the primary bits have been decoded correctly. For example, the decoding may be judged most likely correct as long as the sum of the inverse quantized quality indicators defined above is smaller than a selected threshold. FIG. 4 graphically illustrates the application of an accept/reject threshold to the sum of the inverse quantized quality indicators. The possible values 1-20 of the first inverse quality indicator, for decoding the primary bits, are marked along the vertical axis; the second inverse quality indicator, for decoding the secondary bits, is marked along the horizontal axis.
An exemplary acceptability criterion is that the sum of the primary and secondary inverse quality indicators for any decoding operation should not exceed 20. This corresponds to the diagonal line in fig. 4, which passes through all grid points whose coordinate sum equals the exemplary threshold of 20. Decoding operations falling to the upper right of this line, with a sum of quality values exceeding 20, are rejected (i.e., classified as erroneous decodings), while decoding operations falling to the lower left of the threshold line are accepted. Of course, the accept/reject boundary need not be a straight line but may be a curve passing through predetermined grid points.
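The straight-line accept/reject rule can be stated in a few lines (a sketch; the threshold of 20 is the exemplary value from fig. 4):

```python
def accept_frame(inv_q_primary, inv_q_secondary, threshold=20):
    """Accept the decoding when the point (secondary, primary) lies on or
    below the diagonal line of fig. 4, i.e. when the coordinate sum does
    not exceed the threshold."""
    return inv_q_primary + inv_q_secondary <= threshold
```

A curved boundary would simply replace the sum with any monotone function of the two indicators.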
The curve should be chosen to give the best compromise between correctly rejecting erroneous decodings, falsely rejecting correct decodings, and falsely accepting erroneous decodings. The remaining type of event is the acceptance of a correct decoding. The relative probabilities of these four types of events are plotted in figs. 5a-5f for different accept/reject thresholds. Fig. 5a shows the result for a particular type of encoder with a matched decoder having the following parameters:
1. the speech coder outputs 72-bit frames every 20 milliseconds, each frame divided into 12 primary bits and 60 secondary bits;
2. a rate-1/5, constraint-length-7 tail-biting convolutional encoder encodes the 12 primary bits to give 60 coded bits;
3. a rate-1/3, constraint-length-7 tail-biting convolutional encoder encodes the 60 secondary bits to give 180 coded bits; and
4. the 60 plus 180 coded bits are transmitted over a noisy channel at various signal-to-noise ratios; after decoding, a threshold of 18 is placed on the sum of the quantized inverse quality indicators to determine whether the frame should be accepted or rejected.
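A tail-biting convolutional encoder of the kind assumed in points 2 and 3 can be sketched as follows. The generator masks below are illustrative placeholders, not the actual generators used; only the constraint length 7, the rates 1/5 and 1/3, and the tail-biting property match the description:

```python
def tailbiting_encode(bits, polys):
    """Tail-biting convolutional encoder sketch.  The shift register is
    preloaded with the last K-1 data bits, so encoding starts and ends in
    the same state and no tail bits need be transmitted.  `polys` are
    generator masks over the K most recent bits (illustrative choices)."""
    K = len(polys[0])                      # constraint length
    state = bits[-(K - 1):]                # preload with the tail
    out = []
    for b in bits:
        window = state + [b]               # K bits, oldest first
        for p in polys:
            out.append(sum(w for w, m in zip(window, p) if m) % 2)
        state = window[1:]                 # shift in the new bit
    return out

# Illustrative generator masks (constraint length 7, placeholders only):
POLYS_R5 = [[1, 0, 1, 1, 0, 1, 1], [1, 1, 1, 1, 0, 0, 1],
            [1, 1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1, 1],
            [1, 1, 1, 0, 1, 1, 1]]         # rate 1/5 for the primary bits
POLYS_R3 = POLYS_R5[:3]                    # rate 1/3 for the secondary bits
```

With the frame sizes above this yields 60 + 180 = 240 coded bits every 20 ms, i.e. a gross rate of 12 kbit/s. A useful property of feedforward tail-biting codes is that cyclically rotating the input rotates the coded output block-for-block.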
Four types of events can occur with such a decoder. The desired events are the acceptance by the error detector 90 of a correctly decoded frame and the rejection by the error detector 90 of a decoded frame with an error in the primary bits. The undesired events are the erroneous rejection of a correctly decoded frame and the erroneous acceptance by the error detector 90 of a decoded frame with an error in the primary bits.
Fig. 5a shows the probability of the four types of events as a function of the signal-to-noise ratio (Eb/No) for the above parameters. It can be seen that the majority of frames with errors in the primary bits are rejected, the number of undetected errors remaining in the region of 0.1% to 1%. The speech decoder can bridge up to 10% frame erasures with artificially reconstructed lost segments while preserving useful speech quality. To understand how the probabilities vary with the choice of rejection threshold, the following figures can be compared: fig. 5a shows no error detection, where all frames are accepted; fig. 5b shows rejection only above a threshold of 20; fig. 5c rejection above a threshold of 19; fig. 5e rejection above a threshold of 18; and fig. 5f rejection above a threshold of 17.
Figs. 5a-5f show that the number of frame erasures due to the error detector falsely rejecting good frames is sometimes larger than the number of frames rejected because of errors. This is the statistical false-alarm rate inherent in any error detection criterion: sometimes good frames fail the test, increasing the number of frames lost. However, because the present invention adds no overhead in the form of a CRC for error detection purposes, all of the transmitted energy is devoted to error correction, and the number of frames lost is thus lower than when a CRC is employed. The tradeoff between frames lost through erroneously rejecting good frames and through erroneously accepting bad frames can be changed by selecting the rejection threshold, adapting performance to a particular type of information source encoder and decoder.
Referring now to fig. 6, there is shown another embodiment of the inventive encoder in which, rather than grouping the primary and secondary bits into two separate groups of data symbols and encoding and decoding them separately, the primary and secondary bits are combined into a single data group 120, clearly divided into primary and secondary bits, and that group is encoded with a single convolutional encoder 125. Convolutional encoder 125 preferably comprises a tail-biting convolutional encoder with a rate that varies according to whether the device is encoding primary or secondary bits. Variable-rate convolutional codes are generated by calculating more coded bits than necessary for each data bit and selecting a variable fraction of them for transmission. Coded bits not selected for transmission are referred to as "punctured". The number of punctured coded bits may vary from bit to bit in a manner pre-agreed between the transmitter and the receiver.
Because of the variable coding rate, whenever the encoder 125 is computing coded bits for primary bits, the number of coded bits generated is greater than the number the encoder computes for secondary bits. When a single punctured convolutional encoder 125 is used, at any given moment the encoder 125 is processing all primary bits, all secondary bits, or a mixture of primary and secondary bits, and it selects the coded bits for transmission according to the class of bits being encoded. The number of bits selected by the puncturing scheme may have to alternate between two values (e.g., 4 and 2, or 5 and 3) in order to achieve a "scarf joint" between the encoding of the primary bits at one code rate and the secondary bits at a second code rate. While the encoder 125 is encoding the secondary bits, the scrambling generator 130 generates a scrambling mask from the primary bits, as previously described, and the mask is added to the encoded secondary bits in the scrambler 135. Alternatively, the scrambling generator controls the order in which the encoded secondary bits are inserted into the transmit stream. The encoded primary and secondary bits are then submitted for transmission to a receiver.
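The variable-rate puncturing can be sketched as follows, with a per-bit schedule pre-agreed between transmitter and receiver. The kept counts (5 coded bits per primary bit, 3 per secondary bit) and the mother-code block length are illustrative assumptions, not the patent's actual values:

```python
def puncture_schedule(bit_classes, keep_primary=5, keep_secondary=3):
    """Number of mother-code coded bits kept per data bit.  Primary bits
    keep more coded bits (lower rate, stronger protection) than secondary
    bits."""
    return [keep_primary if c == 'P' else keep_secondary for c in bit_classes]

def puncture(coded_blocks, schedule):
    """Transmitter: keep the first n coded bits of each block; the rest
    are punctured (not transmitted)."""
    out = []
    for block, n in zip(coded_blocks, schedule):
        out.extend(block[:n])
    return out

def depuncture(received, schedule, block_len=5, erasure=0.0):
    """Receiver: re-insert neutral soft values (erasures) at the punctured
    positions, using the same pre-agreed schedule as the transmitter."""
    out, i = [], 0
    for n in schedule:
        out.extend(received[i:i + n])
        out.extend([erasure] * (block_len - n))
        i += n
    return out
```

Because the schedule is known on both sides, the receiver can restore the mother-code geometry before decoding, with zero-valued soft samples standing in for the punctured bits.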
Reference is now made to fig. 7, which shows a block diagram for decoding a received signal encoded by the encoder shown in fig. 6. A set 150 of primary and secondary bits is hypothesized. The format of the required decoded output is known a priori to contain a given number (N1) of primary bits at certain given positions and a second given number (N2) of secondary bits at other given positions. A known MLSE decoder (not shown) can start by hypothesizing any N-1 consecutive decoded bits, where N is the constraint length. As stated above, a cumulative similarity metric for each of the 2^(N-1) possible bit patterns is obtained by, for each pattern, first hypothesizing the Nth bit to be 1 and then 0, encoding each pattern (now extended by the Nth bit to include N bits, i.e., equal to the constraint length of the known code) and comparing the coded pattern with the received signal samples. The lesser of the pair of metrics corresponding to two patterns that differ only in their first bit position but are identical in bits 2, 3, 4, ..., N is then retained along with the corresponding pattern. In this way the number of retained patterns is again 2^(N-1); the first of the N bits is now frozen at a definite retained value for each pattern, while bits 2, 3, 4, ..., N represent all 2^(N-1) possible combinations.
The process continues by hypothesizing successively the (N+1)th bit, the (N+2)th bit, and so on until all bits are decoded. At each stage, when the most recently hypothesized N bits are re-encoded for comparison with the received signal samples, the re-encoding process used is selected according to whether the N bits are primary, secondary or a mixture, in the same known manner as the transmitter used to encode primary, secondary or mixed bits. In other words, the bit positions of the primary and secondary bits and the respective encoding processes are agreed upon in advance between the transmitter and the receiver.
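The hypothesis-extension and pairwise-retention procedure described above is the Viterbi algorithm; a minimal hard-decision sketch follows. For brevity it uses a conventional zero-start rate-1/2 code of constraint length 3 rather than the patent's tail-biting constraint-length-7 codes, and Hamming distance as the metric:

```python
from itertools import product

def conv_encode(bits, polys=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, constraint-length-3 feedforward encoder (zero initial
    state), used only to exercise the decoder below."""
    K = len(polys[0])
    reg = [0] * (K - 1)
    out = []
    for b in bits:
        window = [b] + reg                  # newest bit first
        for p in polys:
            out.append(sum(w & m for w, m in zip(window, p)) % 2)
        reg = window[:-1]
    return out

def viterbi_decode(received, polys=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoder: each hypothesis is extended by 0 and
    by 1, and of each pair of paths that agree in all but their oldest bit
    only the one with the smaller accumulated Hamming metric survives --
    the pairwise selection described above."""
    K = len(polys[0])
    n_out = len(polys)
    states = list(product((0, 1), repeat=K - 1))
    inf = float('inf')
    metric = {s: (0 if s == (0,) * (K - 1) else inf) for s in states}
    path = {s: [] for s in states}
    for t in range(len(received) // n_out):
        r = received[t * n_out:(t + 1) * n_out]
        new_metric, new_path = {}, {}
        for ps in states:
            if metric[ps] == inf:
                continue
            for b in (0, 1):                # extend each survivor both ways
                window = [b] + list(ps)
                expect = [sum(w & m for w, m in zip(window, p)) % 2
                          for p in polys]
                cand = metric[ps] + sum(e != x for e, x in zip(expect, r))
                ns = (b,) + ps[:-1]         # new encoder state
                if cand < new_metric.get(ns, inf):
                    new_metric[ns] = cand   # keep the lesser of the pair
                    new_path[ns] = path[ps] + [b]
        metric = {s: new_metric.get(s, inf) for s in states}
        path = {s: new_path.get(s, []) for s in states}
    best = min(states, key=lambda s: metric[s])
    return path[best]
```

The free distance of this (7,5) code is 5, so an isolated channel error is corrected and the surviving path's metric reports how many received bits disagreed with the best hypothesis.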
In the practice of the present invention, the decoding process described above begins at a primary bit position. A replica 155 of the encoder generates a number of coded bits from each successive selection of X contiguous bits (the constraint length) in the data set 150. The positions in fig. 7 from which the contiguous bits are selected span both the secondary and primary bits. However, the first selected starting position should be a position that does not generate any scrambled coded bits (i.e., primary rather than secondary).
Thus, decoding begins by forming all possible hypotheses for a set of contiguous bits located at positions in the data set 150 that do not produce scrambled coded bits. During this period, the scrambling mask generator 160 generates a null pattern and the scrambler 165 performs a null scrambling operation. The coded bits generated by the replica 155 of the encoder are compared at comparator 175 with the bits received from receive buffer 170. The receive buffer 170 receives the encoded signal generated by the encoder 125 of fig. 6. The comparison between the coded bits and the received bits by comparator 175 generates a metric for each hypothesis of the contiguous bit group. These metrics are stored in memory 180 together with their corresponding hypotheses. The various hypotheses of contiguous bit groups are stored in path history memory 152.
The contiguous group of bits selected from the data group 150 is then advanced through all of the primary bit positions whose coded bits contain no scrambling, and the resulting coded bits are compared at each position with the corresponding received bits from the receive buffer 170 at comparator 175 to calculate an increment to the metric value. After the contiguous-bit selection has rotated through every position containing only primary bits, hypothesis memory 180 holds several different hypotheses covering all of the primary bits, each with an associated partial cumulative metric.
The primary-bit hypotheses are now used in the scrambling mask generator 160 to generate scrambling bits, which are applied at the output of the encoder 155 by the scrambler 165 to produce the expected scrambled coded bits for the scrambled secondary data bits. The coded and scrambled secondary bits must now be generated one bit at a time, extending each hypothesis in memory 152, and then compared with the corresponding received bits from receive buffer 170 at comparator 175. The comparison provides an incremental metric, or delta, added to the existing metric of the selected hypothesis. As each hypothesis is advanced by one bit and a new cumulative metric is generated, the number of hypotheses is first doubled by the addition of the new bit and then halved by selecting between pairs of states that differ only in their oldest bit, according to the Viterbi decoding principle.
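The interplay between the primary-bit hypotheses and the scrambling of the coded secondary bits can be sketched as below. The patent does not specify a particular pseudo-random sequence generator, so a hash-based stand-in is used here; the essential properties are that the mask depends only on the primary bits and that XOR scrambling is self-inverse, so the receiver regenerates and removes the same mask once a primary-bit hypothesis is available:

```python
import hashlib

def scrambling_mask(primary_bits, n):
    """Derive n pseudo-random mask bits from the primary-bit hypothesis.
    SHA-256 is only a stand-in for the pseudo-random sequence generator."""
    seed = bytes(primary_bits)
    mask, ctr = [], 0
    while len(mask) < n:
        digest = hashlib.sha256(seed + bytes([ctr])).digest()
        for byte in digest:
            mask.extend((byte >> i) & 1 for i in range(8))
        ctr += 1
    return mask[:n]

def scramble(coded_secondary, primary_bits):
    """XOR the coded secondary bits with the mask; applying the same
    operation again descrambles them."""
    mask = scrambling_mask(primary_bits, len(coded_secondary))
    return [c ^ m for c, m in zip(coded_secondary, mask)]
```

A wrong primary-bit hypothesis produces a different mask, so the descrambled secondary bits then look like noise and inflate the secondary cumulative metric — which is exactly how the error protection of the primary bits propagates to the error detector.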
Decoding continues in this manner until all bits of the data set 150 whose coded bits contain scrambling have been hypothesized and the selection has moved one full cycle, reaching the initial starting bit set again. Decoding then continues for a second cycle, the primary bits not requiring descrambling of their coded bits and the secondary bits being descrambled using the primary bits already hypothesized in the same machine state. Decoding continues to cycle until no further improvement is deemed achievable. The hypothesis with the lowest cumulative metric is selected, and the decoded primary and secondary bit groups are extracted from the center of the multiple decoding cycles to avoid end effects.
If no partial metrics are stored during decoding, the extracted bits can be used retrospectively to process the received signal samples and generate a decoding quality indicator. If partial metrics are saved during decoding, a quality indicator for the primary-bit decoding is derived from the difference between the partial metric immediately preceding decoding of the first primary bit and the partial metric after decoding of the last primary bit. Similarly, a quality indicator for the secondary-bit decoding is derived from the difference between the partial metric immediately preceding decoding of the first secondary bit and the partial metric after decoding of the last secondary bit. Of course, the quality indicators are computed from the partial metrics stored for the best hypothesis (i.e., the machine state with the lowest overall metric) and are jointly processed by the error detector 200 to check the likelihood that the decoded primary bits contain a decoding error.
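The partial-metric differences can be sketched as follows, assuming partial[i] holds the best hypothesis's accumulated metric immediately before bit i (with one final entry after the last bit); the spans and threshold are illustrative:

```python
def section_metric(partial, first, last):
    """Metric accumulated while decoding bits first..last inclusive: the
    partial metric after the last bit minus the partial metric immediately
    before the first bit.  Smaller growth means a cleaner section."""
    return partial[last + 1] - partial[first]

def primary_error_likely(partial, primary_span, secondary_span, threshold):
    """Joint check in the spirit of error detector 200: flag a likely
    primary-bit error when the combined metric growth of both sections
    exceeds a threshold.  The threshold is application-specific."""
    total = (section_metric(partial, *primary_span)
             + section_metric(partial, *secondary_span))
    return total > threshold
```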
Although preferred embodiments of the method and apparatus of the present invention have been illustrated in the accompanying drawings and described in the foregoing detailed description, it will be understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the invention as set forth and defined by the following claims.

Claims (27)

1. An apparatus for encoding information bits for transmission, comprising:
-a separator for sorting the information bits (10) into a first set of information bits (20) and a second set of information bits (25);
a first convolutional encoder (30) for encoding a first set of information bits to generate a third set of encoded information bits (32);
a second convolutional encoder (35) for encoding a second set of information bits to generate a fourth set of encoded information bits (36), wherein the ratio of the third set of information bits to the first set of information bits is greater than the ratio of the fourth set of information bits to the second set of information bits;
scrambling circuitry (40, 70) for scrambling the fourth set of encoded information bits in dependence upon the first set of information bits using a position shift to generate a fifth set of scrambled encoded information bits;
a transmitter (75) for transmitting each of the fifth set of scrambled encoded information bits and each of the third set of encoded information bits in a predetermined transmission sequence; and
wherein the scrambling circuit comprises a pseudo-random sequence generator (40) for generating a sequence of scrambling bits from an initial set of bits dependent on the first set of information bits.
2. The apparatus of claim 1, wherein the sequence of scrambling bits comprises a number of bits at least equal to the number of fourth encoded information bits.
3. The apparatus of claim 1, wherein the sequence of scrambling bits defines a scrambling order for rearranging the fourth set of encoded information bits to produce the fifth set of scrambled encoded information bits.
4. The apparatus of claim 2, wherein the scrambled bits are bitwise combined with corresponding bits of the fourth set of encoded information bits to produce a fifth set of scrambled encoded information bits.
5. The apparatus of claim 2, wherein a first portion of the scrambling bits is bitwise combined with a corresponding first portion of the fourth set of encoded information bits and a second portion of the scrambling bits defines a scrambling order for rearranging the fourth set of encoded information bits.
6. The apparatus of claim 4, wherein said combining comprises an exclusive OR operation.
7. The apparatus of claim 1, wherein the first convolutional encoder comprises a tail-biting convolutional encoder.
8. The apparatus of claim 1, wherein the second convolutional encoder comprises a tail-biting convolutional encoder.
9. The apparatus of claim 1, wherein the first convolutional encoder comprises a punctured convolutional encoder.
10. The apparatus of claim 1, wherein the second convolutional encoder comprises a punctured convolutional encoder.
11. The apparatus of claim 1, wherein the first and second convolutional encoders comprise:
a shift register (45) comprising a number of memory cells (50);
logic circuitry (55, 65) for calculating a plurality of logical combinations of the contents of the memory cells of the shift register;
circuitry for initializing the memory cell to an initial state using a first portion of the first set of information bits;
a controller for sequentially shifting the first and second sets of information bits through the memory cells of the shift register until the memory cells return to the initial state, and for selecting, prior to each shift, a plurality of the logical combinations of the memory cells of the shift register, the selected combinations comprising the third and fourth sets of encoded information bits, according to whether the memory cells of the shift register contain only a portion of the first or of the second set of information bits.
12. The apparatus of claim 11, wherein the selection of the logical combination for each shift varies according to a predetermined puncturing scheme.
13. An apparatus for receiving a signal and decoding coded and scrambled information bits therefrom to generate first and second sets of information bits, comprising:
a first processor (80) for successively hypothesizing a plurality of first bit sequences containing the first set of information bits and determining, based on metric values, the first bit sequence having an accumulated metric indicating the highest likelihood;
a first re-encoder (125) for re-encoding the hypothesized plurality of first bit sequences to generate first desired signal samples corresponding to the first set of information bits;
a first metric accumulator for comparing the first expected signal sample with a corresponding received signal sample and for accumulating an accumulated metric associated with the first expected signal sample based on the comparison;
a descrambling circuit (100) for descrambling the received signal samples corresponding to the second set of information bits in accordance with the determined highest likelihood first set of information bits;
a second processor for successively assuming a second bit sequence comprising a second set of information bits based on a plurality of stored metric values, thereby determining a second bit sequence having a second cumulative metric representing a highest likelihood;
a second re-encoder (105) for re-encoding the hypothesized second sequence of bits to generate a second desired sequence of signal samples corresponding to a second set of information bits;
a second metric accumulator for comparing the descrambled received signal with a second expected sequence of signal samples and for accumulating a plurality of second accumulated metrics in dependence on the comparison;
an error indication (90) circuit for processing the first cumulative metric representing the highest likelihood and the second cumulative metric representing the highest likelihood to produce an error indicator signal; and
wherein the error indication circuit combines the first and second cumulative metrics in an additive manner and compares the resulting sum to a threshold to generate the error indicator signal.
14. The apparatus of claim 13, wherein the first re-encoder and the second re-encoder comprise convolutional encoders.
15. The apparatus of claim 13, wherein the first and second re-encoders comprise tail-biting convolutional encoders.
16. The apparatus of claim 13, wherein the first and second re-encoders comprise punctured convolutional encoders.
17. The apparatus of claim 13, wherein the first re-encoder generates the plurality of desired signal samples having a first ratio to the first set of information bits, and the second re-encoder generates the plurality of desired signal samples having a second ratio to the second set of information bits, wherein the second ratio is lower than the first ratio.
18. The apparatus of claim 13, wherein the descrambling circuit further comprises a generator for generating a sequence of descrambling bits from the initial state based on the determined first bit sequence of highest likelihood.
19. The apparatus of claim 13, wherein the descrambling bit sequence comprises at least the same number of bits as the number of signal samples being compared and determines the order in which the received signal samples are compared with the expected signal samples by the second metric accumulator.
20. The apparatus of claim 19, wherein the means for descrambling combines individual bits in the bit descrambling sequence with corresponding received signal samples.
21. The apparatus of claim 18, wherein the descrambling bit sequence determines an order in which the received signal is compared to the expected signal samples by the second metric accumulator.
22. The apparatus of claim 13, wherein the error indicator circuit determines an average signal strength of the received signal samples.
23. The apparatus of claim 22, wherein the average signal strength is calculated using first and second metric accumulators.
24. The apparatus of claim 13, wherein the error indicator circuit normalizes the metrics by dividing by the respective average signal strengths.
25. The apparatus of claim 24, wherein the normalized measure defines a two-dimensional plane and the error indicator signal is dependent on whether the signal is located in the first or second region of the plane.
26. An apparatus for receiving a signal and decoding therefrom coded and scrambled information bits to produce a first set of decoded information bits corrected for transmission errors plus an indicator signal warning of the likelihood of uncorrected errors and a second set of information bits corrected for some transmission errors, comprising:
a first processor (80) for successively assuming a first bit sequence comprising the first set of information bits in dependence on the metric value to determine a first bit sequence having a first cumulative metric representing the highest likelihood;
a first re-encoder (125) for re-encoding the hypothesized first sequence of bits to generate a first desired sequence of signal samples corresponding to the first set of information bits;
a first metric accumulator for comparing the first expected signal sample with a corresponding received signal sample and accumulating the accumulated metric according to similarity or difference;
a second processor for successively assuming a second sequence comprising the second set of information bits based on the metric values to determine a second bit sequence having a second cumulative metric representing a highest likelihood;
a second re-encoder for re-encoding the hypothesized second sequence of bits to generate a second desired sequence of signal samples corresponding to the second set of information bits;
scrambling circuitry (95) for scrambling the second desired signal samples corresponding to the second set of information bits with the determined highest likelihood first information bit sequence;
a second metric accumulator for comparing the received signal samples with the scrambled expected signal samples and accumulating the second accumulated metric according to similarity or difference;
an error indicator circuit (90) for processing the first cumulative metric representing the highest likelihood and the second cumulative metric representing the highest likelihood to generate the indicator signal; and
wherein the error indicator circuit additively combines the first and second cumulative metrics and compares the resulting sum to a threshold to generate the indicator signal.
27. A method for encoding information bits, characterized by the steps of:
classifying the information bits into a first set of bits and a second set of bits;
selecting a first number of contiguous bits within a first set of bits and a second set of bits comprising only the bits of the first set;
calculating a first number of coded bits from the first number of contiguous bits;
selecting a second number of contiguous bits from the first and second sets of bits comprising only the second set of bits;
calculating a second number of coded bits from the second number of contiguous bits; and
scrambling the second number of coded bits according to the first number of coded bits by using a permutation to generate a third set of coded bits; and
wherein the scrambling step further comprises:
generating a scrambling mask based on the first set of bits; and
the scrambling mask is applied to the second set of coded bits.
HK01103280.2A 1997-10-03 1998-09-30 Apparatus and method for coding and decoding information bits HK1032698B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/943,885 US6012160A (en) 1997-10-03 1997-10-03 Method for protecting important data bits using less important data bits
US08/943,885 1997-10-03
PCT/US1998/020538 WO1999018689A1 (en) 1997-10-03 1998-09-30 Method for protecting important data bits using less important data bits

Publications (2)

Publication Number Publication Date
HK1032698A1 HK1032698A1 (en) 2001-07-27
HK1032698B true HK1032698B (en) 2006-09-29
