
CN120223091A - A coding and decoding method for data compression transmission - Google Patents


Info

Publication number
CN120223091A
CN120223091A
Authority
CN
China
Prior art keywords
length
sequence
data
coding
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510201340.9A
Other languages
Chinese (zh)
Other versions
CN120223091B (en)
Inventor
马啸
王寅楚
马千里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat-sen University
Priority to CN202510201340.9A
Publication of CN120223091A
Application granted
Publication of CN120223091B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40 Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006 Conversion to or from arithmetic code
    • H03M7/4012 Binary arithmetic codes
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27 Coding, decoding or code conversion, for error detection or error correction, using interleaving techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

The invention discloses a coding and decoding method for data compression transmission. At the transmitting end, data u of length B×q is input to a sparser and grouped to obtain the total number m of character-string types; a sparse code table of size m×⌈log₂m⌉ is constructed according to the occurrence frequency of the types; the data u is sparse-mapped according to the code table to obtain a sequence of length B⌈log₂m⌉; this sequence is interleaved and divided into L groups of k bits each to obtain a sequence v of length Lk; and v is encoded in a single path or in multiple paths to obtain a codeword sequence c of length N. At the receiving end, the received sequence y of length N is decoded to obtain a sequence v̂ of length Lk; v̂ is de-interleaved and the zero-padding bits are removed to obtain a sequence of length B⌈log₂m⌉, which is input to a de-sparser to obtain an estimate û of the original data. Compared with existing variable-length coding compression transmission schemes, the invention can avoid error propagation and realize efficient data transmission when a certain level of bit errors is tolerated.

Description

Coding and decoding method for data compression transmission
Technical Field
The invention belongs to the technical field of digital communication, and particularly relates to an encoding and decoding method for data compression transmission.
Background
In some communication traffic scenarios, a large amount of data is often generated, which generally occupies a large storage space and bandwidth, and presents challenges for transmission and storage. Therefore, data compression techniques have been developed to reduce the amount of data while preserving the quality of the data as much as possible. By adopting the efficient compression algorithm, the storage requirement and the bandwidth consumption can be remarkably reduced, so that the efficiency of data transmission is improved. This is particularly important in many applications, such as video streaming, audio transmission, and large data storage.
Existing data compression schemes such as Huffman coding and LZW coding achieve lossless compression by constructing a dictionary and using variable-length coding, representing more frequent characters with shorter codewords. However, precisely because they use variable-length coding, compression methods such as Huffman coding are very sensitive to bit errors: a small number of bit errors can cause error propagation and degrade the transmission quality of the entire data stream.
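The sensitivity claim can be illustrated with a toy experiment (the two code tables below are illustrative choices, not taken from the patent): flipping a single bit desynchronizes a variable-length prefix code, while a fixed-length code confines the damage to one symbol.

```python
# Toy demonstration of error propagation: one flipped bit desynchronizes a
# variable-length (Huffman-style) prefix code but corrupts only one symbol of a
# fixed-length code. Both code tables are illustrative, not taken from the patent.
huffman = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}  # prefix code
fixed = {'A': '00', 'B': '01', 'C': '10', 'D': '11'}     # fixed length 2

def decode_prefix(bits, code):
    inv, out, buf = {v: k for k, v in code.items()}, [], ''
    for b in bits:
        buf += b
        if buf in inv:        # prefix-free, so the first match is the unique parse
            out.append(inv[buf])
            buf = ''
    return out

def flip(bits, i):            # flip the bit at position i
    return bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]

msg = list('CACACABDBD')
var_dec = decode_prefix(flip(''.join(huffman[s] for s in msg), 1), huffman)
fix_bits = flip(''.join(fixed[s] for s in msg), 1)
fix_dec = [{v: k for k, v in fixed.items()}[fix_bits[i:i + 2]]
           for i in range(0, len(fix_bits), 2)]
print(var_dec)                                    # most symbols wrong, count shifted
print(sum(a != b for a, b in zip(fix_dec, msg)))  # exactly 1 symbol wrong
```

With the fixed-length table the flipped bit changes only the first symbol, while the prefix decoder loses synchronization and the error spreads over subsequent symbols, which is the failure mode the invention avoids.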
In addition, data may be corrupted by noise during transmission, so channel coding is introduced to correct the resulting errors and ensure reliable data transmission. Block Markov superposition transmission (BMST) is a way of constructing convolutional long codes from short codes; it has a simple encoding algorithm, can be decoded with a sliding-window iterative decoding algorithm, and has a lower bound on decoding performance determined by the basic code performance and the encoding memory (Sun Yat-sen University, a block Markov superposition coding method [P]: CN 103152060A). By designing the structure of the connection matrix, BMST codewords with more flexible code rates are obtained, and source coding, channel coding, or joint source-channel coding can be realized.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art, and provides a coding and decoding method for data compression transmission for communication service scenarios. Compared with existing variable-length compression transmission schemes, it avoids error propagation and gives the data better readability when a certain number of errors is tolerated.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
The invention provides a coding and decoding method for data compression transmission, which comprises the following steps:
(1) At the transmitting end, letting the data to be transmitted, of length B×q, be u, and encoding the data u into a transmission codeword c ∈ F₂^N of length N, where F₂^N denotes the N-dimensional binary space, the encoding method comprising the following steps:
(1.1) inputting the data u of length B×q into a sparser and mapping it to a sequence v of length B⌈log₂m⌉, with the following specific steps:
(1.1.1) dividing the data u to be transmitted, of length B×q, into B groups of equal length, u = (u^(0), u^(1), …, u^(B−1)), each group containing q characters, and performing frequency statistics on the groups to obtain the total number of types m and the frequency distribution;
(1.1.2) constructing a sparse code table from the counted frequency distribution, each group u^(i) being represented by a binary codeword p^(i) of length ⌈log₂m⌉, where 0 ≤ i ≤ B−1;
(1.1.3) arranging the binary codewords p^(i) in order into the mapping sequence v;
(1.2) interleaving the sequence v and dividing it into L groups of k bits each to obtain a sequence of length Lk, and computing the sparsity θ = W_H(v)/(Lk), where W_H(v) is the Hamming weight of v;
(1.3) encoding the sequence v in a single path or, after grading, in multiple paths, to obtain a transmission codeword sequence c of length N;
(2) At the receiving end, estimating from the received sequence y of length N the data of length B×q as û, the decoding method comprising the following steps:
(2.1) decoding the received sequence y of length N in one or more paths to obtain a sequence v̂ of length Lk;
(2.2) passing the sequence v̂ through the de-interleaver corresponding to step (1.2) to obtain a sequence of length B⌈log₂m⌉;
(2.3) dividing this sequence into B groups of equal length, each of length ⌈log₂m⌉, inputting it into the de-sparser, and de-mapping according to the sparse code table constructed in step (1.1.2) to obtain an estimate û of the original data of length B×q.
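A minimal sketch of the sparser and de-sparser of steps (1.1) and (2.3), assuming only what the steps above state (interleaving and channel coding omitted; the function names and the tiny test input are illustrative):

```python
# Sketch of the sparser / de-sparser: group the data, rank group types by
# frequency, and assign fixed-length codewords of length ceil(log2 m) ordered by
# Hamming weight (most frequent type -> lowest weight). Illustrative only.
import math
from collections import Counter

def sparsify(u, q):
    groups = [tuple(u[i:i + q]) for i in range(0, len(u), q)]
    freq = Counter(groups)                       # step (1.1.1): m types + frequencies
    m = len(freq)
    w = max(1, math.ceil(math.log2(m)))          # codeword length ceil(log2 m)
    # step (1.1.2): all length-w words by ascending Hamming weight
    words = sorted(range(2 ** w), key=lambda x: (bin(x).count('1'), x))[:m]
    table = {g: format(words[r], f'0{w}b')
             for r, (g, _) in enumerate(freq.most_common())}
    v = ''.join(table[g] for g in groups)        # step (1.1.3): mapping sequence
    return v, table

def desparsify(v_hat, table):
    w = len(next(iter(table.values())))
    inv = {cw: g for g, cw in table.items()}
    out = []
    for i in range(0, len(v_hat), w):
        out.extend(inv[v_hat[i:i + w]])
    return out

v, table = sparsify(list('AABCCCAB'), q=1)
theta = v.count('1') / len(v)                    # sparsity as in step (1.2)
print(v, theta)
```

In the patent, the mapped sequence would additionally be interleaved, zero-padded into k-bit blocks, and channel-encoded (step (1.3)) before transmission.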
As a preferred technical scheme, step (1.1.2) is specifically implemented as follows:
determining the binary codeword length as ⌈log₂m⌉ from the total number of types m counted in step (1.1.1), ordering the binary codewords by Hamming weight from small to large, and finally pairing them one-to-one with the character-group types ordered by occurrence frequency from large to small.
As a preferred technical scheme, step (1.1.2) specifically comprises:
(1.1.2.1) generating m non-repeating binary codewords p_j of length ⌈log₂m⌉ to construct the sparse code table, where j = 0, 1, …, m−1 is the type number;
(1.1.2.2) mapping according to the occurrence frequency of the character groups, so that groups with higher frequency correspond to binary codewords with smaller Hamming weight, and groups with lower frequency correspond to binary codewords with larger Hamming weight.
As a preferred technical scheme, in step (1.3), the sequence v is encoded in a single path or, after grading, in multiple paths, specifically:
single-path coding is adopted, or the data is graded and multi-path coding is adopted, each path realizing channel coding or joint source-channel coding by means of block Markov superposition transmission coding.
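One natural reading of the multi-path option, matching how Example 3 below divides v into 2 paths "according to the bit number of the code word", is to route each bit position of the fixed-length codewords to its own path; a sketch under that assumption (the split rule is otherwise not fixed by the text):

```python
# Hypothetical split rule for the multi-path option: route bit position t of
# every length-w codeword to path t. Assumption inferred from Example 3.
def split_paths(v, w):
    return [''.join(v[i + t] for i in range(0, len(v), w)) for t in range(w)]

def merge_paths(paths):
    # inverse operation: re-interleave the paths codeword by codeword
    return ''.join(''.join(bits) for bits in zip(*paths))

paths = split_paths('100100', 2)  # three 2-bit codewords: 10, 01, 00
print(paths)                      # ['100', '010']
```

Splitting by bit position lets each path carry a different sparsity, so each can be matched with its own BMST code rate.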
As a preferred technical scheme, in step (2.1), a single-path or multi-path decoding algorithm, or a joint source-channel decoding algorithm, corresponding to step (1.3) is adopted.
Compared with the prior art, the invention has the following advantages and beneficial effects:
In the coding and decoding method for data compression transmission, at the transmitting end, data u of length B×q is first input into a sparsifier and divided into B groups of q characters each, and statistics yield the total number m of character-string types. Binary codewords of length ⌈log₂m⌉ are allocated to the types from high to low occurrence frequency, ordered by Hamming weight from low to high, so that a sparse code table of size m×⌈log₂m⌉ is constructed. Sparse-mapping the data u according to the code table yields a sequence of length B⌈log₂m⌉, which is interleaved and divided into L groups of k bits each (zero-padded if necessary), resulting in a sequence v of length Lk. The sequence v is encoded in a single path or in multiple paths (channel coding / joint source-channel coding) to obtain a codeword sequence c of length N. At the receiving end, the received sequence y of length N is decoded to obtain a sequence v̂ of length Lk; v̂ is de-interleaved and the zero-padding bits removed to obtain a sequence of length B⌈log₂m⌉, which is input into a de-sparsifier to obtain û. Because the invention maps the original data with fixed-length codewords, the error propagation caused by bit errors under variable-length codewords cannot occur, and errors are confined to a single group. The invention reduces the requirement on transmission reliability and improves data readability over variable-length compressed transmission coding schemes while tolerating certain data errors. In addition, the coding scheme has a simple structure and can be combined with channel coding or joint source-channel coding according to actual requirements, realizing reliable and efficient data transmission.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a transmission schematic diagram of a codec method for data compression transmission according to the present invention;
FIG. 2 is an original magic cube image in example 1 of the present invention;
Fig. 3 is the restored image of the magic cube image in embodiment 1 of the present invention, transmitted with sparsifier-concatenated BMST (SPARSIFIER-BMST) coding at signal-to-noise ratio (SNR) = 7 dB;
Fig. 4 is the restored image of the magic cube image in embodiment 1 of the present invention, transmitted with Huffman-concatenated BMST (Huffman-BMST) coding at SNR = 7 dB;
FIG. 5 is an original firework image in example 2 of the present invention;
Fig. 6 is the restored image of the firework image in embodiment 2 of the present invention, transmitted with SPARSIFIER-BMST coding at SNR = 7 dB;
Fig. 7 is the restored image of the firework image in embodiment 2 of the present invention, transmitted with Huffman-BMST coding at SNR = 7 dB.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the following description will make clear and complete descriptions of the technical solutions according to the embodiments of the present application with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the described embodiments of the application may be combined with other embodiments.
Example 1
As shown in fig. 1, this embodiment 1 provides a coding and decoding method for data compression transmission, which includes the following steps:
(1) At the transmitting end, letting the data to be transmitted, of length B×q, be u, and encoding the data u into a transmission codeword c ∈ F₂^N of length N, where F₂^N denotes the N-dimensional binary space, the encoding method comprises the following steps:
(1.1) inputting the data u of length B×q into a sparser (sparsifier) and mapping it to a sequence of length B⌈log₂m⌉, with the following specific steps:
(1.1.1) dividing the data u to be transmitted, of length B×q, into B groups of equal length, u = (u^(0), u^(1), …, u^(B−1)), each group containing q characters, and performing frequency statistics on them to obtain the total number of types m and the frequency distribution.
(1.1.2) constructing a code table from the counted frequency distribution, each group u^(i) being represented by a binary codeword p^(i) of length ⌈log₂m⌉, where 0 ≤ i ≤ B−1.
(1.1.2.1) generating m non-repeating binary codewords p_j of length ⌈log₂m⌉ to construct the code table, where j = 0, 1, …, m−1.
(1.1.2.2) mapping according to the occurrence frequency of the character groups, so that groups with higher frequency correspond to binary codewords with smaller Hamming weight, and groups with lower frequency correspond to binary codewords with larger Hamming weight.
(1.1.3) arranging the codewords p^(i), 0 ≤ i ≤ B−1, in order into the mapping sequence v.
(1.2) interleaving the sequence v and dividing it into L groups of k bits each (zero-padding if necessary) to obtain a sequence of length Lk, and computing the sparsity θ = W_H(v)/(Lk), where W_H(v) is the Hamming weight of v.
(1.3) encoding the sequence v in a single path or, after grading, in multiple paths (realizing channel coding / joint source-channel coding by block Markov superposition transmission coding) to obtain a transmission codeword c of length N.
(2) At the receiving end, estimating from the received sequence y of length N the data of length B×q as û, the decoding method comprises the following steps:
(2.1) decoding the received sequence y of length N in one or more paths to obtain a sequence v̂ of length Lk.
(2.2) passing the sequence v̂ through the de-interleaver corresponding to step (1.2) (and removing the zero padding) to obtain a sequence of length B⌈log₂m⌉.
(2.3) dividing this sequence into B groups of equal length ⌈log₂m⌉ each, inputting it into the de-sparser, and de-mapping according to the code table constructed in step (1.1.2) to obtain an estimate û of the original data of length B×q.
In this embodiment 1, the data is the pixel sequence u of the magic cube image shown in Fig. 2, of length 18874368 characters. Statistics are taken with 96 characters per group, giving 105755 types in total; the length of each codeword is 17 bits, and a sparse code table of size 105755×17 is constructed. Sparse mapping yields a sequence v of length 3342336. Zero-padding and interleaving v gives a sequence of length 3344000, and the sparsity is computed as θ = 0.284. BMST encodes v, generating a subcode of length n = 2000 for every k = 2000 bits with memory m = 2, finally yielding a codeword sequence c of length 3348000. The codeword sequence c is transmitted over an additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) modulation. At the receiving end, a sliding-window iterative decoding algorithm with decoding delay d = 9 produces a sequence estimate of length 3344000; after de-interleaving and removing the redundant zero-padding bits, a sequence of length 3342336 is obtained, and de-sparse mapping yields the estimate û of the original data, from which the image is restored. In this embodiment 1, the recovery performance of Huffman coding concatenated with BMST coding is used as the comparison: Huffman coding gives a sequence v of length 2513626; zero-padding and interleaving give a sequence of length 2514688; BMST encoding of that sequence generates a subcode of length n = 2000 for every k = 1504 bits with memory m = 2, finally yielding a codeword sequence c of length 3348000. Comparing Figs. 3 and 4, the image recovered by the sparsifier-concatenated BMST (SPARSIFIER-BMST) scheme is more readable at SNR = 7 dB. The simulated peak signal-to-noise ratio (PSNR) results are shown in Table 1; it can be seen that in the low-SNR region the proposed scheme gives higher image quality than the Huffman-concatenated BMST (Huffman-BMST) scheme. With the encoding method for data compression transmission in embodiment 1, binary codewords of different Hamming weights are allocated according to the occurrence frequency of the character groups, and BMST coding realizes joint source-channel coding to improve transmission efficiency and resist transmission noise.
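The lengths quoted in this embodiment can be re-derived from its stated parameters; the (L + m)·n accounting for the BMST termination blocks is inferred from the quoted figures rather than stated explicitly:

```python
# Re-deriving the lengths quoted in embodiment 1 from its stated parameters.
import math

B = 18874368 // 96                # number of 96-character groups
w = math.ceil(math.log2(105755))  # codeword length for m = 105755 types -> 17
v_len = B * w                     # length of the sparse-mapped sequence
k, mem, n = 2000, 2, 2000         # BMST parameters from the text
L = math.ceil(v_len / k)          # number of k-bit groups after zero padding
padded = L * k                    # zero-padded, interleaved length
c_len = (L + mem) * n             # final codeword length, with termination blocks
print(w, v_len, padded, c_len)    # 17 3342336 3344000 3348000
```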
TABLE 1 PSNR comparison of the SPARSIFIER-BMST and Huffman-BMST schemes at different SNRs

SNR              6 dB      7 dB      8 dB      9 dB
Sparsifier-BMST  26.41 dB  36.71 dB  47.90 dB  61.61 dB
Huffman-BMST     11.10 dB  11.52 dB  10.98 dB  13.68 dB
Example 2
In the encoding method for data compression transmission provided in this embodiment 2, the data is the pixel sequence u of the firework image shown in Fig. 5, of length 88510445 characters. Statistics are taken with 96 characters per group, giving 626390 types in total; the length of each codeword is 20 bits, and a sparse code table of size 626390×20 is constructed. Sparse mapping yields a sequence v of length 18439680. Zero-padding and interleaving v gives a sequence of length 18439985, and the sparsity is computed as θ = 0.31827. BMST encodes v, generating a subcode of length n = 2005 for every k = 2005 bits with memory m = 2, finally yielding a codeword sequence c of length 18443995. The codeword sequence c is transmitted over the AWGN channel with BPSK modulation. At the receiving end, a sliding-window iterative decoding algorithm with decoding delay d = 10 produces a sequence estimate of length 18439985; after de-interleaving and removing the redundant zero-padding bits, a sequence of length 18439680 is obtained, and de-sparse mapping yields the estimate û of the original data, from which the image is restored. In this embodiment 2, the recovery performance of Huffman coding concatenated with BMST coding is used as the comparison: Huffman coding gives a sequence v of length 15165098; zero-padding and interleaving give a sequence of length 15165853; BMST encoding generates a subcode of length n = 2005 for every k = 1649 bits with memory m = 2, yielding a codeword sequence c of length 18443995. Comparing Fig. 6 and Fig. 7, the image recovered by the SPARSIFIER-BMST scheme is more readable at SNR = 7 dB.
The simulated PSNR results are shown in Table 3; it can be seen that in the low-SNR region the proposed scheme gives higher image quality than the Huffman-BMST scheme. With the coding scheme for data compression transmission in embodiment 2, binary codewords of different Hamming weights are allocated according to the occurrence frequency of the character groups, and BMST coding realizes joint source-channel coding to improve transmission efficiency and resist transmission noise.
TABLE 3 PSNR comparison of the SPARSIFIER-BMST and Huffman-BMST schemes at different SNRs

SNR              6 dB      7 dB      8 dB      9 dB
Sparsifier-BMST  17.51 dB  40.86 dB  52.95 dB  63.61 dB
Huffman-BMST     15.70 dB  15.66 dB  16.24 dB  17.16 dB
Example 3
In the coding and decoding method for data compression transmission of this embodiment 3, consider a data sequence u of length 1000000 composed of the four symbols A, B, C and D, with respective probabilities {0.1081, 0.3244, 0.5405, 0.0270} and source entropy 1.5067. Statistics are taken with 1 character per group, i.e. four types; the length of each codeword is 2 bits, and a 4×2 sparse code table is constructed as follows:
Symbol   Probability   Codeword
A        0.1081        10
B        0.3244        01
C        0.5405        00
D        0.0270        11
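The single-path sparsity θ = 0.2432 reported below in this embodiment can be checked against this table: with θ = W_H(v)/(Lk), the expected fraction of ones is Σ_j p_j·W_H(p_j)/2, assuming the empirical symbol frequencies equal the stated probabilities:

```python
# Checking the single-path sparsity of embodiment 3 against the code table,
# assuming the empirical frequencies equal the stated probabilities.
probs = {'A': 0.1081, 'B': 0.3244, 'C': 0.5405, 'D': 0.0270}
table = {'A': '10', 'B': '01', 'C': '00', 'D': '11'}
# expected fraction of ones per transmitted bit: sum_j p_j * W_H(p_j) / 2
theta = sum(p * table[s].count('1') for s, p in probs.items()) / 2
print(theta)  # ~0.24325, consistent with the reported theta = 0.2432
```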
Sparse mapping according to the code table yields a sequence v of length 2000000, which can be encoded in a single path or, after grading, in multiple paths. (1) With single-path coding, statistics on the sequence v give a bit sparsity θ = 0.2432, and interleaving gives a sequence of length 2000000 bits. BMST encodes v, generating a subcode of length n = 1700 for every k = 1000 bits with memory m = 8, finally yielding a codeword sequence c of length 1713600 with code rate 1.7136. At the decoding end, a sliding-window iterative decoding algorithm with decoding delay d = 16 produces a sequence estimate of length 2000000; after de-interleaving, a sequence of length 2000000 is obtained, and de-sparse mapping yields the estimate of the original data with no errors. (2) With multi-path coding after grading, the sequence v is divided into 2 paths according to the bit positions of the codewords, giving sequences v₁ and v₂ each of length 1000000; the sparsity of v₁ is computed as θ₁ = 0.8649 and that of v₂ as θ₂ = 0.6486, and interleaving gives sequences v₁ and v₂ each of length 1000000. v₁ and v₂ are each BMST-encoded: the first path generates subcodes of length n = 646 with memory m = 8, giving a codeword sequence c₁ of length 651168; the second path generates subcodes of length n = 998 with memory m = 8, giving a codeword sequence c₂ of length 1005984; combining the two paths gives a codeword sequence c of length 1657152 with code rate 1.6572. It is assumed that the codeword sequence c is transmitted over a channel without noise interference. At the decoding end, the received sequence y of length 1657152 is split into 2 sequences y₁ and y₂ of lengths 651168 and 1005984, respectively. A sliding-window iterative decoding algorithm combined with the conditional probabilities is applied to each of the 2 paths with decoding delay d = 16, giving sequence estimates each of length 1000000; after de-interleaving and combining, a sequence of length 2000000 is obtained, and de-sparse mapping yields the estimate of the original data with no errors.
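The multi-path lengths and the code rate 1.6572 quoted above can be reproduced, assuming 1000 sub-blocks per path (one k = 1000-bit block per subcode, which is what the quoted figures imply but the text does not state outright):

```python
# Reproducing the multi-path lengths and code rate quoted in embodiment 3,
# under the assumption of 1000 sub-blocks per path.
L, mem = 1000, 8            # blocks per path and BMST memory from the text
c1 = (L + mem) * 646        # path 1: subcodes of length n = 646
c2 = (L + mem) * 998        # path 2: subcodes of length n = 998
total = c1 + c2
rate = total / 1000000      # coded bits per source symbol, as the text counts it
print(c1, c2, total, rate)  # 651168 1005984 1657152 1.657152
```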
With the encoding method for data compression transmission in embodiment 3, binary codewords of different Hamming weights are allocated according to the occurrence frequency of the character groups, and the data can be BMST-encoded in a single path or, after grading, in multiple paths to realize compression.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the order of actions described, since some steps may be performed in another order or simultaneously in accordance with the present invention.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.

Claims (5)

1. A method for encoding and decoding data compression transmission, comprising the steps of:
(1) At a transmitting end, setting data with the length of B.q to be transmitted as u, and encoding the data u into a transmission codeword with the length of N Wherein the method comprises the steps ofRepresenting an N-dimensional binary space, the encoding method comprising the steps of:
(1.1) inputting data u of length b×q to a sparser for mapping to length B Is a sequence of (2)The method comprises the following specific steps:
(1.1.1) dividing data u to be transmitted with the length of B x q into B groups u= (u (0),u(1),…,u(B-1)) with equal length, wherein the size of each group is q characters, and carrying out frequency statistics on each group to obtain the total type number m and a frequency distribution result;
(1.1.2) constructing a sparse code table using the statistical frequency distribution result, each group u (i) using binary codewords Wherein i is more than or equal to 0 and less than or equal to B-1;
(1.1.3) sequencing the binary codewords p (i) into a mapping sequence
(1.2) Interleaving the sequence v and dividing the sequence v into L groups according to each group of k bits to obtain a sequence v with a length of Lk, and counting to obtain the sparsityWherein W H (v) is the Hamming weight of v;
(1.3) carrying out single-path or grading on the sequence v and then carrying out multi-path coding to obtain a transmission codeword sequence c with the length of N;
(2) At the receiving end, for a received sequence y of length N, data of length B x q is estimated The decoding method comprises the following steps:
(2.1) decoding the received sequence y of length N in one or more ways to obtain a sequence of length Lk
(2.2) SequenceObtaining the length by the de-interleaver in the corresponding step (1.3)Is a sequence of (2)
(2.3) SequencingDivided into equal length groups BEach group has the length ofWill beInputting the raw data into a de-sparser, de-mapping according to a sparsing code table constructed in the step (1.2) to obtain an estimation of the raw data with the length of B.q
2. The method of encoding and decoding for data compression transmission according to claim 1, wherein in step (1.1.2), the specific implementation method is:
Determining the length of binary code words as the length of binary code words according to the total type number m counted in the step (1.1.1) And (3) re-ordering binary code words according to the hamming weight from small to large, and finally, sequencing the binary code words with the character group types according to the occurrence frequency from large to small in a one-to-one correspondence manner.
3. The coding and decoding method for data compression transmission according to claim 2, wherein step (1.1.2) specifically comprises:
(1.1.2.1) generating m non-repeating binary codewords p_j of length ⌈log₂ m⌉ to construct the sparsifying code table, where j = 0, 1, …, m−1 is the type index;
(1.1.2.2) mapping according to the occurrence frequency of the character groups, so that higher-frequency character groups correspond to binary codewords of smaller Hamming weight and lower-frequency character groups to binary codewords of larger Hamming weight.
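The effect of the frequency-to-weight assignment in (1.1.2.2) can be checked numerically: giving low-Hamming-weight codewords to frequent types minimizes the expected codeword weight, which is exactly the sparsity of the mapped sequence. A hedged sketch — the distribution below is made up for illustration, and `codebook`/`expected_weight` are not names from the patent:

```python
from itertools import product

def codebook(m, n):
    """m distinct length-n binary codewords, ordered by non-decreasing
    Hamming weight, so index j = 0 (the most frequent type) gets the
    lightest codeword."""
    assert m <= 2 ** n
    return sorted(product((0, 1), repeat=n), key=sum)[:m]

def expected_weight(probs, words):
    # Average codeword Hamming weight under the type distribution `probs`.
    return sum(p * sum(w) for p, w in zip(probs, words))

probs = [0.6, 0.25, 0.1, 0.05]      # type frequencies, descending (made up)
good = codebook(4, 2)               # claim-3 style assignment
bad = list(reversed(good))          # anti-assignment, for comparison
assert expected_weight(probs, good) < expected_weight(probs, bad)
```

With these numbers the frequency-aligned assignment averages 0.45 ones per codeword versus 1.55 for the reversed assignment, i.e. a far sparser mapped sequence.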
4. The coding and decoding method for data compression transmission according to claim 1, wherein in step (1.3) the sequence v is encoded in a single path or, after classification, in multiple paths, specifically:
adopting single-path coding, or classifying the data and adopting multi-path coding, each path realizing channel coding or joint source-channel coding by means of block Markov superposition transmission (BMST) coding.
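Block Markov superposition transmission encodes each sub-block with a basic code and XOR-superimposes interleaved copies of the previously encoded sub-blocks, giving the stream convolutional memory. The following is a toy sketch only, under assumptions the claim does not fix: a rate-1/2 repetition basic code, encoder memory 1, and fixed pseudo-random interleavers.

```python
import random

def bmst_encode(blocks, memory=1, seed=0):
    """Toy BMST encoder sketch: transmit, for each sub-block, its basic
    codeword XOR-superimposed with interleaved copies of the previous
    `memory` basic codewords. Interleavers are fixed random permutations
    (an assumption; the patent does not specify them here)."""
    rng = random.Random(seed)
    n = 2 * len(blocks[0])                        # basic codeword length
    perms = [rng.sample(range(n), n) for _ in range(memory)]
    past = [[0] * n for _ in range(memory)]       # all-zero initial state
    out = []
    for u in blocks:
        v = [b for b in u for _ in (0, 1)]        # repetition-2 basic code
        w = v[:]
        for i, perm in enumerate(perms):          # superimpose interleaved past
            w = [a ^ past[i][p] for a, p in zip(w, perm)]
        out.append(w)
        past = [v] + past[:-1]                    # shift encoder memory
    return out
```

Because the initial state is all-zero, the first transmitted sub-block equals the basic codeword of the first input block; every later sub-block depends on its predecessors through the superposition.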
5. The coding and decoding method for data compression transmission according to claim 1, wherein in step (2.1) a single-path or multi-path decoding algorithm, or a joint source-channel decoding algorithm, corresponding to the coding of step (1.3) is adopted.
CN202510201340.9A 2025-02-24 2025-02-24 Coding and decoding method for data compression transmission Active CN120223091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510201340.9A CN120223091B (en) 2025-02-24 2025-02-24 Coding and decoding method for data compression transmission

Publications (2)

Publication Number Publication Date
CN120223091A true CN120223091A (en) 2025-06-27
CN120223091B CN120223091B (en) 2025-11-14

Family

ID=96110709

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152060A (en) * 2013-01-17 2013-06-12 中山大学 Grouping Markov overlapping coding method
US20140140400A1 (en) * 2011-06-16 2014-05-22 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Entropy coding supporting mode switching
US20160112158A1 (en) * 2013-07-03 2016-04-21 Huawei Technologies Co., Ltd. Method for Concurrent Transmission of Information Symbols in Wireless Communication Systems
US20170041021A1 (en) * 2014-04-27 2017-02-09 Gurulogic Microsystems Oy Encoder, decoder and method
US20190312666A1 (en) * 2018-04-06 2019-10-10 International Business Machines Corporation Error correcting codes with bayes decoder and optimized codebook
CN116980076A (en) * 2023-06-26 2023-10-31 中山大学 Source-channel joint coding method for short packet communication
CN117651076A (en) * 2023-11-29 2024-03-05 哈尔滨工程大学 Adaptive cross-domain multichannel secret source coding compression and decompression method
CN117978179A (en) * 2023-12-29 2024-05-03 北京集朗半导体科技有限公司 Decompression method and device for compressed data, chip and storage medium
CN119402203A (en) * 2024-10-14 2025-02-07 安徽大学 Unconstrained biometric verification method, system and device integrating AdaMTrans and Neural-MS decoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ROHAN PINTO et al.: "Implementation of Digital FIR Filter Using Optimized Hybrid Arthmetic Unit", IEEE, 19 November 2024 (2024-11-19) *
ZHAI Zhuqun et al.: "Joint Source-Channel Decoding of Low-Density Parity-Check Codes", Acta Armamentarii (兵工学报), no. 2, 15 December 2015 (2015-12-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant