
CN111432211B - Residual error information compression method for video coding - Google Patents


Info

Publication number
CN111432211B
CN111432211B (application CN202010247702.5A)
Authority
CN
China
Prior art keywords
coding
entropy
residual information
data
quantization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010247702.5A
Other languages
Chinese (zh)
Other versions
CN111432211A (en)
Inventor
段强
汝佩哲
李锐
金长新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd
Priority to CN202010247702.5A
Publication of CN111432211A
Application granted
Publication of CN111432211B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/124: Quantisation
    • H04N19/169: Methods or arrangements using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184: Methods or arrangements using adaptive coding, the unit being bits, e.g. of the compressed video stream
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587: Methods or arrangements using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract


The invention provides a residual information compression method for video coding, relating to the fields of information compression and codec design. Using the autoencoder idea, the residual information is passed through a trained encoder network to produce a feature map; quantization then reduces the storage footprint of the data, and entropy coding further compresses the quantized data. When decoding the residual information, the reverse flow is used: the stored entropy-coded data is decoded and dequantized, then passed through a decoder mirroring the encoder, restoring the three-channel residual information from the feature map. By compressing (or re-compressing) existing residual information, storage space and storage cost are reduced severalfold.


Description

Residual error information compression method for video coding
Technical Field
The invention relates to the field of information compression, coding and decoding, in particular to a residual error information compression method for video coding.
Background
In the digital media era, vast amounts of image and video data are generated and stored in daily life, social networking, public-security surveillance, industrial production and other fields, consuming large amounts of storage space. H.264, the current mainstream video compression format, still leaves room for improvement in compression ratio, and its block-based motion estimation introduces color artifacts; H.265, which has not yet been widely adopted, is held back by low coding efficiency and various patent disputes.
Motion compensation, an effective way to reduce redundant information across a frame sequence, predicts and compensates the current local image from a previous local image. The prediction usually differs from the real video information by a residual, and this residual information can restore what is lost in the motion compensation process.
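As a minimal illustration (with made-up pixel values, not from the patent), the residual is simply the difference between the actual block and its motion-compensated prediction, and adding it back restores the block exactly:

```python
prev_block = [52, 55, 61, 59]      # hypothetical pixels from the reference frame
curr_block = [54, 56, 60, 63]      # actual pixels in the current frame

# Prediction here is simply "copy the reference block" (zero motion vector).
predicted = list(prev_block)

# Residual = actual - predicted; storing it lets the decoder complement
# exactly what the prediction missed.
residual = [c - p for c, p in zip(curr_block, predicted)]

# Decoder side: prediction + residual reproduces the original block.
restored = [p + r for p, r in zip(predicted, residual)]
print(residual)                    # [2, 1, -1, 4]
print(restored == curr_block)      # True
```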
In view of the large-scale application of neural networks and deep learning techniques to tasks in the field of artificial intelligence, compressing data by means of neural networks is very promising.
Disclosure of Invention
Based on the above technical problem, the present invention provides a residual information compression method for video coding, which obtains compressed residual information at a low bit rate, for storing and compressing the residual information produced after motion estimation in video compression.
The method is based on an autoencoder neural network structure, uses the GDN activation function, and combines quantization and entropy coding to compress the residual information.
An autoencoder is an artificial neural network that learns an efficient representation of input data through unsupervised learning. It needs no special labels for the training data; the loss is computed from the difference between input and output. The network's internal representation of the input can be regarded as a code, and its dimensionality is usually smaller than that of the input, which yields compression and dimensionality reduction. Naively training the network to reproduce its input is of little use on its own, so the network is forced to learn an efficient representation by adding internal size constraints such as a bottleneck layer, or by adding noise to the training data and training the autoencoder to recover the original data.
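As a minimal sketch of the bottleneck idea (a toy linear autoencoder with hand-derived gradients; the data, sizes, and learning rate are illustrative assumptions, not the patent's network):

```python
# Toy 2-D data that actually lies on a 1-D line (x1 = 2*x0), so a 1-unit
# bottleneck can represent it with almost no loss.
data = [(0.1 * k, 0.2 * k) for k in range(1, 11)]

w = [0.5, 0.5]   # encoder weights: bottleneck code h = w0*x0 + w1*x1
v = [0.5, 0.5]   # decoder weights: reconstruction (v0*h, v1*h)

def train_step(lr=0.02):
    """One pass of summed-squared-error gradient descent over the dataset."""
    gw = [0.0, 0.0]
    gv = [0.0, 0.0]
    total = 0.0
    for x0, x1 in data:
        h = w[0] * x0 + w[1] * x1          # encode (1-D bottleneck)
        e0 = v[0] * h - x0                 # reconstruction errors
        e1 = v[1] * h - x1
        total += e0 ** 2 + e1 ** 2
        gv[0] += 2 * e0 * h                # d(loss)/d(decoder weights)
        gv[1] += 2 * e1 * h
        dh = 2 * (e0 * v[0] + e1 * v[1])   # backprop through the code
        gw[0] += dh * x0                   # d(loss)/d(encoder weights)
        gw[1] += dh * x1
    for i in range(2):
        w[i] -= lr * gw[i]
        v[i] -= lr * gv[i]
    return total

losses = [train_step() for _ in range(300)]
print(losses[0] > losses[-1])  # True: the bottleneck learns the data's structure
```

The loss compares input with output, exactly as the paragraph describes: no labels are needed beyond the data itself.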
After an efficient representation is obtained, it can be quantized for further compression. High-precision floating-point numbers occupy considerable storage, yet excess digits after the decimal point contribute little to the actual task. However, a neural network is optimized by gradient descent during back-propagation, and quantization is non-differentiable, so it cannot take part in gradient computation. Several surrogates can replace direct quantization during training, such as adding uniform noise or soft quantization.
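The two facts this paragraph relies on can be checked directly: hard rounding error never exceeds half a quantization step, and the uniform-noise training surrogate stays in the same band (the values below are illustrative):

```python
import random

random.seed(42)

values = [1.37, -0.62, 3.95, 0.08]   # hypothetical feature-map entries

# Hard quantization used at inference time: non-differentiable (zero gradient
# almost everywhere), so it cannot be used during back-propagation.
hard = [round(v) for v in values]

# Training-time surrogate: add U(-0.5, 0.5) noise instead of rounding, keeping
# the operation differentiable while mimicking the quantization error.
soft = [v + random.uniform(-0.5, 0.5) for v in values]

# Both stay within the same half-unit band around the original value.
print(all(abs(h - v) <= 0.5 for h, v in zip(hard, values)))  # True
print(all(abs(s - v) <= 0.5 for s, v in zip(soft, values)))  # True
```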
The quantized feature values are further compressed by entropy coding; for the commonly used entropy coders such as arithmetic coding, Huffman coding and Shannon coding, designing an efficient probability model is the key.
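As one concrete instance of entropy coding driven by a probability model, a Huffman coder can be built directly from symbol frequencies; this is an illustrative stand-in, not the arithmetic coder the method's steps describe:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies (a simple probability model)."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tie-break, partial codebook); the tie-break keeps
    # tuple comparison away from the dicts.
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

data = list("aaaabbc")                      # skewed distribution
code = huffman_code(data)
bits = "".join(code[s] for s in data)
print(len(bits))                            # 10 bits vs 14 for a fixed 2-bit code
print(len(code["a"]) <= len(code["c"]))     # True: frequent symbol, shorter code
```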
Entropy coding is lossless compression of data: it reduces bits by identifying and eliminating statistical redundancy, so no information is lost during compression. The goal is to represent the discrete data with fewer bits than the raw representation needs, without loss of information.
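The bound that lossless entropy coding approaches is the Shannon entropy of the symbol distribution; this illustrative computation (with an assumed 90/10 distribution) shows why skewed quantized data compresses well below one bit per symbol:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy in bits/symbol: the lossless lower bound entropy coding targets."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

data = [0] * 90 + [1] * 10          # highly skewed quantized values
h = entropy_bits(data)
print(h < 1.0)   # True: far below the 1 bit/symbol a raw binary dump would use
```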
This autoencoder-plus-entropy-coding method obtains compressed residual information at a low bit rate and is used to store and compress the residual information produced after motion estimation in video compression.
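The low-bit-rate objective can be made concrete as a rate-distortion loss of the kind described in step 2) (MSE plus a weighted bits-per-pixel term); the sample values and the weight `lam` are illustrative assumptions:

```python
def rd_loss(original, reconstructed, total_bits, lam=0.01):
    """Rate-distortion objective: MSE (distortion) + lam * bits-per-pixel (rate).
    A small lam favors quality; a large lam favors a smaller bitstream."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    bpp = total_bits / n
    return mse + lam * bpp

orig = [10.0, 12.0, 11.0, 9.0]     # hypothetical residual samples
recon = [10.5, 11.5, 11.0, 9.5]    # autoencoder reconstruction
print(round(rd_loss(orig, recon, total_bits=8), 4))  # 0.2075 = 0.1875 + 0.01*2.0
```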
The residual features are first used to train the autoencoder network. The trained encoder is then used for extraction, producing a feature map; quantization reduces the storage footprint of the data, and entropy coding further compresses the quantized data. When decoding the residual information, the reverse flow is used: the stored entropy-coded data is decoded and dequantized, then passed through a decoder with the mirrored structure, restoring the residual information from the feature map.
The implementation steps comprise: building the neural network architecture, encoding, quantization, entropy coding, storing the generated file, entropy decoding and decoding. Specifically:
1) Build the neural network architecture, specifying the number of convolution layers, the convolution kernel sizes, the padding method, and the strides required for encoding. As a general design principle, kernel sizes go from large to small, channel counts go from small to large (or stay constant), and strides > 1 are placed at certain layers to reduce the feature map size;
2) Train with the training set; each residual sample serves as its own label. The loss function is built from MSE and bpp (bits per pixel) and optimized with the Adam optimizer. After multiple iterations a trained neural network model is obtained;
3) Encoding feeds the existing residual information into the encoder part of the trained network and obtains a feature map through multiple strided convolutions, with ReLU or GDN as the activation function of each convolutional layer;
4) Two quantization surrogates are commonly used: adding uniform noise and soft quantization. Adding uniform noise replaces quantization with noise injection during training; because the difference before and after quantization resembles uniform noise, it is simulated by artificially added noise;
5) Entropy coding starts with binarization: non-binary numbers must be binarized or converted to binary before arithmetic coding. The probability density functions of all binary symbols are estimated, and each bit of the binarized symbols is arithmetically coded according to the estimated probability density function;
6) The encoded file is stored in serialized form and can be handled with a serialization package such as pickle;
7) Entropy decoding reads the serialized file and first converts it to a binary fraction, i.e. a radix point is placed in front of the most significant bit; decoding then proceeds according to the existing probability density function;
8) Entropy decoding yields a feature map identical in size to the one before entropy coding. A neural network mirroring the encoding network, with deconvolution layers in place of convolution layers, restores the feature map to the three-channel residual information, and a final rounding quantization is applied when saving.
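Steps 4) to 7) can be sketched end to end as follows; the feature values are hypothetical and the arithmetic coder itself is omitted (the binarized bits stand in for the entropy-coded payload), so this illustrates the data flow rather than the patent's implementation:

```python
import pickle

feature_values = [1.37, -0.62, 3.95, 0.08]       # hypothetical feature-map entries

# Step 4 (inference side): hard rounding quantization.
quantized = [round(v) for v in feature_values]   # [1, -1, 4, 0]

# Step 5: binarize to fixed-width codes (an offset makes every value non-negative).
offset = -min(quantized)
width = max(q + offset for q in quantized).bit_length()
bits = "".join(format(q + offset, f"0{width}b") for q in quantized)
# An arithmetic coder would now code each bit with an estimated probability
# model; here the raw bit string stands in for the entropy-coded payload.

# Step 6: serialize everything the decoder needs.
blob = pickle.dumps({"bits": bits, "width": width,
                     "offset": offset, "count": len(quantized)})

# Step 7: read back; int(bits, 2) / 2**len(bits) is the "radix point in front
# of the most significant bit" view an arithmetic decoder works with.
meta = pickle.loads(blob)
fraction = int(meta["bits"], 2) / 2 ** len(meta["bits"])
assert 0.0 <= fraction < 1.0

# Undo binarization to recover the quantized values exactly (lossless stages).
w = meta["width"]
restored = [int(meta["bits"][i * w:(i + 1) * w], 2) - meta["offset"]
            for i in range(meta["count"])]
print(restored == quantized)  # True
```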
The invention has the following advantages:
The autoencoder-based approach also performs well on image compression and super-resolution tasks.
The method can be applied to video coding, decoding and compression, reducing storage space and storage cost severalfold by compressing or re-compressing the existing residual information. The compressed residual information mainly supplements information lost in video compression and improves the picture quality of compressed video.
Drawings
FIG. 1 is a schematic workflow diagram of the present invention;
fig. 2 is an exemplary diagram of a neural network structure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
Using the autoencoder idea, the residual information is extracted by a trained encoder network to generate a feature map; quantization then reduces the storage footprint of the data, and entropy coding further compresses the quantized data. When decoding the residual information, the reverse flow is used: the stored entropy-coded data is decoded and dequantized, then passed through a decoder with the mirrored structure, restoring the three-channel residual information from the feature map.
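The mirrored encoder/decoder pair can be sanity-checked with the standard convolution and transposed-convolution size formulas; the four-layer configuration and the 256x256 input below are illustrative assumptions, not taken from the patent:

```python
def conv_out(size, kernel, stride, pad):
    # Spatial size after a convolution: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel, stride, pad, out_pad=0):
    # Spatial size after a transposed convolution:
    # (size - 1) * stride - 2*pad + kernel + out_pad
    return (size - 1) * stride - 2 * pad + kernel + out_pad

# Hypothetical encoder: kernels shrink layer by layer, stride 2 downsamples.
encoder = [(9, 2, 4), (5, 2, 2), (5, 1, 2), (3, 2, 1)]
size = 256
for k, s, p in encoder:
    size = conv_out(size, k, s, p)
print(size)  # 32: the feature map is 8x smaller per dimension

# Mirrored decoder: the same layers reversed; output padding compensates the
# floor division on the conv path so the round trip lands back on 256.
decoder = [(3, 2, 1, 1), (5, 1, 2, 0), (5, 2, 2, 1), (9, 2, 4, 1)]
for k, s, p, op in decoder:
    size = deconv_out(size, k, s, p, op)
print(size)  # 256
```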
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (1)

1. A residual information compression method for video coding, characterized in that:
the method is based on an autoencoder neural network structure, uses the GDN activation function, and combines quantization and entropy coding to compress residual information;
the autoencoder is forced to learn an efficient representation of the data by adding internal size constraints, adding noise to the training data, and training the autoencoder to recover the original data;
after the efficient representation is obtained, it is quantized for further compression;
the quantized feature values are entropy coded for further compression;
entropy coding is lossless compression of the data, reducing bits by identifying and eliminating statistically redundant parts, so that no information is lost when compression is performed;
using the autoencoder idea, the residual features are used to train the autoencoder network; the trained encoder network then extracts a feature map; quantization reduces the storage footprint of the data, and entropy coding further compresses the quantized data; when decoding the residual information, the reverse flow is used: the stored entropy-coded data is decoded and dequantized, and a decoder with the mirrored structure restores the residual information from the feature map;
the steps include: building the neural network architecture, encoding, quantization, entropy coding, saving the generated file, entropy decoding and decoding;
wherein the network structure comprises at least one group of convolution layers downsampling by setting strides, one group of deconvolution layers upsampling by setting strides, and one group of layers for quantization and entropy coding; the kernel size and number of the convolution layers are determined experimentally, and the activation function of the convolution layers is GDN (generalized divisive normalization) or ReLU;
the specific steps are:
1) building the neural network architecture, specifying the number of convolution layers required for encoding, the kernel sizes, the padding method, and the strides;
2) training with the training set, where the label of each residual sample is itself; the loss function is built from MSE and bpp and optimized with the Adam optimizer; after several iterations a trained neural network model is obtained;
3) the encoding process inputs the existing residual information into the encoder part of the trained neural network and obtains the feature map through multi-step convolution;
4) two quantization methods are commonly used: adding uniform noise and soft quantization; adding uniform noise replaces quantization with noise injection during training;
5) entropy coding starts with binarization, and the binary numbers are encoded; non-binary numbers must be binarized or converted to binary before arithmetic coding; the probability density functions of all binary symbols are counted, and each bit of the binarized symbols is arithmetically encoded according to the statistically obtained probability density function;
6) the encoded file is serialized and saved, and processed with a serialization package;
7) entropy decoding reads the serialized file and first converts it to a binary fraction, i.e. a radix point is added in front of the most significant bit; decoding then proceeds according to the existing probability density function;
8) after entropy decoding, a feature map with exactly the same size as before entropy coding is obtained; a neural network mirroring the encoding network is constructed, with deconvolution layers in place of convolution layers, restoring the feature map to the three-channel residual information, with one-step rounding quantization applied when saving.
CN202010247702.5A 2020-04-01 2020-04-01 Residual error information compression method for video coding Active CN111432211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010247702.5A CN111432211B (en) 2020-04-01 2020-04-01 Residual error information compression method for video coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010247702.5A CN111432211B (en) 2020-04-01 2020-04-01 Residual error information compression method for video coding

Publications (2)

Publication Number Publication Date
CN111432211A CN111432211A (en) 2020-07-17
CN111432211B (en) 2021-11-12

Family

ID=71550390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010247702.5A Active CN111432211B (en) 2020-04-01 2020-04-01 Residual error information compression method for video coding

Country Status (1)

Country Link
CN (1) CN111432211B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115118972B (en) * 2021-03-17 2025-09-02 华为技术有限公司 Video image encoding and decoding method and related equipment
CN119256542A (en) * 2022-02-28 2025-01-03 抖音视界有限公司 Method, device and medium for video processing
CN114972554B (en) * 2022-05-23 2026-01-09 北京市商汤科技开发有限公司 Image processing methods and apparatuses, electronic devices and storage media
CN116939226B (en) * 2023-06-14 2026-02-06 南京大学 Low-code-rate image compression-oriented generated residual error repairing method and device
CN117896525A (en) * 2024-01-16 2024-04-16 镕铭微电子(济南)有限公司 Video processing, model training method, device, electronic device and storage medium
CN119743626B (en) * 2024-12-18 2025-10-03 中国科学院深圳先进技术研究院 Video secondary compression method and device based on residual error coding and network equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107018422A (en) * 2017-04-27 2017-08-04 四川大学 Still image compression method based on depth convolutional neural networks
CN107396124A (en) * 2017-08-29 2017-11-24 南京大学 Video-frequency compression method based on deep neural network
WO2019009447A1 (en) * 2017-07-06 2019-01-10 삼성전자 주식회사 Method for encoding/decoding image and device therefor
TW201924342A (en) * 2017-10-12 2019-06-16 聯發科技股份有限公司 Method and apparatus of neural network for video coding
CN110753225A (en) * 2019-11-01 2020-02-04 合肥图鸭信息科技有限公司 Video compression method and device and terminal equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110211637A1 (en) * 2007-11-20 2011-09-01 Ub Stream Ltd. Method and system for compressing digital video streams
CN110472483B (en) * 2019-07-02 2022-11-15 五邑大学 SAR image-oriented small sample semantic feature enhancement method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107018422A (en) * 2017-04-27 2017-08-04 四川大学 Still image compression method based on depth convolutional neural networks
WO2019009447A1 (en) * 2017-07-06 2019-01-10 삼성전자 주식회사 Method for encoding/decoding image and device therefor
CN107396124A (en) * 2017-08-29 2017-11-24 南京大学 Video-frequency compression method based on deep neural network
TW201924342A (en) * 2017-10-12 2019-06-16 聯發科技股份有限公司 Method and apparatus of neural network for video coding
CN111133756A (en) * 2017-10-12 2020-05-08 联发科技股份有限公司 Neural network method and apparatus for video coding
CN110753225A (en) * 2019-11-01 2020-02-04 合肥图鸭信息科技有限公司 Video compression method and device and terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DeepCoder: A Deep Neural Network Based Video Compression; Tong Chen; IEEE; 2018-03-01; entire document *

Also Published As

Publication number Publication date
CN111432211A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111432211B (en) Residual error information compression method for video coding
CN111246206B (en) Optical flow information compression method and device based on self-encoder
CN113747163B (en) Image coding, decoding and compression methods based on context reorganization modeling
CN115278262B (en) End-to-end intelligent video encoding method and device
CN109889839B (en) Image encoding and decoding system and method for region of interest based on deep learning
CN100403801C (en) A context-based adaptive entropy encoding/decoding method
CN110290387A (en) A Generative Model-Based Image Compression Method
CN111343458B (en) Sparse gray image coding and decoding method and system based on reconstructed residual
CN116939226B (en) Low-code-rate image compression-oriented generated residual error repairing method and device
CN114697632A (en) End-to-end stereo image compression method and device based on bidirectional condition coding
CN102014283A (en) First-order difference prefix notation coding method for lossless compression of image data
Wu et al. Improved decoder for transform coding with application to the JPEG baseline system
CN115882866A (en) Data compression method based on data difference characteristic
CN115150628A (en) Coarse-to-fine depth video coding method with super-prior guiding mode prediction
CN114663536A (en) Image compression method and device
CN117915107B (en) Image compression system, image compression method, storage medium and chip
CN114501034A (en) Image Compression Method and Medium Based on Discrete Gaussian Mixture Hyperprior and Mask
Hassen et al. Quantum Machine Learning for Video Compression: An Optimal Video Frames Compression Model using Qutrits Quantum Genetic Algorithm for Video multicast over the Internet.
CN115761446B (en) Image compression method and system based on Swin-Transformer and autoregression
Karthikeyan et al. An efficient image compression method by using optimized discrete wavelet transform and Huffman encoder
Kumar et al. Vector quantization with codebook and index compression
CN115883842B (en) Filtering and encoding and decoding method, device, computer readable medium and electronic device
CN111652789B (en) Big data-oriented color image watermark embedding and extracting method
CN111080729B (en) Training picture compression network construction method and system based on Attention mechanism
CN117893624B (en) Color image lossless compression and decompression method based on quaternion neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211026

Address after: 250100 building S02, No. 1036, Langchao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: 250100 First Floor of R&D Building 2877 Kehang Road, Sun Village Town, Jinan High-tech Zone, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant