
CN114826832B - Channel estimation method, neural network training method and device, and equipment - Google Patents

Channel estimation method, neural network training method and device, and equipment

Info

Publication number
CN114826832B
CN114826832B
Authority
CN
China
Prior art keywords
channel estimation
neural network
res
channel
estimation value
Prior art date
Legal status
Active
Application number
CN202110130782.0A
Other languages
Chinese (zh)
Other versions
CN114826832A (en)
Inventor
沈弘
孙羿
赵春明
杜振国
彭兰
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110130782.0A
Publication of CN114826832A
Application granted
Publication of CN114826832B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/0224 Channel estimation using sounding signals
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/024 Channel estimation algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Power Engineering (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract


An embodiment of the present invention discloses a channel estimation method, a neural network training method and apparatus, and a device. In the channel estimation method, a receiving end performs channel estimation and equalization according to a received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of those positions. The channel information at each RE position, comprising the received signal, the first channel estimation value, and the data pre-judgment symbol at that position, is input into a first neural network to obtain a second channel estimation value at that RE position. By optimizing the first channel estimation value at each RE position through the first neural network, the method improves the accuracy of channel estimation.

Description

Channel estimation method, neural network training method, device and equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a channel estimation method, a neural network training method, a device, and equipment.
Background
Orthogonal frequency-division multiplexing (OFDM) is a core technology of the physical layer of the fifth-generation mobile communication system (5th generation, 5G), offering resistance to multipath fading and intersymbol interference, flexible bandwidth, and high spectral efficiency.
Channel estimation is a key technology in an OFDM communication system and has a great influence on the transmission performance of the system. Channel estimation algorithms can be classified into blind, semi-blind, and pilot-based methods according to whether pilot symbols are used. The pilot-based channel estimation method most widely applied under the long term evolution (LTE) and 5G protocol frameworks has the following core idea: first, the received signal at the pilot symbol positions is obtained and the channel frequency-domain response at those positions is estimated using the least squares (LS) algorithm, the linear minimum mean square error (LMMSE) algorithm, or similar; the complete channel response over the two-dimensional time-frequency grid is then obtained by interpolation.
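As a concrete illustration of this pilot-based idea, the following toy sketch performs an LS estimate at the pilot subcarriers and linearly interpolates across the frequency axis. The 8-subcarrier layout, the pilot pattern, and the helper names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at pilot positions: H = Y / X."""
    return y_pilot / x_pilot

def interpolate_full(h_pilot, pilot_idx, n_sub):
    """Linearly interpolate pilot estimates over all subcarriers,
    treating real and imaginary parts separately."""
    k = np.arange(n_sub)
    re = np.interp(k, pilot_idx, h_pilot.real)
    im = np.interp(k, pilot_idx, h_pilot.imag)
    return re + 1j * im

# toy example: 8 subcarriers, pilots on subcarriers 0, 3, 6
true_h = np.exp(1j * 0.3 * np.arange(8))    # slowly varying channel
pilot_idx = np.array([0, 3, 6])
x_pilot = np.ones(3, dtype=complex)         # known pilot symbols
y_pilot = true_h[pilot_idx] * x_pilot       # noiseless received pilots
h_hat = interpolate_full(ls_estimate(y_pilot, x_pilot), pilot_idx, 8)
```

At the pilot positions the LS estimate is exact in the noiseless case; between pilots the linear interpolation only approximates the true response, which is one motivation for the neural refinement described below.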
However, conventional OFDM channel estimation methods generally assume that the channel impulse response remains unchanged over one OFDM symbol duration, which does not match the channel characteristics of a high-speed mobile environment. To maintain channel estimation accuracy in such an environment, a large number of pilots often need to be inserted, which reduces transmission efficiency; moreover, the receiving end must estimate the channel frequently, which increases power consumption.
In addition, conventional methods rely on an idealized channel model, whereas actual scenarios contain non-ideal factors, including nonlinearities, so the channel cannot be described by a strict mathematical model. A channel estimation method capable of improving channel estimation accuracy is therefore needed.
Disclosure of Invention
Embodiments of the present invention provide a channel estimation method, a neural network training method, a neural network training apparatus, and a device.
In a first aspect, the present application provides a channel estimation method, the method comprising:
The receiving end performs channel estimation and equalization according to a received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of the RE positions, where the received signal comprises the received signals at the respective RE positions;
The receiving end inputs the channel information at each of the plurality of RE positions into a first neural network to obtain a second channel estimation value at that RE position, where the channel information at an RE position comprises the received signal, the first channel estimation value, and the data pre-judgment symbol at that position, and the first neural network is configured to predict the channel estimation value at an RE position according to the input channel information.
In this method, an initial channel estimate is first obtained from the received signal and the local pilot signal, yielding the first channel estimation value and the data pre-judgment symbol at each RE position. The channel information at each RE position is then refined by the first neural network to obtain an optimized channel estimation value at that position. Because the received signal, the first channel estimation value, and the data pre-judgment symbol at the RE position are all considered when the first neural network optimizes the channel estimation value, the channel estimation accuracy can be improved.
With reference to the first aspect, in one possible implementation, the channel information at each RE position further includes the received signal and/or the data pre-judgment symbol at each of w REs adjacent to that RE, where the RE and the w REs correspond to the same OFDM symbol and w is a positive integer.
According to the method, in the process of channel optimization through the first neural network, the influence of the adjacent w subcarriers on the channel estimation value is fully considered, and especially in a high-speed motion scene, the channel estimation precision can be further improved.
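One way such a channel-information vector might be assembled is sketched below. The symmetric split of the w neighbours (assuming w even), the edge handling by clipping, and the real/imaginary flattening are all assumptions for illustration; the patent does not fix these details:

```python
import numpy as np

def build_re_features(y, h1, d, k, w):
    """Channel-information vector for subcarrier k of one OFDM symbol:
    the RE's own received sample y[k], first channel estimate h1[k],
    and data pre-judgment symbol d[k], plus the received samples and
    pre-judgment symbols of w neighbouring REs on the same symbol.
    Edge subcarriers reuse the boundary value; complex numbers are
    split into real/imaginary parts before entering the network."""
    offs = np.arange(1, w // 2 + 1)
    idx = np.clip(np.concatenate([k - offs, k + offs]), 0, len(y) - 1)
    vals = np.concatenate([[y[k], h1[k], d[k]], y[idx], d[idx]])
    return np.concatenate([vals.real, vals.imag])

# toy usage on an 8-subcarrier symbol with w = 2 neighbours
y = np.arange(8) + 0j            # toy received samples
h1 = np.ones(8, dtype=complex)   # toy first channel estimates
d = np.ones(8, dtype=complex)    # toy data pre-judgment symbols
feat = build_re_features(y, h1, d, k=3, w=2)   # 14-dimensional input
```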
With reference to the first aspect, in one possible implementation, the first neural network is trained by a plurality of first samples, where the first samples include channel information at one RE position estimated based on a sample received signal and a real channel value at the one RE position.
With reference to the first aspect, in one possible implementation manner, the first neural network is a fully connected neural network including one hidden layer.
According to the method, the first neural network comprising only one hidden layer can improve the calculation efficiency both in training and in an application process, so that a receiving end can quickly recover a received signal.
Optionally, the hidden layer adopts a rectified linear unit (ReLU) as its activation function, making the first neural network a nonlinear model and improving the accuracy of the second channel estimation value predicted by the model.
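The structure just described, one hidden layer with ReLU producing, say, the real and imaginary parts of the second channel estimation value, can be sketched in NumPy as follows. The layer sizes, initialization scale, and input dimension are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Fully connected network: one hidden layer, linear output."""
    return {
        "W1": rng.standard_normal((n_hidden, n_in)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_out, n_hidden)) * 0.1,
        "b2": np.zeros(n_out),
    }

def forward(p, x):
    h = np.maximum(0.0, p["W1"] @ x + p["b1"])  # ReLU hidden layer
    return p["W2"] @ h + p["b2"]                # real/imag of estimate

net = init_net(n_in=6, n_hidden=32, n_out=2)
x = rng.standard_normal(6)   # one RE's channel-information vector
out = forward(net, x)        # predicted (real, imag) channel value
```

The shallow depth keeps both training and inference cheap, consistent with the stated goal of letting the receiving end recover the received signal quickly.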
With reference to the first aspect, in one possible implementation, the method further includes:
The receiving end inputs a channel estimation matrix formed by second channel estimation values at the respective positions of the plurality of REs into a second neural network to obtain a third channel estimation value at the respective position of the plurality of REs; the second neural network is trained through a plurality of second samples, and the second samples comprise a channel estimation matrix formed by channel estimation values at respective positions of all REs obtained by estimating sample received signals and a real channel matrix formed by real channel values at respective positions of all REs.
According to the method, the second channel estimation value at the RE position output by the first neural network is further optimized through the second neural network, so that the accuracy of channel estimation is further improved.
With reference to the first aspect, in one possible implementation, the second neural network is a deep residual neural network, and the second neural network includes at least two convolutional layers, at least one active layer, and at least one adder, where the active layer is located between two adjacent convolutional layers of the at least two convolutional layers.
Because the second neural network adopts a deep residual structure, gradient explosion during training of the second neural network can be avoided, and a later convolution layer can refer to the output of an earlier convolution layer or of the input layer, which further improves the accuracy of the second neural network.
With reference to the first aspect, in one possible implementation manner, the at least two convolution layers include a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a fifth convolution layer that are sequentially arranged, the at least one active layer includes a first active layer and a second active layer, and the at least one adder includes a first adder and a second adder, where:
The first active layer is located between the second convolution layer and the third convolution layer;
The second active layer is located between the third convolution layer and the fourth convolution layer;
The first adder is located between the fourth convolution layer and the fifth convolution layer, and the inputs of the first adder are the input of the first convolution layer and the output of the fourth convolution layer;
The inputs of the second adder are the input of the second neural network and the output of the fifth convolution layer.
With a second neural network of this structure, the third channel estimation value can be estimated with high accuracy.
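The layer arrangement above can be sketched as a single-channel NumPy forward pass. Real implementations would use multi-channel convolutions with learned kernels; the naive 3×3 "same" convolution and the kernel shapes here are assumptions for illustration only:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 'same' 2-D convolution (cross-correlation)."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda t: np.maximum(0.0, t)

def residual_refine(h2, kernels):
    """Five conv layers with the skip pattern of the claim: ReLU after
    conv2 and conv3, adder1 sums conv1's input with conv4's output
    before conv5, adder2 sums the network input with conv5's output."""
    k1, k2, k3, k4, k5 = kernels
    x0 = h2                          # network input (second estimates)
    x1 = conv2d_same(x0, k1)         # conv1 (input of conv1 is x0)
    x2 = conv2d_same(x1, k2)         # conv2
    x3 = conv2d_same(relu(x2), k3)   # first active layer, then conv3
    x4 = conv2d_same(relu(x3), k4)   # second active layer, then conv4
    x5 = conv2d_same(x0 + x4, k5)    # first adder, then conv5
    return x0 + x5                   # second adder: global residual

# sanity demo: with all-zero kernels the network reduces to the identity
grid = np.arange(36.0).reshape(6, 6)
kernels0 = [np.zeros((3, 3))] * 5
refined = residual_refine(grid, kernels0)
```

The global skip connection means the network only has to learn a correction to the second channel estimates, which is the usual motivation for residual designs.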
With reference to the first aspect, in one possible implementation manner, the receiving end performs channel estimation and equalization according to the received signal and the pilot signal to obtain the first channel estimation value at each of the plurality of resource element (RE) positions and the data pre-judgment symbol at each of the RE positions; one implementation may be:
The receiving end performs channel estimation according to the received signal at the pilot positions and the local pilot signal to obtain first channel estimation values at the pilot positions, where the pilot positions occupy at least two RE positions;
The receiving end interpolates the first channel estimation values at the pilot positions to obtain first channel estimation values at the data positions, where the data positions occupy the RE positions other than the pilot positions;
The receiving end obtains the data pre-judgment symbol at each data position according to the received signal and the first channel estimation value at that data position.
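The last step can be illustrated as follows, assuming QPSK modulation and zero-forcing equalization; the patent fixes neither choice, so both are illustrative assumptions:

```python
import numpy as np

def pre_decide_qpsk(y_data, h1_data):
    """Zero-forcing equalization followed by a hard decision to the
    nearest unit-energy QPSK constellation point, giving the data
    pre-judgment symbols at the data positions."""
    eq = y_data / h1_data  # equalize with the first channel estimates
    return (np.sign(eq.real) + 1j * np.sign(eq.imag)) / np.sqrt(2)

# toy check: a clean QPSK symbol survives the round trip
h = np.array([1.0 + 1.0j, 2.0 - 0.5j])        # channel at two data REs
s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)  # transmitted symbols
d = pre_decide_qpsk(h * s, h)                  # recovered pre-decisions
```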
In a second aspect, the present application further provides a training method of a neural network, including:
The training device performs channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of the RE positions, where the sample received signal comprises the received signals at the respective RE positions;
The training device inputs the channel information at each of the plurality of RE positions into a first neural network to obtain a predicted channel estimation value at each RE position, where the channel information at an RE position comprises the received signal, the first channel estimation value, and the data pre-judgment symbol at that position;
The training device updates the parameters of the first neural network according to the loss between the predicted channel estimation value and the real channel value at each RE position.
The first neural network obtained by the training method can realize the optimization of the channel estimation value at one RE position, and the received signal, the first channel estimation value and the data pre-judgment symbol at the RE position are considered in the process of channel estimation value optimization, so that the channel estimation precision can be improved.
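A toy version of such a training step is shown below: manual backpropagation of a squared-error loss between the predicted and real channel values (as a real/imaginary pair) at one RE position. The layer sizes, learning rate, and single-sample loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 6)) * 0.1; b1 = np.zeros(16)
W2 = rng.standard_normal((2, 16)) * 0.1; b2 = np.zeros(2)

def step(x, h_true, lr=0.05):
    """One SGD step on the squared error between the network's
    predicted channel value and the real channel value."""
    global W1, b1, W2, b2
    z = W1 @ x + b1
    a = np.maximum(0.0, z)          # ReLU hidden layer
    pred = W2 @ a + b2
    err = pred - h_true             # gradient of 0.5 * sum(err**2)
    gW2 = np.outer(err, a); gb2 = err
    dz = (W2.T @ err) * (z > 0)     # backprop through the ReLU
    gW1 = np.outer(dz, x); gb1 = dz
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float(np.sum(err ** 2))

x = rng.standard_normal(6)          # one RE's channel information
h = np.array([0.7, -0.3])           # real/imag of the true channel
losses = [step(x, h) for _ in range(200)]
```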
With reference to the second aspect, in one possible implementation, the channel information at each RE position further includes the received signal and/or the data pre-judgment symbol at each of w REs adjacent to that RE, where the RE and the w REs correspond to the same orthogonal frequency division multiplexing (OFDM) symbol and w is a positive integer.
The first neural network obtained by the training method fully considers the influence of the adjacent w subcarriers on the channel estimation value in the process of channel optimization, and particularly can further improve the channel estimation precision in a high-speed motion scene.
With reference to the second aspect, in one possible implementation, the first neural network is a fully connected neural network including one hidden layer, and the hidden layer uses a rectified linear unit (ReLU) as its activation function.
According to the method, the first neural network comprising only one hidden layer can improve the calculation efficiency both in training and in an application process, so that a receiving end can quickly recover a received signal.
In a third aspect, the present application further provides a training method of a neural network, comprising:
The training device performs channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of the RE positions, where the sample received signal comprises the received signals at the respective RE positions;
The training device inputs the channel information at each of the plurality of RE positions into a first neural network to obtain a second channel estimation value at each RE position, where the channel information at an RE position comprises the received signal, the first channel estimation value, and the data pre-judgment symbol at that position, and the first neural network is configured to predict the channel estimation value at an RE position according to the input channel information;
The training device inputs a channel estimation matrix formed by the second channel estimation values at the respective RE positions into a second neural network to obtain a predicted channel estimation matrix;
The training device updates the parameters of the second neural network according to the loss between the predicted channel estimation matrix and the real channel matrix.
In this method, an initial channel estimate is first obtained from the received signal and the local pilot signal, yielding the first channel estimation value and the data pre-judgment symbol at each RE position. The channel information at each RE position is then refined by the first neural network to obtain an optimized channel estimation value at that position. Because the received signal, the first channel estimation value, and the data pre-judgment symbol at the RE position are all considered when the first neural network optimizes the channel estimation value, the channel estimation accuracy can be improved.
The second neural network obtained through training by the method further optimizes the second channel estimation value on the RE position output by the first neural network, so that the accuracy of channel estimation can be further improved.
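The loss in the last step of this method can be written, for example, as a mean squared error over the whole time-frequency grid; the patent names it only as "the loss between the predicted channel estimation matrix and the real channel matrix", so the MSE form is an assumption:

```python
import numpy as np

def channel_matrix_mse(h_pred, h_true):
    """Mean squared error between the predicted channel-estimate matrix
    and the real channel matrix, averaged over every RE of the grid
    (complex entries measured by squared modulus)."""
    return float(np.mean(np.abs(h_pred - h_true) ** 2))
```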
With reference to the third aspect, in one possible implementation, the channel information at each RE position further includes the received signal and/or the data pre-judgment symbol at each of w REs adjacent to that RE, where the RE and the w REs correspond to the same orthogonal frequency division multiplexing (OFDM) symbol and w is a positive integer.
According to the method, in the process of channel optimization through the first neural network, the influence of the adjacent w subcarriers on the channel estimation value is fully considered, and particularly in a high-speed motion scene, the estimation precision of the second channel estimation value can be further improved, and the estimation precision of the third channel estimation value is further improved.
With reference to the third aspect, in one possible implementation, the second neural network is a deep residual neural network, and the second neural network includes at least two convolutional layers, at least one active layer, and at least one adder, where the active layer is located between two adjacent convolutional layers of the at least two convolutional layers.
Because the second neural network adopts a deep residual structure, gradient explosion during training of the second neural network can be avoided, and a later convolution layer can refer to the output of an earlier convolution layer or of the input layer, which further improves the accuracy of the second neural network.
In a fourth aspect, the present application also provides a channel estimation apparatus, including:
The first channel estimation unit is configured to perform channel estimation and equalization according to a received signal and a local pilot signal to obtain a first channel estimation value and a data pre-judgment symbol at each of a plurality of resource element (RE) positions, where the received signal comprises the received signals at the respective RE positions;
And the second channel estimation unit is used for inputting the channel information at each RE position in the RE positions into a first neural network to obtain a second channel estimation value at each RE position, wherein the channel information at each RE position comprises a received signal at each RE position, a first channel estimation value and a data pre-judgment symbol, and the first neural network is used for predicting the channel estimation value at the RE position according to the input channel information at the RE position.
Optionally, the channel estimation apparatus may further include other functional units configured to implement the foregoing first aspect or any one of the possible implementations of the first aspect, where specific implementation and achieved beneficial effects of each functional unit in the apparatus may be described in connection with the foregoing first aspect or any one of the possible implementations of the first aspect, which are not described herein again.
In a fifth aspect, the present application further provides a training apparatus for a neural network, comprising:
The first channel estimation unit is configured to perform channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of the RE positions, where the sample received signal comprises the received signals at the respective RE positions;
A second channel estimation unit, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a predicted channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol;
And the updating unit is used for updating the parameters of the first neural network according to the loss between the predicted channel estimation value and the real channel value at each RE position.
Optionally, the training device of the neural network may further include other functional units for implementing the second aspect or any possible implementation of the second aspect, where specific implementation and achieved beneficial effects of each functional unit in the device may be referred to as related description in the second aspect or any possible implementation of the second aspect, and are not repeated herein.
In a sixth aspect, the present application further provides a training apparatus for a neural network, comprising:
The first channel estimation unit is configured to perform channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each of a plurality of resource element (RE) positions and a data pre-judgment symbol at each of the RE positions, where the sample received signal comprises the received signals at the respective RE positions;
A second channel estimation unit, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a second channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol, and the first neural network is configured to predict the channel estimation value at the RE location according to the input channel information at the RE location;
a third channel estimation unit, configured to input a channel estimation matrix formed by the second channel estimation values at the respective positions of the plurality of REs into a second neural network to obtain a predicted channel estimation matrix;
And the updating unit is configured to update the parameters of the second neural network according to the loss between the predicted channel estimation matrix and the real channel matrix.
Optionally, the training device of the neural network may further include other functional units for implementing the above third aspect or any possible implementation of the third aspect, where specific implementation and achieved beneficial effects of each functional unit in the device may be described in relation to the above third aspect or any possible implementation of the third aspect, which is not described herein.
In a seventh aspect, the present application further provides an electronic device, including: one or more processors, one or more memories, a communication interface; the one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to implement the method as described in the first aspect or any of the first aspects.
In an eighth aspect, the present application further provides an electronic device, including: one or more processors and one or more memories; the one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to implement the method as described in the second aspect or any of the second aspects.
In a ninth aspect, the present application further provides an electronic device, including: one or more processors and one or more memories; the one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to implement the method as described in the third aspect or any one of the possible implementations of the third aspect.
In a tenth aspect, the application also provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method as described in the first aspect or any one of the possible implementations of the first aspect.
In an eleventh aspect, the present application also provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method as described in the second aspect or any one of the possible implementations of the second aspect.
In a twelfth aspect, the application also provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method as described in the third aspect or any one of the possible implementations of the third aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a wireless communication system provided in an embodiment of the present application;
Fig. 2A is a schematic diagram of a channel estimation method provided by an embodiment of the present application;
Fig. 2B is a schematic diagram of the two-dimensional time-frequency grid of a channel frequency-domain response provided by an embodiment of the present application;
Fig. 3A is a schematic flowchart of a neural network training method according to an embodiment of the present application;
Fig. 3B is a schematic structural diagram of a first neural network according to an embodiment of the present application;
Fig. 4A is a schematic flowchart of a neural network training method according to an embodiment of the present application;
Fig. 4B is a schematic structural diagram of a second neural network according to an embodiment of the present application;
Fig. 5A is a schematic flowchart of a channel estimation method according to an embodiment of the present application;
Fig. 5B is a schematic diagram of a channel estimation method provided by an embodiment of the present application;
Fig. 5C is a schematic flowchart of a channel estimation method according to an embodiment of the present application;
Fig. 5D is a schematic diagram of a channel estimation method provided by an embodiment of the present application;
Figs. 6A to 6C are comparisons of simulation results of various channel estimation methods according to embodiments of the present application;
Fig. 6D is a schematic diagram of a pilot insertion method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a channel estimation apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a neural network training apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another neural network training apparatus according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
Some key terms related to the embodiments of the present application will be described first.
(1) Resource Element (RE)
An RE, also referred to as a resource element, is the smallest physical resource; one RE corresponds to one subcarrier on one OFDM symbol. In the embodiments of the present application, an RE position is identified by its corresponding OFDM symbol and subcarrier.
(2) Resource Block (RB)
Twelve consecutive subcarriers in the frequency domain over one slot in the time domain are called one RB.
(3) Artificial intelligence (ARTIFICIAL INTELLIGENCE, AI)
AI has attracted the attention of researchers in the field of wireless communications due to its successful application in fields such as natural language processing and computer vision. Within AI, deep learning is an important subfield with many excellent characteristics. First, deep neural networks are regarded as universal function approximators: they can effectively approximate and fit arbitrary functions, and extract and process implicit feature relations. Second, deep learning excels at processing large amounts of data; its distributed and parallel computing architectures guarantee computing speed and processing capacity. In addition, the large number of framework libraries, including TensorFlow, Theano, and Caffe, allows deep learning to be readily applied in a variety of fields. These advantages of deep learning provide a new opportunity to address the challenges faced by communication systems by breaking the inherent limitations of traditional communication theory. In recent years, researchers have applied deep learning to wireless communication physical layer technologies such as modulation scheme recognition, signal demodulation, and channel decoding. Compared with traditional methods, deep learning based schemes can remain robust even in certain extreme environments.
Therefore, the embodiment of the application provides a channel estimation method, which can apply the deep neural network to OFDM channel estimation, thereby breaking through the inherent limit of the traditional channel estimation method, improving the channel estimation precision, saving the pilot frequency overhead and improving the system performance.
As shown in fig. 1, which is a schematic explanatory diagram of a wireless communication system according to an embodiment of the present application, the wireless communication system includes, but is not limited to, systems that transmit data using orthogonal frequency division multiplexing (OFDM), such as a long term evolution (LTE) system, a future-evolved fifth generation (5G) mobile communication system, and a New Radio (NR) system. The system may include: one or more transmitting ends 11, one or more receiving ends 12, a core network (not shown in fig. 1), and the like.
In fig. 1, the transmitting end 11 may be a network access device, the receiving end 12 is illustrated as a user terminal, and in another scenario, the transmitting end 11 may be a user terminal, and the receiving end 12 may be a network access device.
The network access device may be a base station that may be used to communicate with one or more user terminals. The base station may be an evolved NodeB (eNB or eNodeB) in an LTE system, or a next-generation NodeB (gNB) in a 5G system, a New Radio (NR) system, or the like. A base station may also be an access point (AP), a transmission reception point (TRP), a central unit (CU), or another network entity, and may include some or all of the functionality of the above network entities.
A user terminal, also referred to as user equipment (UE), may include a mobile phone, a tablet computer, a smartwatch, a desktop computer, an in-vehicle terminal, a router, a mobile station (MS), a personal digital assistant (PDA), a handset, a laptop computer, and the like.
When the base station transmits downlink data to the terminal, the downlink data can be encoded by channel coding, and the channel-coded downlink data is modulated and then transmitted to the terminal. When the terminal transmits uplink data to the base station, the uplink data can likewise be encoded by channel coding, and the encoded data is modulated and then transmitted to the base station. When receiving data, the receiving device, which may be a terminal or a base station, needs to perform channel estimation at the receiving end. In general, a base station is often used to transmit downlink data to a terminal, and the terminal may perform channel estimation on the downlink data.
A transmitting end 11, configured to transmit a signal to a receiving end 12, where a local pilot signal is inserted during the signal transmission process. It should be understood that the local pilot signal is a signal known to both the transmitting end 11 and the receiving end 12, and may be used for channel estimation to obtain the first channel estimation value described in the present application.
The receiving end 12 receives the signal sent by the transmitting end 11. The received signal includes the signal on each subcarrier of each OFDM symbol; since one subcarrier on one OFDM symbol corresponds to one RE, the received signal includes the received signals at all RE positions. Taking a received signal spanning M OFDM symbols and N subcarriers as an example, the received signal includes the signals at M×N RE positions. Further, the receiving end 12 may perform two or three rounds of channel estimation based on the received signal.
In the first channel estimation, based on the received pilot signal, a prior-art channel estimation method for OFDM is used to obtain the channel frequency domain response at each of the M×N RE positions, that is, the first channel estimation value at each of the M×N RE positions. Then, based on the first channel estimation values at the M×N RE positions and the received signal, a pre-judgment value of the data symbol at each of the M×N RE positions, also called a data pre-judgment symbol, is obtained.
In the second channel estimation, the first channel estimation value at the RE position to be estimated, the received signal at that RE position, and the data pre-judgment symbol at that RE position may be input to the first neural network to obtain the second channel estimation value at the RE position to be estimated. In other embodiments, the influence of multiple REs adjacent to the RE to be estimated on its channel estimation may also be considered; in this case, the first channel estimation value at the RE to be estimated, the received signal and data pre-judgment symbol at the RE to be estimated, and the received signals and/or data pre-judgment symbols at the adjacent REs are input to the first neural network to obtain the second channel estimation value at the RE position to be estimated. It should be noted that in these two implementations the model of the first neural network is different and the samples used for training are different; for details, reference may be made to the training method of the neural network shown in fig. 3A and the channel estimation method shown in fig. 5A or fig. 5B.
The second channel estimation value may be used as a final channel estimation value for recovering data in the received data signal.
Optionally, a third channel estimation may be performed. In the third channel estimation, the channel estimation matrix formed by the second channel estimation values at the M×N RE positions is input to the second neural network to obtain the third channel estimation value at each of the M×N RE positions, where the third channel estimation value is the final channel estimation value. The second neural network may be a residual neural network, a convolutional neural network, a deep neural network, or the like. Regarding the second neural network, reference may be made to the description of the training method of the neural network shown in fig. 4A and the channel estimation method shown in fig. 5A or 5B below.
The training device 13 is configured to train the first neural network and the second neural network, and send the first neural network and the second neural network obtained by training to the receiving end 12, so that the receiving end 12 can perform channel estimation using the first neural network and the second neural network. For a specific method of training the first neural network and the second neural network, reference may be made to the following description of the training method embodiment of the neural network shown in fig. 3A and fig. 4A.
In some embodiments, the training device 13 may collect the signal sent by the sending end 11 and the signal received by the receiving end 12, and generate a training sample based on the collected signal, where a method for generating the training sample may be described in association with the training method shown in fig. 3A and fig. 4A.
It should be understood that the training device for training the first neural network and the training device for training the second neural network may be the same device or different devices, may be a server, and may also be the receiving end 12 or the transmitting end 11.
A method for channel estimation according to the present application is described below with reference to fig. 2A, and the method for channel estimation may be applied to the first channel estimation and may be performed by the transmitting end 11, the receiving end 12, or the training device 13 in the system shown in fig. 1. The method may include, but is not limited to, the steps of:
S21: and estimating the channel according to the received signals at the pilot frequency positions to obtain first channel estimated values at the respective positions of the M.N REs.
The target signal occupies M×N REs; that is, it includes the received signals at each of the M×N RE positions, where one RE corresponds to one subcarrier on one OFDM symbol, and there are M OFDM symbols and N subcarriers. It should be appreciated that in a particular application, the target signal may be a signal received by the receiving end, a sample received signal, or the like. From another perspective, the M×N REs may be divided into pilot positions and data positions, and the target signal includes the received signal at the pilot positions and the received signal at the data positions.
The pilot position refers to the REs occupied by the pilot signal; the pilot of one user may occupy one or more REs. The received signal at the pilot position, also called the pilot signal, is the data at the pilot positions in the target signal, also called pilot data. The data position refers to the REs occupied by the transmitted data; the received signal at the data position, also called the data signal, is the data at the data positions in the target signal.
It should be understood that the first channel estimation value at the RE position corresponding to the kth subcarrier on the mth OFDM symbol is the channel frequency domain response at that RE position, where m is the index of the OFDM symbol and k is the index of the subcarrier; m and k are positive integers, m ≤ M, and k ≤ N.
Specifically, the transmitting end can insert a known local pilot signal when transmitting the signal, and the receiving end can recover the channel frequency domain response at the pilot positions based on the pilot signal in the received signal and the local pilot signal. The channel frequency domain responses at all RE positions, that is, the first channel estimation values at each of the M×N RE positions, are then obtained by interpolation. The interpolation may be linear interpolation, Gaussian interpolation, cubic interpolation, or the like, which is not limited herein. Linear interpolation estimates the channel frequency domain response at the data positions between pilot positions from the channel frequency domain responses of the two adjacent pilot positions; this method assumes that the channel frequency domain response of each data subchannel between adjacent pilot positions changes linearly.
For example, a least squares (LS) or linear minimum mean square error (LMMSE) algorithm may be used to obtain the channel frequency domain response at the pilot positions, that is, the first channel estimation values at the pilot positions, and the complete channel frequency domain response on the two-dimensional time-frequency grid is then obtained by linear interpolation based on the channel frequency domain responses at the pilot positions. As shown in fig. 2B, the entire system resource consists of a grid (also referred to as a two-dimensional time-frequency grid) divided in the frequency domain and the time domain, where one cell of the grid represents one RE, and one RE consists of one subcarrier in the frequency domain and one OFDM symbol in the time domain. The matrix formed by the first channel estimation values of the M×N REs is referred to as the first channel estimation matrix. As shown in fig. 2B, H_{m,k} represents the channel frequency domain response of the kth subcarrier on the mth OFDM symbol, that is, the first channel estimation value at the corresponding RE position.
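As a concrete illustration of the LS estimation and linear interpolation described above, the following numpy sketch estimates the channel at the pilot REs of a single OFDM symbol and linearly interpolates the remaining subcarriers. The pilot layout, pilot values, and received samples are hypothetical, chosen only for illustration:

```python
import numpy as np

def ls_pilot_estimate(y_pilot, x_pilot):
    """LS channel estimate at the pilot REs: H_p = Y_p / X_p."""
    return y_pilot / x_pilot

def linear_interpolate(h_pilot, pilot_idx, n_subcarriers):
    """Linearly interpolate the frequency-domain response between pilot REs."""
    k = np.arange(n_subcarriers)
    # np.interp handles real data, so the complex response is split into parts
    h_re = np.interp(k, pilot_idx, h_pilot.real)
    h_im = np.interp(k, pilot_idx, h_pilot.imag)
    return h_re + 1j * h_im

# Toy example: 8 subcarriers, pilots on subcarriers 0, 4, 7 (hypothetical layout)
pilot_idx = np.array([0, 4, 7])
x_pilot = np.array([1 + 0j, 1 + 0j, 1 + 0j])      # known local pilot symbols
y_pilot = np.array([0.8 + 0.1j, 0.6 - 0.2j, 0.5 + 0.3j])
h_pilot = ls_pilot_estimate(y_pilot, x_pilot)
h_full = linear_interpolate(h_pilot, pilot_idx, 8)  # first channel estimates
```

Gaussian or cubic interpolation would replace `np.interp` with the corresponding interpolator; the linear form above matches the assumption stated in the text that the response changes linearly between adjacent pilots.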
It should be understood that the transmitting end inserts a local pilot signal in the transmitted signal, and the local pilot signal is known to the receiving end.
S22: and obtaining the data pre-judgment symbol in the data position through a channel equalization algorithm according to the received signal in the data position and the first channel estimation value of the data position. The data pre-judgment symbol at the RE position is the pre-judgment value of the data symbol carried at the RE position.
The data symbols are the signals carried by the REs other than those occupied by pilots among the M×N REs.
A channel equalization algorithm refers to processing the received signal with the channel estimation values after channel estimation is completed, so as to remove the adverse effects of the channel on the transmitted signal as much as possible and recover the original transmitted signal; it may include a matched filter algorithm, a zero-forcing algorithm, a least mean square algorithm, and the like. The data pre-judgment symbols can be obtained by a single-tap equalization algorithm followed by a hard decision.
For OFDM systems, a simple frequency-domain single-tap equalization algorithm may be employed. Since the transmitted symbols are modulated according to a specified constellation, there are only a limited number of possible symbol values. Therefore, after the equalization operation is completed, a hard decision can be made on the equalized result, and the constellation point with the smallest Euclidean distance to the result is taken as the data pre-judgment symbol.
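The single-tap equalization and minimum-Euclidean-distance hard decision described above can be sketched as follows; QPSK is an illustrative constellation choice, since the text only requires "a specified constellation":

```python
import numpy as np

# Unit-energy QPSK constellation -- an illustrative choice of constellation
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def equalize_and_decide(y, h_est):
    """Frequency-domain single-tap equalization followed by a hard decision."""
    x_eq = y / h_est                      # single-tap (zero-forcing) equalization
    # hard decision: the constellation point nearest in Euclidean distance
    idx = np.argmin(np.abs(x_eq[:, None] - QPSK[None, :]), axis=1)
    return QPSK[idx]

# Two data REs with a hypothetical flat channel
y = np.array([0.9 + 0.8j, -0.7 + 0.75j])
h = np.array([1.0 + 0j, 1.0 + 0j])
x_hat = equalize_and_decide(y, h)         # data pre-judgment symbols
```

The returned `x_hat` plays the role of the data pre-judgment symbols fed to the first neural network in the later embodiments.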
It should be appreciated that in various embodiments of the present application, the pilot signal, the data pre-judgment symbols, and the channel estimation values (the first, second, and third channel estimation values) each include a real part and an imaginary part.
It should be noted that, the channel estimation method shown in fig. 2A is a prior art, and specific reference may be made to related descriptions in the prior art, which are not repeated here.
The training methods of the two neural networks provided in the embodiments of the present application are described below with reference to fig. 3A and fig. 4A. Each method may be implemented by the training device in fig. 1, and the training devices implementing the two methods may be the same device or different devices. The two training methods are as follows:
(one): training method of first neural network
S31: a first neural network is constructed.
The first neural network may be a deep neural network, configured to estimate the second channel estimation value at the position of the RE to be estimated. The RE to be estimated is one of the M×N REs; the embodiment of the present application is illustrated by taking the RE corresponding to the kth subcarrier on the mth OFDM symbol as the RE to be estimated.
Structurally, the first neural network includes an input layer, one or more hidden layers, and an output layer. The number of neurons in the input layer is determined by the input data of the model, and the output of the output layer is the second channel estimation value of the subcarrier to be estimated, which includes a real part and an imaginary part. The number of neurons in the hidden layers, the sizes and numbers of convolution kernels, and the like may be preset, or may be determined in an automatic machine learning process. The input data of the first neural network may include the received signal Y_{m,k} at the RE position to be estimated, the data pre-judgment symbol X̂_{m,k} at the RE position to be estimated, and the first channel estimation value Ĥ_{m,k} at the RE position to be estimated. The output of the first neural network is the second channel estimation value at the RE position to be estimated, that is, the real part and the imaginary part of the improved channel estimation value at that RE position.
Optionally, the input data further includes the received signals at the positions of multiple REs adjacent to the RE to be estimated, the data pre-judgment symbols at those positions, and the like. The adjacent REs are the REs closest in frequency to the RE to be estimated, for example, in frequency order, the q REs before and after the RE to be estimated, that is, the REs corresponding to the (k-q)th, (k-q+1)th, …, (k-1)th, (k+1)th, (k+2)th, …, (k+q)th subcarriers on the mth OFDM symbol, where q may be 1, 2, 3, 4, 5, or another positive integer less than half the total number of subcarriers, which is not limited herein. In this case, the received signals at the positions of the multiple REs adjacent to the RE to be estimated may be expressed as {Y_{m,k-q}, Y_{m,k-q+1}, …, Y_{m,k-1}, Y_{m,k+1}, Y_{m,k+2}, …, Y_{m,k+q}}, and the data pre-judgment symbols at the adjacent RE positions may be expressed as {X̂_{m,k-q}, X̂_{m,k-q+1}, …, X̂_{m,k-1}, X̂_{m,k+1}, X̂_{m,k+2}, …, X̂_{m,k+q}}.
The embodiment of the present application is illustrated by taking q = 2 as an example, that is, the influence of the 4 subcarriers closest in frequency on the first channel estimation value at the RE position to be estimated is considered in the estimation process. In this case, the received signals at the positions of the multiple REs adjacent to the RE to be estimated may be expressed as {Y_{m,k-2}, Y_{m,k-1}, Y_{m,k+1}, Y_{m,k+2}}, and the data pre-judgment symbols at the adjacent RE positions may be expressed as {X̂_{m,k-2}, X̂_{m,k-1}, X̂_{m,k+1}, X̂_{m,k+2}}.
As shown in fig. 3B, the first neural network is a fully connected neural network with only one hidden layer, and the hidden layer uses the rectified linear unit (ReLU) function as its activation function. A neural network with only one hidden layer has a simple structure and a high computation speed, which can reduce the duration of the channel estimation when the model is applied.
The received signals and the data pre-judgment symbols each include a real part and an imaginary part. In a specific implementation, the received signals and data pre-judgment symbols at the RE to be estimated and the adjacent RE positions, together with the first channel estimation value at the RE position to be estimated, are separated into real and imaginary parts and combined into a one-dimensional vector as the input data of the first neural network. The input data when q is 2 can be expressed as: [Re{Y_{m,k-2}}, …, Re{Y_{m,k+2}}, Re{X̂_{m,k-2}}, …, Re{X̂_{m,k+2}}, Re{Ĥ_{m,k}}, Im{Y_{m,k-2}}, …, Im{Y_{m,k+2}}, Im{X̂_{m,k-2}}, …, Im{X̂_{m,k+2}}, Im{Ĥ_{m,k}}], where Re{·} represents the real part and Im{·} represents the imaginary part.
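The assembly of the one-dimensional input vector can be sketched in numpy as below; the ordering of the concatenation and the matrix layout are illustrative assumptions, and the helper name `build_prednn_input` is not from the original:

```python
import numpy as np

def build_prednn_input(Y, X_hat, H_ls, m, k, q=2):
    """Assemble the input vector for the RE at (OFDM symbol m, subcarrier k).

    Y     : M x N received-signal matrix
    X_hat : M x N data pre-judgment symbol matrix
    H_ls  : M x N first channel estimation matrix
    Concatenates Y and X_hat over subcarriers k-q..k+q plus H_ls at (m, k),
    then splits every complex value into its real and imaginary parts.
    """
    ks = np.arange(k - q, k + q + 1)
    vals = np.concatenate([Y[m, ks], X_hat[m, ks], [H_ls[m, k]]])
    return np.concatenate([vals.real, vals.imag])

# Hypothetical 2-symbol x 12-subcarrier grid filled with random data
M, N = 2, 12
rng = np.random.default_rng(0)
Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X_hat = np.sign(rng.standard_normal((M, N))) + 1j * np.sign(rng.standard_normal((M, N)))
H_ls = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x_in = build_prednn_input(Y, X_hat, H_ls, m=1, k=5)  # 2*(5+5+1) = 22 features
```

With q = 2 the vector has 2·(2q+1) + 2·(2q+1) + 2 = 22 entries, which fixes the width of the input layer.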
S32: the first neural network is trained. One implementation of S32 may include, but is not limited to, the following steps:
s321: and acquiring a sample receiving signal and a real channel value of a target channel.
The sample received signal may be generated as follows: a target channel and transmission symbols are first generated, and the sample received signal is generated based on the target channel and the transmission symbols. The real channel value of the target channel, that is, the real channel frequency domain response, is known. The transmission symbols include data symbols and pilot symbols, and the data symbols are modulated onto the subcarriers to form the data signal.
It should be understood that the symbols are the data that the transmitting end modulates onto the subcarriers when transmitting a signal.
S322: and estimating the channel according to the pilot frequency signals in the sample receiving signals to obtain first channel estimated values at the respective positions of all REs.
S323: and obtaining the data pre-judgment symbol at each RE position through a channel equalization algorithm according to the data signal and the first channel estimation value of the data position in the sample receiving signal.
It should be understood that, like the target signal described in fig. 2A above, the sample received signal includes the received signals on all REs, where the REs comprise the REs at the data positions and the REs at the pilot positions. The data signal is the received signal at the data positions in the sample received signal, and the pilot signal is the received signal at the pilot positions in the sample received signal.
For the specific implementation of S322 and S323, reference may be made to the descriptions of steps S21 and S22 in the channel estimation method shown in fig. 2A, which are not repeated here.
S324: the channel information at one RE position in the sample receiving signal is used as the input data of the first sample, and the real channel value at the RE position is used as the label of the first sample. The channel information at the RE position includes the received signal at the RE position, the first channel estimation value, and the data pre-judgment symbol as the input data of the first sample. Optionally, the channel information at the RE locations further includes received signals and data pre-determined symbols at a plurality of RE locations adjacent to the RE. For a plurality of REs adjacent to the RE, reference may be made to the description in the signal estimation algorithm shown in fig. 2A, and the description is omitted here.
S325: the first neural network is trained through a plurality of first samples.
Specifically, the input data of a first sample (the channel information at an RE position) may be input to the first neural network to obtain a channel estimation prediction value at that RE position; the parameters of the first neural network are then updated according to the error between the channel estimation prediction value and the real channel value. The training process makes this error smaller and smaller until the accuracy of the trained first neural network meets the requirement.
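The training step above can be illustrated with a minimal numpy forward pass of a single-hidden-layer ReLU network and the mean-square error that drives the parameter update; the hidden-layer width of 64 and the random initialization are illustrative assumptions:

```python
import numpy as np

def prednn_forward(x, W1, b1, W2, b2):
    """Forward pass of the single-hidden-layer fully connected network:
    ReLU activation on the hidden layer, linear output layer producing
    the real and imaginary parts of the improved channel estimate."""
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    return h @ W2 + b2                    # 2 outputs: Re and Im

rng = np.random.default_rng(1)
n_in, n_hidden = 22, 64                   # 22 matches the q=2 input; 64 is illustrative
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 2)) * 0.1
b2 = np.zeros(2)

x = rng.standard_normal(n_in)             # one first-sample input vector
y_true = np.array([0.5, -0.3])            # label: Re/Im of the real channel value
y_pred = prednn_forward(x, W1, b1, W2, b2)
mse = np.mean((y_pred - y_true) ** 2)     # the error driving the parameter update
```

In an actual implementation the gradients of `mse` with respect to `W1`, `b1`, `W2`, `b2` would be back-propagated and the parameters updated iteratively.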
It should be understood that the embodiment of the present application is described by taking the first neural network including one hidden layer as an example. In other embodiments, the first neural network may include more or fewer hidden layers, and the sizes and numbers of convolution kernels may differ, which is not limited herein.
The trained first neural network is used to estimate the second channel estimation value at the RE position to be estimated. In some embodiments, the output second channel estimation value at the RE position to be estimated can be used as the final channel estimation value for recovering the data signal. In other embodiments, the first neural network, also referred to as a data-aided preprocessing network (or PreDNN model), does not output the final channel estimation values; instead, the channel estimation matrix formed by the second channel estimation values at the RE positions is input to the second neural network for further channel estimation. For the second neural network, reference may be made to the description in part (II) below, which is not repeated here.
(II): training method of second neural network
S41: a second neural network is constructed.
The second neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a deep residual network (ResNet), or the like. The input data of the second neural network are the second channel estimation values of all subcarriers, spliced into a two-dimensional channel response matrix according to the OFDM symbols and subcarriers at which they are located; this matrix is called the second channel estimation matrix. The output of the second neural network is the third channel estimation matrix, that is, an improved estimate of the second channel estimation matrix.
The embodiment of the present application is illustrated by taking a deep residual network as an example. Structurally, the second neural network includes at least one convolutional layer, an excitation layer, and at least one adder. The convolution kernel sizes, the numbers of convolution kernels, and the positions and numbers of the adders in the second neural network may be preset values, or may be determined by an automatic machine learning method.
For example, as shown in fig. 4B, the second neural network, also referred to as CasResNet, includes 5 convolutional layers, 2 excitation layers, and 2 adders. The 2 excitation layers are located between the second convolutional layer (convolutional layer 2) and the third convolutional layer (convolutional layer 3), and between the third convolutional layer (convolutional layer 3) and the fourth convolutional layer (convolutional layer 4). The inputs of the first adder (adder 1) are the output of convolutional layer 1 and the output of convolutional layer 4, and the inputs of the second adder (adder 2) are the output of the input layer (not shown, i.e., the input of convolutional layer 1) and the output of convolutional layer 5. Specifically, the input data of the second neural network (the second channel estimation matrix) is input to convolutional layer 1, whose convolution kernels are of size 5*5 and number 8. The output of convolutional layer 1 is the input of convolutional layer 2, whose convolution kernels are of size 3*3 and number 8. The output of convolutional layer 2 passes through excitation layer 1 into convolutional layer 3, whose convolution kernels are of size 3*3 and number 8. The output of convolutional layer 3 passes through excitation layer 2 into convolutional layer 4, whose convolution kernels are of size 3*3 and number 8. The output of convolutional layer 4 and the output of convolutional layer 1 are both input to adder 1. The output of adder 1 is the input of convolutional layer 5, whose convolution kernels are of size 5*5 and number 2. The output of convolutional layer 5 and the input data of the second neural network (the second channel estimation matrix) are both input to adder 2, and the output of adder 2 is the third channel estimation matrix.
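The layer and adder wiring described above can be sketched in numpy as follows. This is a single-channel simplification: the actual network uses 8 feature maps per convolutional layer, whereas here each "layer" is one 2-D kernel, so only the skip-connection structure (adder 1 and adder 2) is faithful:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D convolution with zero ('same') padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def casresnet_sketch(H2, kernels):
    """Skip-connection structure of the 5-conv residual network:
    adder 1 sums the conv-1 and conv-4 outputs; adder 2 sums the
    network input and the conv-5 output (the global residual path)."""
    k1, k2, k3, k4, k5 = kernels
    c1 = conv2d_same(H2, k1)                       # conv layer 1 (5*5)
    c2 = np.maximum(0.0, conv2d_same(c1, k2))      # conv 2 + ReLU excitation
    c3 = np.maximum(0.0, conv2d_same(c2, k3))      # conv 3 + ReLU excitation
    c4 = conv2d_same(c3, k4)                       # conv layer 4 (3*3)
    a1 = c1 + c4                                   # adder 1
    c5 = conv2d_same(a1, k5)                       # conv layer 5 (5*5)
    return H2 + c5                                 # adder 2: third estimate

rng = np.random.default_rng(2)
H2 = rng.standard_normal((14, 12))                 # one slot x one RB, illustrative
kernels = [rng.standard_normal((5, 5)) * 0.01,
           rng.standard_normal((3, 3)) * 0.01,
           rng.standard_normal((3, 3)) * 0.01,
           rng.standard_normal((3, 3)) * 0.01,
           rng.standard_normal((5, 5)) * 0.01]
H3 = casresnet_sketch(H2, kernels)
```

The global residual connection of adder 2 means the network only has to learn a correction to the second channel estimation matrix rather than the matrix itself, which is the usual motivation for residual structures.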
S42: the second neural network is trained. One implementation of S42 may include, but is not limited to, the following steps:
S421: and acquiring initial channel estimation values of all RE positions obtained by channel estimation of the sample receiving signals and real channel values of all RE positions. Wherein, the initial channel estimation value at each RE position forms an initial channel estimation matrix, and the real channel estimation value at each RE position forms a real channel matrix.
The initial channel estimation matrix may be the first channel estimation matrix obtained from the sample received signal by the channel estimation method shown in fig. 2A. It may also be the second channel estimation matrix obtained, based on the first channel estimation matrix, by the first neural network trained as shown in fig. 3A, where the first channel estimation value at each RE position in the first channel estimation matrix is input to the first neural network to obtain the second channel estimation value at that RE position. For specific implementations, reference may be made to the embodiments shown in fig. 2A and fig. 3A, which are not repeated here.
S422: the initial channel estimation matrix is used as input data of a second sample, and the real channel matrix is used as a label of the second sample.
S423: the second neural network is trained by a plurality of second samples.
Specifically, the initial channel estimation matrix in a second sample may be input to the second neural network to obtain a predicted channel estimation matrix; the parameters of the second neural network are then updated according to the error between the predicted channel estimation matrix and the real channel matrix. The training process makes this error smaller and smaller until the accuracy of the trained second neural network meets the requirement.
In the application of the second neural network, the trained second neural network is used to improve the input channel estimation matrix, and the output channel estimation matrix can be used as the final channel estimation values for recovering the data signal. For specific applications, reference may be made to the embodiments of the channel estimation methods shown in fig. 5A or fig. 5B described below.
The training is not limited to the methods of the neural networks shown in fig. 3A and fig. 4A; in another embodiment of the present application, the first neural network and the second neural network may also be trained jointly.
Optionally, the training methods of the neural networks shown in fig. 3A and fig. 4A may be implemented on an end-to-end open-source machine learning platform, for example, the TensorFlow platform. In the network training process, an Adam optimizer may be adopted with a learning rate of 0.001, and the loss function may be the mean square error of the channel estimation values. Parameters such as the weights and biases of the first neural network or the second neural network are iteratively optimized and finally determined by the back propagation algorithm, and the trained model can be used for online computation in the application stage.
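For reference, the Adam update applied to one parameter tensor can be sketched as below (a real training run would use the platform's built-in optimizer); the learning rate of 0.001 matches the text, while the remaining hyperparameters are Adam's common defaults, an assumption not stated in the original:

```python
import numpy as np

def adam_step(w, g, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a parameter tensor w given its gradient g.
    state holds the first/second moment estimates and the step counter."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g           # first moment
    state["v"] = b2 * state["v"] + (1 - b2) * g ** 2      # second moment
    m_hat = state["m"] / (1 - b1 ** state["t"])           # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# A toy 2-parameter weight vector and a hypothetical MSE-loss gradient
w = np.array([0.5, -0.2])
state = {"m": np.zeros_like(w), "v": np.zeros_like(w), "t": 0}
g = np.array([0.1, -0.4])
w_new = adam_step(w, g, state)
```

On the very first step the bias-corrected update reduces to roughly lr·sign(g) per coordinate, so each weight moves by about 0.001 toward lower loss.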
Two further channel estimation methods according to embodiments of the present application are described below with reference to the flow charts shown in fig. 5A and fig. 5C and the explanatory diagrams shown in fig. 5B and fig. 5D. These methods may be implemented by the receiving end in the system shown in fig. 1 and may include, but are not limited to, some or all of the following steps:
S51: Perform channel estimation according to the received signal and the local pilot signal to obtain the first channel estimation values at the respective positions of the M×N REs.
Specifically, the application takes as an example a received signal that includes the signal on each subcarrier of each OFDM symbol, with M OFDM symbols and N mutually orthogonal subcarriers per OFDM symbol. Since one subcarrier on one OFDM symbol corresponds to one RE, the received signal includes the received signals at M×N RE positions. From another perspective, the received signal includes a pilot signal and a data signal: the pilot signal is the received signal at a pilot position, where a pilot position is an RE occupied by a pilot and one pilot occupies one RE; the data signal is the received signal at a data position, where a data position is an RE occupied by the data signal. The first channel estimation values at the M×N RE positions include the channel frequency domain response of each subcarrier on each OFDM symbol, that is, M×N channel frequency domain responses. For the calculation of the first channel estimation value, reference may be made to the channel estimation method shown in fig. 2A, which is not described herein again.
It should be understood that the local pilot signal is a signal known to both the transmitting end and the receiving end and can be used for channel estimation.
It should be appreciated that the first channel estimation values at the M×N RE positions may form a first channel estimation matrix H^(1) on the two-dimensional time-frequency grid, which can be expressed as:

H^(1) = [H^(1)_{i,j}], 1 ≤ i ≤ M, 1 ≤ j ≤ N

where the element H^(1)_{i,j} in the ith row and jth column of the matrix H^(1) represents the first channel estimation value at the RE position corresponding to the jth subcarrier on the ith OFDM symbol, i is a positive integer not greater than M, and j is a positive integer not greater than N.
S52: Obtain the data pre-judgment symbol at each data position through a channel equalization algorithm according to the received signal at the data position and the first channel estimation value at the data position.
The data pre-judgment symbols at the M×N RE positions form a matrix X̂, which includes the data pre-judgment symbol at each RE position. The data pre-judgment symbol at the kth subcarrier on the mth OFDM symbol, that is, at the RE position corresponding to the kth subcarrier on the mth OFDM symbol, can be expressed as X̂_{m,k}.
For the specific implementation of S51 and S52, reference may be made to the relevant descriptions in steps S21 and S22 in the signal estimation algorithm shown in fig. 2A, and the description is omitted here.
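As a hedged illustration of S51 and S52, the sketch below implements LS estimation at the pilot REs of one OFDM symbol, linear interpolation to the remaining subcarriers, and a one-tap zero-forcing equalization followed by a QPSK hard decision to form the data pre-judgment symbols. The pilot spacing, QPSK mapping, and function names are illustrative assumptions rather than the exact procedure of fig. 2A.

```python
import numpy as np

def first_channel_estimate(y, pilot_idx, pilot_sym):
    """LS estimation at the pilot REs of one OFDM symbol (H = Y/X), then
    linear interpolation over all subcarriers (step S51)."""
    n = len(y)
    h_pilot = y[pilot_idx] / pilot_sym
    k = np.arange(n)
    # interpolate real and imaginary parts separately over the subcarrier index
    return (np.interp(k, pilot_idx, h_pilot.real)
            + 1j * np.interp(k, pilot_idx, h_pilot.imag))

def qpsk_predecision(y, h):
    """One-tap zero-forcing equalization followed by a QPSK hard decision,
    producing the data pre-judgment symbols (step S52)."""
    eq = y / h
    return (np.sign(eq.real) + 1j * np.sign(eq.imag)) / np.sqrt(2)

# usage: noiseless flat channel, pilots every 4 subcarriers
n = 16
pilot_idx = np.arange(0, n, 4)
x = (np.sign(np.random.default_rng(1).normal(size=n))
     + 1j * np.sign(np.random.default_rng(2).normal(size=n))) / np.sqrt(2)
x[pilot_idx] = (1 + 1j) / np.sqrt(2)        # known local pilot symbol
h_true = np.ones(n, dtype=complex)
y = h_true * x                              # received frequency-domain signal
h_hat = first_channel_estimate(y, pilot_idx, (1 + 1j) / np.sqrt(2))
x_hat = qpsk_predecision(y, h_hat)
```

In this noiseless example the first channel estimate recovers the flat channel exactly and the pre-judgment symbols match the transmitted QPSK symbols.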
S53: Input the channel information at each of the M×N RE positions to the first neural network to obtain the second channel estimation value at each RE position.
The structure of the first neural network may be as shown in fig. 3B, and may be obtained by the neural network training method shown in fig. 3A. The training method can be referred to the embodiment shown in fig. 3A, and will not be described herein.
In the following, the method for determining the second channel estimation value at each RE position is described by taking the RE corresponding to the kth subcarrier on the mth OFDM symbol as an example; the same procedure applies to the other REs and is not repeated.
The channel information at the RE position corresponding to the kth subcarrier on the mth OFDM symbol includes the received signal Y_{m,k} at that RE position, the data pre-judgment symbol X̂_{m,k}, and the first channel estimation value H^(1)_{m,k}.
Optionally, the channel information at the RE position corresponding to the kth subcarrier on the mth OFDM symbol further includes the received signal and/or the data pre-judgment symbol at each of a plurality of REs adjacent to that RE. The REs adjacent to the kth subcarrier on the mth OFDM symbol are the REs corresponding, within the same OFDM symbol, to the subcarriers closest in frequency to the kth subcarrier, for example the q subcarriers on each side of the kth subcarrier in frequency order, that is, the (k−q)th, (k−q+1)th, …, (k−1)th, (k+1)th, (k+2)th, …, (k+q)th subcarriers on the mth OFDM symbol. The embodiment of the present application is described by taking q=2 as an example; that is, the channel information of the kth subcarrier on the mth OFDM symbol further includes {Y_{m,k−2}, Y_{m,k−1}, Y_{m,k+1}, Y_{m,k+2}}, and the data pre-judgment symbols at the adjacent positions can be expressed as {X̂_{m,k−2}, X̂_{m,k−1}, X̂_{m,k+1}, X̂_{m,k+2}}.
Each item in the channel information at the RE position corresponding to the kth subcarrier on the mth OFDM symbol is divided into its real part and imaginary part, and the parts are combined into a one-dimensional vector, which can be expressed as:

[Re{Y_{m,k}}, Im{Y_{m,k}}, Re{X̂_{m,k}}, Im{X̂_{m,k}}, Re{H^(1)_{m,k}}, Im{H^(1)_{m,k}}, …]

where Re{·} represents the real part and Im{·} represents the imaginary part.
The channel information at the RE position corresponding to the kth subcarrier on the mth OFDM symbol is input to the first neural network, and after processing the channel information, the first neural network outputs the second channel estimation value at that RE position.
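The per-RE processing just described can be sketched as below; the hidden-layer width (32) and the random weights are illustrative assumptions — in practice the weights come from the training of fig. 3A.

```python
import numpy as np

def build_input_vector(y_mk, x_hat_mk, h1_mk, y_neighbors):
    """Split each item of the channel information at one RE into real and
    imaginary parts and concatenate them into a one-dimensional vector."""
    items = [y_mk, x_hat_mk, h1_mk] + list(y_neighbors)
    return np.array([f(v) for v in items for f in (np.real, np.imag)])

def prednn_forward(vec, w1, b1, w2, b2):
    """Fully connected network with a single hidden layer; the two outputs
    are read as the real and imaginary parts of the second channel
    estimation value at the RE."""
    hidden = np.maximum(0.0, w1 @ vec + b1)   # linear rectification (ReLU)
    out = w2 @ hidden + b2
    return complex(out[0], out[1])

# usage with q=2 (four neighboring received signals) and random weights
rng = np.random.default_rng(0)
vec = build_input_vector(0.3 + 0.1j, (1 + 1j) / np.sqrt(2), 0.9 - 0.2j,
                         [0.2 + 0.1j, 0.1 - 0.3j, 0.4 + 0.2j, -0.1 + 0.5j])
w1, b1 = rng.normal(size=(32, vec.size)), np.zeros(32)
w2, b2 = rng.normal(size=(2, 32)), np.zeros(2)
h2_mk = prednn_forward(vec, w1, b1, w2, b2)
```

With q=2 the input vector has (3 + 4) × 2 = 14 entries, matching the real/imaginary splitting described above.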
Further, the second channel estimation values at the M×N RE positions may be estimated in the same way. It should be understood that the same first neural network is used when calculating the second channel estimation values for different REs. In a specific implementation, the first neural network may be time-division multiplexed, or M×N copies of the first neural network may be operated simultaneously to calculate the second channel estimation values at all RE positions at the same time, thereby improving the calculation efficiency.
The second channel estimation values at the M×N RE positions form a second channel estimation matrix H^(2). The elements of the second channel estimation matrix H^(2) are in one-to-one correspondence with the elements of the first channel estimation matrix H^(1); the element in the ith row and jth column of H^(2) is the second channel estimation value at the RE corresponding to the jth subcarrier on the ith OFDM symbol, where i is a positive integer not greater than M and j is a positive integer not greater than N.
In some embodiments, the second channel estimation matrix H^(2) may be the final channel estimation matrix used for recovering the data in the received signal.
In some embodiments, referring to the channel estimation methods shown in fig. 5C and fig. 5D, the second channel estimation matrix H^(2) is not the final channel estimation matrix; it can be input to the second neural network for further improvement and optimization to obtain a third channel estimation matrix H^(3). In this case, the third channel estimation matrix H^(3) serves as the final channel estimation matrix for recovering the data in the received signal. As shown in fig. 5B, the method includes step S54 in addition to the above-mentioned S51-S53, as follows:
S54: matrix a second channel estimate Inputting to a second neural network to obtain a third channel estimation matrixThe elements in the third channel estimation matrix are in one-to-one correspondence with the elements in the second channel estimation matrix, and the elements in the ith row and the jth column in the third channel estimation matrix represent third channel estimation values on REs corresponding to the jth subcarrier on the ith OFDM symbol, which are used for restoring data carried on the jth subcarrier on the ith OFDM symbol in the received signal, where i is a positive integer not greater than M, and j is a positive integer not greater than N.
The structure of the second neural network may be as shown in fig. 4B, and may be obtained by the neural network training method shown in fig. 4A. For the training method, reference may be made to the embodiment shown in fig. 4A, which is not described herein again. The second neural network may also be an existing channel estimation network such as ChannelNet or ReEsNet, or another channel estimation network, which is not limited.
With respect to the networks ChannelNet and ReEsNet, reference may be made to the following description of the advantages of the channel estimation method, which is not repeated here.
According to the channel estimation method, the first neural network (PreDNN) is used to perform data-aided preprocessing. On one hand, the channel estimation accuracy can be effectively improved for the same number of pilots; on the other hand, for the same channel estimation accuracy, fewer pilots are required.
When the first neural network is adopted, especially a first neural network including only one hidden layer, the training complexity is low and the calculation speed in the application stage is high; the small network scale, simple structure, and small number of parameters to be estimated also speed up the restoration of the data in the received signal. In addition, the first neural network is compatible with other existing channel estimation networks: cascaded with such networks, it can effectively improve the channel estimation performance with little impact on the overall complexity.
It should be appreciated that the channel varies faster in a high-speed mobile scenario, and the channel impulse response changes within one OFDM symbol duration; mapped to the frequency domain, this change appears as inter-carrier interference (ICI). When the channel information at an RE position input to the first neural network includes the first channel estimation values and/or the data pre-judgment symbols at a plurality of RE positions adjacent to that RE, the method fully considers the influence of ICI, can be applied to channel estimation in high-speed mobile scenarios, and can further improve the channel estimation accuracy and reduce the number of required pilots.
When the second neural network (CasResNet) is adopted to further improve the channel estimation value, especially a second neural network with a residual neural network structure, the problems of overfitting and gradient explosion can be alleviated, the training difficulty of the second neural network is reduced, and its channel estimation accuracy is improved.
In addition, the channel estimation method does not require channel statistics, such as the power delay profile or prior knowledge of the noise variance, and can quickly compute the final channel estimation value so as to quickly restore the data in the received signal.
The following describes advantages of the channel estimation method according to the embodiment of the present application with reference to fig. 6A to 6C.
The embodiment of the present application performs simulations in an LTE scenario, considering an uncoded OFDM system with a sufficient cyclic prefix (CP); the main parameters are shown in Table 1. The pilot insertion follows the pattern specified by the LTE standard: as shown in fig. 6D, pilots are placed on the 1st and 5th OFDM symbols of every 7 OFDM symbols; within a pilot-bearing OFDM symbol, the pilots are spaced 6 REs apart in frequency, and the pilot REs of adjacent pilot-bearing OFDM symbols are staggered with respect to each other.
Table 1 List of simulation parameters

Carrier frequency: 2 GHz                          | Number of subcarriers: 128
Subcarrier spacing: 15 kHz                        | Number of OFDM symbols: 14
Channel power delay profile: 4 equal-power paths  | Channel Doppler spectrum: Jakes model
Vehicle speed: 600 km/h                           | Maximum Doppler frequency offset: 1111 Hz
Pilot insertion mode: lattice pilots (21×4)       | Modulation scheme: QPSK
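The maximum Doppler frequency offset in Table 1 follows directly from the carrier frequency and the vehicle speed via f_d = v·f_c/c; as a quick check:

```python
v = 600 / 3.6     # vehicle speed: 600 km/h converted to m/s
f_c = 2e9         # carrier frequency: 2 GHz
c = 3e8           # speed of light in m/s
f_d = v * f_c / c # maximum Doppler frequency offset in Hz
```

The result is approximately 1111 Hz, matching the value in the simulation parameter list.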
The simulation results are shown in fig. 6A to 6C, which compare the simulation results of the following 8 channel estimation schemes:
(1) Pilot LS + linear interpolation: the channel frequency domain responses at the pilot positions are first estimated by the LS method, and all channel frequency domain responses are then obtained by linear interpolation.
(2) Pilot LMMSE + linear interpolation: the channel frequency domain responses at the pilot positions are first estimated by the LMMSE method, and all channel frequency domain responses are then obtained by linear interpolation.
(3) ChannelNet: the channel frequency domain response on the whole two-dimensional time-frequency grid is obtained through LS estimation and interpolation, regarded as a low-resolution noisy image, and then further improved by the ChannelNet network, which is a cascade of a super-resolution convolutional network (SRCNN) with three convolutional layers and a residual-learning feed-forward denoising convolutional network (DnCNN) with twenty convolutional layers. SRCNN improves the initial channel estimation value (i.e., the channel frequency domain response on the two-dimensional time-frequency grid), DnCNN removes the noise influence, and the DnCNN output is the final channel estimation value.
(4) ReEsNet: the channel frequency domain responses at the pilot positions are first obtained by the LS method and assembled into a two-dimensional matrix used as the input of the ReEsNet network. The ReEsNet structure includes three convolutional layers, four residual blocks, and an up-sampling layer that expands the dimension to the whole channel time-frequency matrix, thereby eliminating the influence of different interpolation algorithms on performance, reducing the network scale, and improving the channel estimation performance. The output of the ReEsNet network is the final channel estimation value.
(5) PreDNN + CasResNet: a cascade of the first neural network (PreDNN) shown in fig. 3B and the second neural network (CasResNet) shown in fig. 4B, provided by an embodiment of the present application, with q=2; only the influence of the 4 adjacent subcarriers on the channel estimation value of the current subcarrier is considered.
(6) PreDNN + CasResNet (16×4 pilots): the same cascade of the first neural network (PreDNN) shown in fig. 3B and the second neural network (CasResNet) shown in fig. 4B, with q=2, considering only the influence of the 4 adjacent subcarriers on the channel estimation value at the RE position corresponding to the current subcarrier. Unlike (5), the number of pilots in the simulation parameter list is reduced to 16×4.
(7) PreDNN + ChannelNet: a cascade of the first neural network (PreDNN) shown in fig. 3B, provided by an embodiment of the present application, and ChannelNet. The second channel estimation matrix formed by the second channel estimation values output by PreDNN is input to ChannelNet for further optimization; for ChannelNet, reference may be made to scheme (3) above, which is not repeated herein.
(8) PreDNN + ReEsNet: a cascade of the first neural network (PreDNN) shown in fig. 3B and ReEsNet. It should be noted that, since the input of ReEsNet includes only the channel estimation values at the pilot positions, PreDNN is applied only to the subcarriers at the pilot positions and improves only their channel estimation values. The channel estimation matrix composed of the improved second channel estimation values at the pilot-position subcarriers output by PreDNN is input to ReEsNet for further optimization; for ReEsNet, reference may be made to scheme (4) above, which is not repeated herein.
Fig. 6A compares the channel estimation mean square error at different signal-to-noise ratios (SNRs) for schemes (1) through (6). As can be seen from fig. 6A, under the same signal-to-noise ratio, the mean square error of schemes (5) and (6) of the present application is significantly lower than that of schemes (1), (2), (3) and (4); that is, the accuracy of schemes (5) and (6) is significantly better than that of the existing schemes, and this advantage is maintained even when the number of pilots is reduced to 16×4 in scheme (6) (a pilot interval of 8 subcarriers).
Fig. 6B compares the channel estimation mean square error at different signal-to-noise ratios (SNRs) for scheme (3) and scheme (7). As can be seen from fig. 6B, when PreDNN is cascaded before ChannelNet as a preprocessing stage, the channel estimation accuracy is significantly improved.
Fig. 6C compares the channel estimation mean square error at different signal-to-noise ratios (SNRs) for scheme (4) and scheme (8). As can be seen from fig. 6C, when PreDNN is cascaded before ReEsNet as a preprocessing stage, the channel estimation accuracy is significantly improved.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a channel estimation device according to an embodiment of the present application, where the device 700 may be a receiving end in the system shown in fig. 1, and may implement the channel estimation method shown in fig. 5A or fig. 5B, and the device 700 may include, but is not limited to, some or all of the following units:
A first channel estimation unit 701, configured to perform channel estimation and equalization according to a received signal and a local pilot signal, to obtain first channel estimation values and data pre-judgment symbols at positions of a plurality of resource elements REs, where the received signal includes received signals at respective positions of the plurality of REs;
A second channel estimation unit 702, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a second channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol, and the first neural network is configured to predict the channel estimation value at the RE location according to the input channel information at the RE location.
In a possible implementation, the channel information at each RE position further includes received signals at respective positions of w REs adjacent to the each RE and/or data pre-judgment symbols at respective positions of the w REs, where each RE and the w REs correspond to the same orthogonal frequency division multiplexing OFDM symbol, and w is a positive integer.
In one possible implementation, the first neural network is trained by a plurality of first samples, the first samples including channel information at one RE location estimated based on a sample received signal and a true channel value at the one RE location.
In one possible implementation, the first neural network is a fully connected neural network comprising a hidden layer.
Optionally, the hidden layer adopts a linear rectification function as the activation function.
In one possible implementation, the apparatus 700 may further include:
A third channel estimation unit 703, configured to input a channel estimation matrix formed by the second channel estimation values at the respective positions of the plurality of REs to the second neural network, so as to obtain third channel estimation values at the respective positions of the plurality of REs; the second neural network is trained through a plurality of second samples, and the second samples comprise a channel estimation matrix formed by channel estimation values at respective positions of all REs obtained by estimating sample received signals and a real channel matrix formed by real channel values at respective positions of all REs.
In one possible implementation, the second neural network is a depth residual neural network, the second neural network including at least two convolutional layers, at least one activation layer, and at least one adder, the activation layer being located between two adjacent ones of the at least two convolutional layers.
Optionally, the at least two convolution layers include a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a fifth convolution layer that are sequentially arranged, the at least one active layer includes a first active layer and a second active layer, and the at least one adder includes a first adder and a second adder, where:
The first active layer is located between the second convolution layer and the third convolution layer;
The second active layer is located between the third convolution layer and the fourth convolution layer;
The first adder is positioned between the fourth convolution layer and the fifth convolution layer, and the input of the first adder is the input of the first convolution layer and the output of the fourth convolution layer;
The inputs of the second adder are the input of the second neural network and the output of the fifth convolution layer.
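Under the layer ordering just listed, the data flow of the second neural network can be sketched as below. The single-channel 3×3 "same"-padded convolution and the illustrative kernels are simplifying assumptions (real convolutional layers have multiple channels and trained kernels); only the wiring of the two adders follows the description above.

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel 2D convolution with 'same' zero padding (a stand-in
    for a convolutional layer; channel counts are omitted for clarity)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def second_network(h2, k1, k2, k3, k4, k5):
    """Five convolution layers, two activation layers, and two adders:
    adder 1 sums the input of the first convolution layer with the output
    of the fourth; adder 2 sums the network input with the output of the
    fifth convolution layer."""
    t = conv2d_same(h2, k1)            # first convolution layer
    t = conv2d_same(t, k2)             # second convolution layer
    t = np.maximum(t, 0.0)             # first activation layer
    t = conv2d_same(t, k3)             # third convolution layer
    t = np.maximum(t, 0.0)             # second activation layer
    t = conv2d_same(t, k4)             # fourth convolution layer
    t = t + h2                         # first adder
    t = conv2d_same(t, k5)             # fifth convolution layer
    return t + h2                      # second adder

# sanity check with identity kernels and a non-negative input matrix
ident = np.zeros((3, 3)); ident[1, 1] = 1.0
h2 = np.abs(np.random.default_rng(0).normal(size=(4, 6)))
out = second_network(h2, ident, ident, ident, ident, ident)
```

With identity kernels each convolution is a pass-through, so the two skip connections contribute the input twice and the output equals three times the input, which makes the adder wiring easy to verify.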
In one possible implementation, the first channel estimation unit is specifically configured to:
Performing channel estimation according to a received signal and a local pilot signal at a pilot position to obtain a first channel estimation value at the pilot position, wherein the pilot position occupies at least two RE positions;
Performing interpolation processing on the first channel estimation value at the pilot frequency position to obtain a first channel estimation value at a data position, wherein the data position occupies a position except the pilot frequency position in the plurality of RE positions;
and obtaining the data pre-judging symbol in the data position according to the received signal in the data position and the first channel estimation value in the data position.
It should be understood that the specific implementation and the obtained beneficial effects of each unit in the above-mentioned apparatus 700 may be referred to in the embodiments shown in fig. 5A, 5B, and 6A-6C, which are not described herein.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a neural network training apparatus according to an embodiment of the present application, where the apparatus 800 may be the training device in the system shown in fig. 1, and may implement the training method of the neural network shown in fig. 3A, and the apparatus 800 may include, but is not limited to, some or all of the following units:
A first channel estimation unit 801, configured to perform channel estimation and equalization according to a sample received signal and a local pilot signal, to obtain a first channel estimation value at each position of a plurality of resource elements REs and a data pre-judgment symbol at each position of the plurality of REs, where the sample received signal includes received signals at each position of the plurality of REs;
a second channel estimation unit 802, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a predicted channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, the first channel estimation value, and a data pre-judgment symbol;
An updating unit 803, configured to update the parameters of the first neural network according to the loss between the predicted channel estimation value and the real channel value at each RE position.
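A minimal sketch of the update performed by the updating unit: compute the squared-error loss between the predicted and real channel values (with real and imaginary parts treated separately) and take one gradient step. The one-parameter linear "network" and the step size 0.1 are purely illustrative assumptions.

```python
import numpy as np

def channel_loss(pred, true):
    """Mean squared error between predicted and real channel values,
    with complex values split into real and imaginary parts."""
    d = pred - true
    return float(np.mean(d.real ** 2 + d.imag ** 2))

# Toy update: the "network" is pred = w * feat with a single real parameter w.
rng = np.random.default_rng(0)
feat = rng.normal(size=50) + 1j * rng.normal(size=50)
true = 1.5 * feat                        # stand-in for the real channel values
w = 0.0
loss_before = channel_loss(w * feat, true)
grad = np.mean(2.0 * ((w * feat - true) * np.conj(feat)).real)  # dLoss/dw
w = w - 0.1 * grad                       # one gradient-descent parameter update
loss_after = channel_loss(w * feat, true)
```

One such step reduces the loss, which is the direction-of-update property the back propagation training relies on.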
In a possible implementation, the channel information at each RE position further includes received signals and/or data pre-judgment symbols at w RE positions adjacent to the each RE position, where each RE position and the w RE positions correspond to the same OFDM symbol.
In one possible implementation, the first neural network is a fully connected neural network comprising a hidden layer.
Optionally, the hidden layer adopts a linear rectification function as the activation function.
In one possible implementation, the first channel estimation unit is specifically configured to:
Performing channel estimation according to a received signal and a local pilot signal at a pilot position to obtain a first channel estimation value at the pilot position, wherein the pilot position occupies at least two RE positions;
Performing interpolation processing on the first channel estimation value at the pilot frequency position to obtain a first channel estimation value at a data position, wherein the data position occupies a position except the pilot frequency position in the plurality of RE positions;
and obtaining the data pre-judging symbol in the data position according to the received signal in the data position and the first channel estimation value in the data position.
It should be understood that the specific implementation and the obtained beneficial effects of each unit in the above-mentioned apparatus 800 may be referred to in the embodiments shown in fig. 3A, 5A or 5B and fig. 6A-6C, which are not described herein.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a neural network training apparatus according to an embodiment of the present application, where the apparatus 900 may be the training device in the system shown in fig. 1, and may implement the training method of the neural network shown in fig. 4A, and the apparatus 900 may include, but is not limited to, some or all of the following units:
a first channel estimation unit 901, configured to perform channel estimation and equalization according to a sample received signal and a local pilot signal, to obtain a first channel estimation value at each position of a plurality of resource elements REs and a data pre-judgment symbol at each position of the plurality of REs, where the sample received signal includes received signals at each position of the plurality of REs;
A second channel estimation unit 902, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a second channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol, and the first neural network is configured to predict the channel estimation value at the RE location according to the input channel information at the RE location;
A third channel estimation unit 903, configured to input a channel estimation matrix formed by the second channel estimation values at the respective positions of the plurality of REs to a second neural network to obtain a predicted channel estimation matrix;
an updating unit 904, configured to update parameters of the second neural network according to a loss between the predicted channel estimation matrix and the real channel matrix.
In one possible implementation, the second neural network is a depth residual neural network, the second neural network including at least two convolutional layers, at least one activation layer, and at least one adder, the activation layer being located between two adjacent ones of the at least two convolutional layers.
Optionally, the at least two convolution layers include a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a fifth convolution layer that are sequentially arranged, the at least one active layer includes a first active layer and a second active layer, and the at least one adder includes a first adder and a second adder, where:
The first active layer is located between the second convolution layer and the third convolution layer;
The second active layer is located between the third convolution layer and the fourth convolution layer;
The first adder is positioned between the fourth convolution layer and the fifth convolution layer, and the input of the first adder is the input of the first convolution layer and the output of the fourth convolution layer;
The inputs of the second adder are the input of the second neural network and the output of the fifth convolution layer.
In one possible implementation, the first channel estimation unit is specifically configured to:
Performing channel estimation according to a received signal and a local pilot signal at a pilot position to obtain a first channel estimation value at the pilot position, wherein the pilot position occupies at least two RE positions;
Performing interpolation processing on the first channel estimation value at the pilot frequency position to obtain a first channel estimation value at a data position, wherein the data position occupies a position except the pilot frequency position in the plurality of RE positions;
and obtaining the data pre-judging symbol in the data position according to the received signal in the data position and the first channel estimation value in the data position.
It should be understood that the specific implementation and the obtained beneficial effects of each unit in the above-mentioned apparatus 900 may be referred to in the embodiments shown in fig. 4A, 5A or 5B and fig. 6A-6C, which are not described herein.
An exemplary electronic device 1000 provided by an embodiment of the present application is described below, where the electronic device 1000 may be implemented as a receiving end mentioned in the foregoing embodiments, or as an apparatus 700, such as the electronic device 1000 shown in fig. 10, including a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004. The memory 1001, the processor 1002, and the communication interface 1003 are connected to each other by a bus 1004.
The Memory 1001 may be a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a random access Memory (Random Access Memory, RAM). The memory 1001 may store a program, and when the program stored in the memory 1001 is executed by the processor 1002, the processor 1002 and the communication interface 1003 are used to perform part or all of the steps in the channel estimation method shown in fig. 5A or 5B of the present application.
The processor 1002 may employ a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits for executing associated programs to perform the functions required by the units in the apparatus 700 of the present application, or to perform some or all of the steps of the channel estimation methods of fig. 5A or 5B of the present application.
The processor 1002 may also be an integrated circuit chip with signal processing capabilities. In implementation, the various steps of the channel estimation method of the present application may be performed by integrated logic circuits of hardware or instructions in software form in the processor 1002. The processor 1002 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The storage medium is located in the memory 1001; the processor 1002 reads the information in the memory 1001 and, in combination with its hardware, performs the functions required by the units in the apparatus 700 according to the embodiment of the present application.
Communication interface 1003 enables communication between electronic device 1000 and other devices or communication networks using transceiving means such as, but not limited to, a transceiver.
Bus 1004 may include a path to transfer information between various components of electronic device 1000 (e.g., memory 1001, processor 1002, communication interface 1003).
It should be noted that while the electronic device 1000 shown in fig. 10 shows only a memory, a processor, and a communication interface, those skilled in the art will appreciate that in a particular implementation, the electronic device 1000 also includes other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the electronic device 1000 may also include hardware devices that implement other additional functions, as desired. Furthermore, it will be appreciated by those skilled in the art that the electronic device 1000 may also include only the components necessary to implement embodiments of the present application, and not necessarily all of the components shown in FIG. 10.
An exemplary electronic device 1100 provided by an embodiment of the present application is described below. The electronic device 1100 may be implemented as the training device mentioned in the foregoing embodiments, or as the apparatus 800 or 900. As shown in fig. 11, the electronic device 1100 includes a memory 1101, a processor 1102, a communication interface 1103, and a bus 1104. The memory 1101, the processor 1102, and the communication interface 1103 are communicatively connected to each other through the bus 1104.
The memory 1101 may be a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 1101 may store a program; when the program stored in the memory 1101 is executed by the processor 1102, the processor 1102 and the communication interface 1103 are configured to perform part or all of the steps of the neural network training method shown in fig. 3A or fig. 4A of the present application.
The processor 1102 may employ a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a graphics processing unit (GPU), or one or more integrated circuits for executing associated programs, so as to perform the functions required to be performed by the units of the apparatus 800 or 900 of the embodiments of the present application, or to perform some or all of the steps of the neural network training methods illustrated in fig. 3A or 4A of the present application.
The processor 1102 may also be an integrated circuit chip with signal processing capabilities. In implementation, the various steps of the neural network training method of the present application may be performed by integrated logic circuitry in hardware or by instructions in software form in the processor 1102. The processor 1102 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 1101; the processor 1102 reads the information in the memory 1101 and, in combination with its hardware, performs the functions required to be performed by the units in the apparatus 800 or 900 of the embodiment of the present application.
The communication interface 1103 enables communication between the electronic device 1100 and other devices or communication networks using a transceiver means such as, but not limited to, a transceiver.
A bus 1104 may include a path to transfer information between components of the electronic device 1100 (e.g., the memory 1101, the processor 1102, the communication interface 1103).
It should be noted that while the electronic device 1100 shown in fig. 11 shows only a memory, a processor, and a communication interface, those skilled in the art will appreciate that in a particular implementation, the electronic device 1100 also includes other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the electronic device 1100 may also include hardware devices that implement other additional functions, as desired. Furthermore, it will be appreciated by those skilled in the art that the electronic device 1100 may also include only the components necessary to implement embodiments of the present application, and not necessarily all of the components shown in FIG. 11.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Those of skill in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps described in connection with the disclosure herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described by the various illustrative logical blocks, modules, and steps may be stored on a computer readable medium or transmitted as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media corresponding to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described by the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Moreover, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). The various components, modules, or units are described in this disclosure in order to emphasize functional aspects of the devices for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit in combination with suitable software and/or firmware, or provided by an interoperable hardware unit (including one or more processors as described above).
The terminology used in the above embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates to the contrary. It should also be understood that in the embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The foregoing is merely illustrative of the embodiments of the present application, and the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (19)

1. A channel estimation method, applied to a receiving end, the method comprising:
Performing channel estimation and equalization according to a received signal and a local pilot signal to obtain a first channel estimation value at each position of a plurality of Resource Elements (REs) and a data pre-judgment symbol at each position of the plurality of REs, wherein the received signal comprises the received signal at each position of the plurality of REs;
Inputting channel information at each of the plurality of RE positions into a first neural network to obtain a second channel estimation value at each RE position, wherein the channel information at each RE position comprises the received signal, the first channel estimation value, and the data pre-judgment symbol at that RE position, and the first neural network is used for predicting the channel estimation value at an RE position according to the input channel information at that RE position;
Inputting a channel estimation matrix formed by second channel estimation values at the respective positions of the plurality of REs into a second neural network to obtain a third channel estimation value at the respective position of the plurality of REs; the second neural network is trained through a plurality of second samples, and the second samples comprise a channel estimation matrix formed by channel estimation values at respective positions of all REs obtained by estimating sample received signals and a real channel matrix formed by real channel values at respective positions of all REs.
2. The method of claim 1, wherein the channel information at each RE position further includes received signals at respective positions of w REs adjacent to the each RE and/or data pre-determined symbols at respective positions of the w REs, each RE and the w REs corresponding to a same orthogonal frequency division multiplexing OFDM symbol, and w is a positive integer.
3. The method of claim 2 wherein the first neural network is trained by a plurality of first samples, the first samples comprising channel information at one RE location and real channel values at the one RE location estimated based on sample received signals.
4. A method according to any one of claims 1-3, wherein the first neural network is a fully connected neural network comprising a hidden layer.
5. The method of claim 4, wherein the hidden layer uses a linear rectification function as the activation function.
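As an illustrative sketch (not the claimed implementation), the fully connected estimator of claims 4-5, one hidden layer with a linear rectification (ReLU) activation, can be written as a plain forward pass. The six-element feature vector (real/imaginary parts of the received signal, the first channel estimate, and the data pre-judgment symbol at one RE position), the hidden width of 16, and the random weights are all assumptions of this example.

```python
import numpy as np

def relu(x):
    # Linear rectification function used as the hidden-layer activation (claim 5)
    return np.maximum(x, 0.0)

def fc_channel_estimator(features, w1, b1, w2, b2):
    """One-hidden-layer fully connected network in the shape of claim 4.

    `features` stacks the real and imaginary parts of the received signal,
    the first channel estimation value, and the data pre-judgment symbol
    at a single RE position; all sizes here are illustrative.
    """
    hidden = relu(features @ w1 + b1)  # hidden layer with ReLU activation
    return hidden @ w2 + b2            # real and imaginary parts of the estimate

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(6, 16)), np.zeros(16)   # 6 inputs -> 16 hidden units
w2, b2 = rng.normal(size=(16, 2)), np.zeros(2)    # 16 hidden -> Re/Im output
second_estimate = fc_channel_estimator(rng.normal(size=6), w1, b1, w2, b2)
print(second_estimate.shape)
```

The network maps per-RE channel information to a refined per-RE estimate, which is why the same small network can be applied independently at every RE position.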
6. The method of claim 1, wherein the second neural network is a deep residual neural network, the second neural network comprising at least two convolutional layers, at least one active layer, and at least one adder, the active layer being located between two adjacent ones of the at least two convolutional layers.
7. The method of claim 6, wherein the at least two convolutional layers comprise a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, and a fifth convolutional layer that are arranged in sequence, the at least one active layer comprises a first active layer and a second active layer, and the at least one adder comprises a first adder and a second adder, wherein:
The first active layer is located between the second convolution layer and the third convolution layer;
The second active layer is located between the third convolution layer and the fourth convolution layer;
The first adder is positioned between the fourth convolution layer and the fifth convolution layer, and the inputs of the first adder are the input of the first convolution layer and the output of the fourth convolution layer;
The inputs of the second adder are the input of the second neural network and the output of the fifth convolution layer.
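The layer wiring of claim 7 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: each convolution layer is collapsed to a single scalar weight (a 1x1 convolution on one real-valued channel) purely to make the two skip connections explicit; a real implementation would use 2D convolutions over the RE grid, with real and imaginary parts as separate channels.

```python
import numpy as np

def relu(x):
    # Linear rectification activation used by the active layers
    return np.maximum(x, 0.0)

def residual_denoiser(h, w):
    """Data flow of the five-conv-layer residual network in claim 7.

    h : channel estimation matrix over the RE grid (real-valued here)
    w : five scalar weights standing in for the five convolution layers
    """
    c1 = w[0] * h              # first convolution layer
    c2 = w[1] * c1             # second convolution layer
    c3 = w[2] * relu(c2)       # first active layer, then third conv layer
    c4 = w[3] * relu(c3)       # second active layer, then fourth conv layer
    s1 = h + c4                # first adder: conv-1 input plus conv-4 output
    c5 = w[4] * s1             # fifth convolution layer
    return h + c5              # second adder: network input plus conv-5 output

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 6))                    # 4 OFDM symbols x 6 subcarriers
out = residual_denoiser(h, rng.normal(size=5))
print(out.shape)  # same shape as the input channel estimation matrix
```

With all weights set to zero, the two skip connections pass the input through unchanged, which is the usual motivation for a residual structure: the convolution branches only need to learn a correction to the input estimate.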
8. The method according to any one of claims 1-3 and 5-6, wherein said performing channel estimation and equalization according to the received signal and the local pilot signal to obtain a first channel estimation value at each of the plurality of resource elements (REs) and a data pre-judgment symbol at each of the plurality of REs comprises:
Performing channel estimation according to a received signal and a local pilot signal at a pilot position to obtain a first channel estimation value at the pilot position, wherein the pilot position occupies at least two RE positions;
Performing interpolation processing on the first channel estimation value at the pilot frequency position to obtain a first channel estimation value at a data position, wherein the data position occupies a position except the pilot frequency position in the plurality of RE positions;
And obtaining the data pre-judgment symbol at the data position according to the received signal at the data position and the first channel estimation value at the data position.
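A minimal numerical sketch of the three steps of claim 8 is shown below: least-squares estimation at the pilot positions, interpolation onto the data positions, and hard-decision slicing to obtain the data pre-judgment symbols. The 8-RE grid, the QPSK constellation, the pilot placement, and the noiseless channel are all assumptions of this illustration, not taken from the patent.

```python
import numpy as np

# One OFDM symbol with 8 REs; pilots at RE 0 and RE 7 (claim 8 requires the
# pilot position to occupy at least two RE positions). Values are illustrative.
n_re = 8
pilot_idx = np.array([0, 7])
data_idx = np.setdiff1d(np.arange(n_re), pilot_idx)

true_h = np.linspace(1.0, 0.6, n_re) * np.exp(1j * np.linspace(0.0, 0.5, n_re))
pilots = np.array([1 + 0j, 1 + 0j])              # local pilot signal
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = qpsk[np.array([0, 3, 1, 2, 0, 1])]          # transmitted data symbols
rx = np.empty(n_re, dtype=complex)
rx[pilot_idx] = true_h[pilot_idx] * pilots
rx[data_idx] = true_h[data_idx] * tx

# Step 1: least-squares channel estimation at the pilot positions
h_pilot = rx[pilot_idx] / pilots

# Step 2: interpolate the pilot estimates onto the data positions
h_data = np.interp(data_idx, pilot_idx, h_pilot.real) \
       + 1j * np.interp(data_idx, pilot_idx, h_pilot.imag)

# Step 3: equalize and slice to the nearest constellation point,
# giving the data pre-judgment symbols
eq = rx[data_idx] / h_data
pre_judgment = qpsk[np.argmin(np.abs(eq[:, None] - qpsk[None, :]), axis=1)]
print(np.array_equal(pre_judgment, tx))
```

In this noiseless example the interpolated estimate is close enough to the true channel that all pre-judgment symbols match the transmitted ones; with noise, these first estimates are exactly what the two neural networks of claim 1 are meant to refine.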
9. A method of training a neural network, comprising:
Performing channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each position of a plurality of resource elements (REs) and a data pre-judgment symbol at each position of the plurality of REs, wherein the sample received signal comprises the received signals at the respective positions of the plurality of REs;
Inputting channel information at each of the plurality of RE positions into a first neural network to obtain a second channel estimation value at each RE position, wherein the channel information at each RE position comprises the received signal, the first channel estimation value, and the data pre-judgment symbol at that RE position, and the first neural network is used for predicting the channel estimation value at an RE position according to the input channel information at that RE position;
Inputting a channel estimation matrix formed by second channel estimation values at the respective positions of the plurality of REs into a second neural network to obtain a predicted channel estimation matrix;
And updating parameters of the neural network according to the loss between the predicted channel estimation matrix and the real channel matrix.
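The update step of the training method can be sketched with the second network collapsed to a single scalar parameter, so the mean-squared loss between the predicted channel estimation matrix and the real channel matrix, and its gradient, are explicit. This reduction, and the learning rate and data sizes, are assumptions of the sketch; real training would backpropagate through both neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = rng.normal(size=(4, 8))                 # real channel matrix (label)
h_in = h_true + 0.3 * rng.normal(size=(4, 8))    # noisy channel estimation matrix

a = 0.0        # the network, reduced to one scalar parameter
lr = 0.05      # learning rate (illustrative)
for _ in range(200):
    pred = a * h_in                              # predicted channel estimation matrix
    loss = np.mean((pred - h_true) ** 2)         # loss against the real channel matrix
    grad = 2.0 * np.mean((pred - h_true) * h_in) # gradient of the loss w.r.t. a
    a -= lr * grad                               # parameter update
print(a, loss)
```

Gradient descent drives the parameter toward the least-squares shrinkage factor, and the loss below its starting value at a = 0; the same loop structure applies when the parameters are the weights of the first and second neural networks.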
10. The method of claim 9, wherein the channel information at each RE position further includes received signals at respective positions of w REs adjacent to the each RE and/or data pre-determined symbols at respective positions of the w REs, each RE and the w REs corresponding to a same orthogonal frequency division multiplexing OFDM symbol, w being a positive integer.
11. The method of claim 9 or 10, wherein the second neural network is a deep residual neural network comprising at least two convolutional layers, at least one active layer, and at least one adder, the active layer being located between two adjacent ones of the at least two convolutional layers.
12. A channel estimation apparatus, the apparatus comprising:
The first channel estimation unit is configured to perform channel estimation and equalization according to a received signal and a local pilot signal to obtain a first channel estimation value and a data pre-judgment symbol at each of a plurality of resource element (RE) positions, wherein the received signal comprises the received signals at the respective positions of the plurality of REs;
A second channel estimation unit, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a second channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol, and the first neural network is configured to predict the channel estimation value at the RE location according to the input channel information at the RE location;
A third channel estimation unit, configured to input a channel estimation matrix formed by the second channel estimation values at the respective positions of the plurality of REs into a second neural network, to obtain third channel estimation values at the respective positions of the plurality of REs; the second neural network is trained through a plurality of second samples, and the second samples comprise a channel estimation matrix formed by channel estimation values at respective positions of all REs obtained by estimating sample received signals and a real channel matrix formed by real channel values at respective positions of all REs.
13. The apparatus of claim 12, wherein the channel information in each RE position further comprises received signals in respective positions of w REs adjacent to the each RE and/or data pre-determined symbols in respective positions of the w REs, each RE and the w REs corresponding to a same orthogonal frequency division multiplexing OFDM symbol, w being a positive integer.
14. The apparatus of claim 12 or 13, wherein the first neural network is trained by a plurality of first samples, the first samples comprising channel information at one RE location and a real channel value at the one RE location estimated based on sample received signals.
15. A neural network training device, comprising:
the first channel estimation unit is configured to perform channel estimation and equalization according to a sample received signal and a local pilot signal to obtain a first channel estimation value at each position of a plurality of resource elements (REs) and a data pre-judgment symbol at each position of the plurality of REs, wherein the sample received signal comprises the received signals at the respective positions of the plurality of REs;
A second channel estimation unit, configured to input channel information at each of the plurality of RE locations to a first neural network, to obtain a second channel estimation value at each RE location, where the channel information at each RE location includes a received signal at each RE location, a first channel estimation value, and a data pre-judgment symbol, and the first neural network is configured to predict the channel estimation value at the RE location according to the input channel information at the RE location;
a third channel estimation unit, configured to input a channel estimation matrix formed by the second channel estimation values at the respective positions of the plurality of REs into a second neural network to obtain a predicted channel estimation matrix;
And the updating unit is used for updating parameters of the neural network according to the loss between the predicted channel estimation matrix and the real channel matrix.
16. The apparatus of claim 15, wherein the channel information in each RE position further comprises received signals in respective positions of w REs adjacent to the each RE and/or data pre-determined symbols in respective positions of the w REs, each RE and the w REs corresponding to a same orthogonal frequency division multiplexing OFDM symbol, w being a positive integer.
17. An electronic device, comprising: one or more processors, one or more memories, a communication interface; the one or more memories being coupled to the one or more processors, the one or more memories being for storing computer program code comprising computer instructions which, when executed by the one or more processors, implement the method of any of claims 1-8.
18. An electronic device, comprising: one or more processors and one or more memories; the one or more memories being coupled to the one or more processors, the one or more memories being for storing computer program code comprising computer instructions which, when executed by the one or more processors, implement the method of any of claims 9-11.
19. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
CN202110130782.0A 2021-01-29 2021-01-29 Channel estimation method, neural network training method and device, and equipment Active CN114826832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110130782.0A CN114826832B (en) 2021-01-29 2021-01-29 Channel estimation method, neural network training method and device, and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110130782.0A CN114826832B (en) 2021-01-29 2021-01-29 Channel estimation method, neural network training method and device, and equipment

Publications (2)

Publication Number Publication Date
CN114826832A CN114826832A (en) 2022-07-29
CN114826832B true CN114826832B (en) 2024-05-24

Family

ID=82525566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110130782.0A Active CN114826832B (en) 2021-01-29 2021-01-29 Channel estimation method, neural network training method and device, and equipment

Country Status (1)

Country Link
CN (1) CN114826832B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20260032020A1 (en) * 2024-07-25 2026-01-29 Dell Products L.P. Channel Estimation with Varying Numbers of Transmit Layers

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117879737A (en) * 2022-09-30 2024-04-12 维沃移动通信有限公司 Channel prediction method, device and communication equipment
CN116015560B (en) * 2022-10-19 2025-08-26 南京上铁电子工程有限公司 Channel state information feedback method, device, electronic device and storage medium
CN116846712A (en) * 2023-07-07 2023-10-03 西安邮电大学 Frequency offset estimation method, system, equipment and medium based on clustering neural network
CN116915555B (en) * 2023-08-28 2023-12-29 中国科学院声学研究所 Underwater acoustic channel estimation method and device based on self-supervision learning
CN119814058A (en) * 2023-10-10 2025-04-11 中兴通讯股份有限公司 Signal receiving method, communication node and storage medium
WO2025236127A1 (en) * 2024-05-11 2025-11-20 北京小米移动软件有限公司 Communication method, device and system, and storage medium
CN118282811A (en) * 2024-06-04 2024-07-02 中科南京移动通信与计算创新研究院 MIMO channel estimation method based on AI
CN119544412B (en) * 2024-09-23 2025-10-03 东南大学 A pilot design and channel estimation method and system for OFDM system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104022978A (en) * 2014-06-18 2014-09-03 中国联合网络通信集团有限公司 Half-blindness channel estimating method and system
WO2019138156A1 (en) * 2018-01-12 2019-07-18 Nokia Technologies Oy Profiled channel impulse response for accurate multipath parameter estimation
CN111628946A (en) * 2019-02-28 2020-09-04 华为技术有限公司 Channel estimation method and receiving equipment



Also Published As

Publication number Publication date
CN114826832A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN114826832B (en) Channel estimation method, neural network training method and device, and equipment
Honkala et al. DeepRx: Fully convolutional deep learning receiver
Soltani et al. Deep learning-based channel estimation
CN105814855B (en) Precoding in Super Nyquist Transmission System
CN109314682A (en) Iterative 2D Equalization of Orthogonal Time-Frequency Spatial Modulation Signals
WO2021155744A1 (en) Deep learning-based joint optimization method for wireless communication physical layer receiving and sending end, electronic device, and storage medium
US12531764B2 (en) Radio receiver, transmitter and system for pilotless-OFDM communications
Drakshayini et al. A review of wireless channel estimation techniques: challenges and solutions
Hussein et al. Least Square Estimation‐Based Different Fast Fading Channel Models in MIMO‐OFDM Systems
CN114745233A (en) A Joint Channel Estimation Method Based on Pilot Design
Shankar Bi‐directional LSTM based channel estimation in 5G massive MIMO OFDM systems over TDL‐C model with Rayleigh fading distribution
Zhang et al. Efficient residual shrinkage CNN denoiser design for intelligent signal processing: Modulation recognition, detection, and decoding
CN103856254B (en) A kind of fixed complexity globular decoding detection method of soft output and device
CN116708094B (en) Detection method and device, equipment and medium of multiple input multiple output system
Nguyen et al. Groupwise neighbor examination for tabu search detection in large MIMO systems
CN115514596B (en) OTFS communication receiver signal processing method and device based on convolutional neural network
CN108566227A (en) A kind of multi-user test method
WO2024002455A1 (en) Method, apparatus and computer program for estimating a channel based on basis expansion model expansion coefficients determined by a deep neural network
US20250062936A1 (en) A radio receiver device with a neural network, and related methods and computer programs
CN117397215A (en) Generation and reception of precoded signals based on codebook linearization
Bhatt et al. Analysis of the fifth generation NOMA system using LSTM algorithm
Zheng et al. Deep learning-aided receiver against nonlinear distortion of hpa in ofdm systems
US20250184186A1 (en) Radio receiver with multi-stage equalization
Voggu et al. SIGNETS: Neural Network Architectures for m-QAM Soft Demodulation
CN115510893B (en) Signal detection method, device, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant