CN115801504A - Time domain tap storage method and device, computer equipment and storage medium - Google Patents
Time domain tap storage method and device, computer equipment and storage medium
- Publication number
- CN115801504A (application CN202310056173.4A)
- Authority
- CN
- China
- Prior art keywords
- tap
- time domain
- cluster
- clusters
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Complex Calculations (AREA)
- Filters That Use Time-Delay Elements (AREA)
Abstract
The application relates to a time domain tap storage method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring a frequency domain channel estimation value of a reference signal; determining channel estimation time domain taps based on the frequency domain channel estimation value; performing clustering self-learning processing on the time domain taps to obtain a first result, and determining target taps based on the first result; and storing the target taps and performing path screening on them to obtain processed time domain taps. With this method, the sparse distribution of the time domain taps is exploited: a DBSCAN clustering learning algorithm is run in a periodic learning-and-update mode, and learning and storage are performed only for the effective tap clusters, which significantly reduces storage overhead.
Description
Technical Field
The present application relates to the field of digital signal processing technologies, and in particular, to a time domain tap storage method and apparatus, a computer device, and a storage medium.
Background
A communication network transmits communication signals in the downlink from a fixed transceiver, called a base station, to mobile User Equipments (UEs) within a preset area; in the uplink, a UE transmits signals to one or more base stations in the area. During wireless transmission, the multipath environment of the mobile channel causes multipath fading. Because the different paths have different lengths, signal copies arrive at different times: when the base station sends a pulse signal, the received signal contains not only the pulse itself but also several delayed copies of it. This widening of the received pulse due to multipath effects is called delay spread.
In classical DFT (Discrete Fourier Transform) based filtering or MMSE (Minimum Mean-Square Error) filtering, the subsequent filtering calculation relies on an estimate of the time domain taps. In particular, when the time domain taps are obtained by an N-point IDFT over a large bandwidth, as in NR or future 6G systems, the number of tap points N reaches the level of 2048 points or more; each tap is a complex number with a large bit width, which is a non-trivial storage overhead for both software and hardware.
In wireless scenarios with large delay spread, storage can be saved by keeping only fixed front and rear windows of the time domain taps, based on an assumed signal distribution; however, this technique cannot guarantee that no time domain taps are missed while the storage is being saved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a time domain tap storage method, apparatus, computer device, computer-readable storage medium, and computer program product that avoid missing time domain taps while saving storage under large delay spread.
In a first aspect, a time domain tap storage method is provided. The time domain tap storage method comprises the following steps:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target taps, and performing path screening on them to obtain processed time domain taps.
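The steps above can be sketched end to end in a few lines. This is only an illustrative outline under simplifying assumptions — the clustering self-learning step is reduced to a single power-threshold pass, and the function names (`time_domain_taps`, `select_taps`) are invented for the sketch, not taken from the claimed implementation:

```python
import numpy as np

def time_domain_taps(H_freq):
    """Step 2: IDFT of the unfiltered frequency-domain channel estimate."""
    return np.fft.ifft(H_freq)

def select_taps(h_time, power_threshold):
    """Steps 3-4, heavily simplified: keep the indices whose tap power
    exceeds a threshold (stand-in for clustering self-learning)."""
    power = np.abs(h_time) ** 2
    return np.flatnonzero(power > power_threshold)

# A toy sparse channel: two paths at delays 3 and 40 out of 128 points.
N = 128
h_true = np.zeros(N, dtype=complex)
h_true[3], h_true[40] = 1.0, 0.5
H_freq = np.fft.fft(h_true)

h_est = time_domain_taps(H_freq)
kept = select_taps(h_est, power_threshold=0.1)
print(kept.tolist())  # only the two true path delays survive: [3, 40]
```

Because the tap distribution is sparse, only a small fraction of the N points needs to be stored, which is the storage saving the method targets.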
In one embodiment, storing the target taps and performing path screening on them comprises:
performing front and rear window fixed storage on the channel estimation time domain tap;
before the IFFT or IDFT output signal is stored, periodically learning and updating the tap cluster distribution over the N_IFFT points with the clustering algorithm, dividing the suspicious signal windows into clusters, performing cluster learning on the cluster centers within a period, and, after the periodic learning is completed, performing inter-cluster screening and merging on all clusters to determine the start position of each cluster;
storing the subsequent time domain taps based on the screened cluster set of the tap cluster distribution and the correspondingly stored true start-position pairs;
and performing periodic learning, and repeating the steps after performing front and rear window fixed storage on the channel estimation time domain tap.
In one embodiment, the fixed front/rear window storage of the channel estimation time domain taps specifically includes:
performing an inverse Fourier transform on the frequency domain channel estimation value that has not been noise-filtered to obtain the channel estimation time domain taps;
when the fixed front window length is set to L1, saving the time domain taps from point 0 to point L1 - 1;
and when the fixed rear window length is set to L2, saving the time domain taps from point N - L2 to point N - 1.
In one embodiment, after the periodic learning is completed, the inter-cluster screening and merging is performed on all clusters, including:
initially learning and recording the learning count countN as 1; selecting candidate cluster centers to be learned; selecting a time domain tap whose power satisfies the power threshold, i.e. P(n) > P_th; taking that time domain tap as the center of a cluster; determining the window length M of the cluster; and storing the maximum path power and index of the current window. The power threshold P_th is determined by simulation, and at most N clusters are created;
adjusting the learning and update results by re-ordering the maintained clusters by cluster center index from small to large, and updating the center power and effective-point power sum of each cluster accordingly;
entering a periodic learning updating stage, repeating the steps before the periodic learning updating stage, and updating the learning times countN;
after the periodic learning update is completed, determining the center indexes and the cluster power sums of the N clusters, and performing boundary judgment and cluster communication combination;
and after the clusters are merged, judging effective clusters in descending order of cluster power sum, the criterion being that selection stops when the maximum tap storage is used up.
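The effective-cluster judgment above — taking clusters in descending order of power sum until the maximum tap storage is used up — can be sketched as follows; the `(start, length, power_sum)` cluster layout is an assumption for illustration, not the claimed data structure:

```python
def select_effective_clusters(clusters, max_taps):
    """Pick clusters in descending power-sum order until the tap storage
    budget is exhausted. Each cluster is (start_index, length, power_sum)."""
    chosen = []
    used = 0
    for start, length, psum in sorted(clusters, key=lambda c: -c[2]):
        if used + length > max_taps:
            break  # the maximum tap storage is used up
        chosen.append((start, length))
        used += length
    return chosen

# Three candidate clusters; only 16 taps of storage are available.
clusters = [(0, 8, 5.0), (100, 8, 0.2), (300, 8, 2.5)]
print(select_effective_clusters(clusters, max_taps=16))  # [(0, 8), (300, 8)]
```

The weakest cluster (power sum 0.2) is dropped first, which matches the descending-power selection rule in the text.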
In one embodiment, selecting a time domain tap that satisfies the power threshold, taking it as the center of a cluster, determining the window length M of the cluster, and storing the maximum path power and index of the current window includes:
if index n is already within the window range of a maintained cluster id, i.e. |n - centerTs[id]| <= M/2, where centerTs[id] is the center index of the id-th cluster, updating the accumulated power sum sumPTs[id] of the effective points within the cluster radius M/2; and, when the tap power at index n is greater than the center power of the cluster, replacing the window's center index and maximum power;
and if index n is not within the window range of any maintained cluster id, creating a new cluster; when the total number of clusters exceeds N, comparing the new cluster with the existing cluster of minimum center power, and if the new cluster's power exceeds that minimum center power, or exceeds the statistical average minimum power sum of the clusters, replacing that cluster and updating the window's center index and power.
In one embodiment, after storing the target taps, performing path screening on them, and obtaining the processed time domain taps, a discrete Fourier transform is performed on the processed time domain taps to obtain the filtered frequency domain channel estimation value.
In a second aspect, the present application further provides a time domain tap storage apparatus. The device includes:
the calculation processing module is used for acquiring a frequency domain channel estimation value of the reference signal;
a tap obtaining module, configured to determine a channel estimation time domain tap based on the frequency domain channel estimation value;
the self-learning processing module is used for carrying out clustering self-learning processing on the time domain tap to obtain a first result and determining a target tap based on the first result;
and the storage processing module is used for storing the target taps and performing path screening on them to obtain processed time domain taps.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target taps, and performing path screening on them to obtain processed time domain taps.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target taps, and performing path screening on them to obtain processed time domain taps.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target taps, and performing path screening on them to obtain processed time domain taps.
According to the time domain tap storage method and device, the computer equipment, the storage medium, and the computer program product, when the channel estimation time domain taps are output, clustering self-learning is performed on them based on their distribution sparsity to obtain the tap cluster distribution over the N_IFFT points; the time domain taps are divided into a plurality of clusters according to that distribution, and the effective time domain tap clusters are screened out for storage, which significantly saves storage space and avoids missing time domain taps;
in the process of screening the effective time domain tap clusters, each new time domain tap is compared only with the centers of the established clusters, so the computational complexity is kept low through simple logical judgments, and the computation load of the DFT-based filtering process is not increased.
Drawings
FIG. 1 is a diagram of an exemplary implementation of a time domain tap storage method;
FIG. 2 is a flow diagram of a time domain tap storage method in one embodiment;
FIG. 3 is a flowchart illustrating the step of obtaining a post-processing time domain tap in one embodiment;
FIG. 4 is a schematic flow chart of the tap clustering step in one embodiment;
FIG. 5 is a diagram illustrating initial stage time domain tap preservation in one embodiment;
FIG. 6 is a diagram illustrating tap cluster learning in one embodiment;
FIG. 7 is a diagram of a time-domain tap cluster storage distribution in one embodiment;
fig. 8 is a flow chart illustrating a time domain tap storage method in an NR uplink receiver;
fig. 9 is a flowchart illustrating a time domain tap storage method in an LTE downlink receiver;
FIG. 10 is a diagram illustrating a 1024-point time-domain tap distribution in another embodiment;
FIG. 11 is a block diagram of a time domain tap storage arrangement in one embodiment;
FIG. 12 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The time domain tap storage method provided in the embodiment of the present application may be applied to an application environment as shown in fig. 1, and implement digital communication between the terminal device 100 and the base station 200. The terminal device 100 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers. In an NR system or an LTE system, a base station is called eNodeB, eNB for short, and a terminal device is called UE.
In the NR system, a base station eNB receiver receives an uplink reference signal, performs channel estimation calculation on the uplink reference signal, performs frequency domain filtering on the obtained unfiltered frequency domain channel estimation value based on a DFT method, and adds clustering learning and control storage on time domain tap distribution in a flow based on the DFT filtering.
In an LTE system, a terminal UE receiver receives a downlink reference signal, performs channel estimation calculation on the downlink reference signal, performs frequency domain filtering on an obtained unfiltered frequency domain channel estimation value based on a DFT method, and adds time domain tap distribution clustering learning and control storage in a flow based on the DFT filtering.
In one embodiment, as shown in fig. 2, a time domain tap storage method is provided, which is described by taking reference signal processing as an example when the method is applied to the digital communication system in fig. 1, and includes the following steps:
s202, acquiring a frequency domain channel estimation value of the reference signal.
The reference signal is an uplink reference signal sent by the terminal device or a received downlink reference signal.
Specifically, the terminal device sends an uplink reference signal to the base station, and the receiver of the base station receives the uplink reference signal. And after receiving the uplink reference signal, performing channel estimation on the uplink reference signal to obtain a frequency domain channel estimation value without noise filtering.
Or a receiver of the terminal equipment receives a downlink reference signal sent by the base station, and performs channel estimation on the downlink reference signal after receiving the downlink reference signal to obtain a frequency domain channel estimation value without noise filtering.
S204, determining a channel estimation time domain tap based on the frequency domain channel estimation value.
Specifically, the obtained frequency domain channel estimation value without noise filtering is passed to the IDFT module, which performs the IDFT transformation and outputs the channel estimation time domain taps.
S206, clustering self-learning processing is carried out on the time domain tap to obtain a first result, and a target tap is determined based on the first result.
The first result is the tap distribution obtained by clustering self-learning on the sparsely distributed time domain taps, and the target taps are the specific clusters whose time domain taps need to be stored according to the clustering self-learning.
Specifically, clustering self-learning groups similar time domain taps into the same category according to a formulated similarity criterion, and the categories whose time domain taps are to be stored are selected by that criterion. Each such category is called a cluster; that is, the clusters whose time domain taps are to be stored are selected. Path screening is then performed on the time domain taps of the stored clusters to obtain the screened time domain taps.
And S208, storing the target taps and performing path screening on them to obtain the processed time domain taps.
Specifically, DFT processing is performed on the filtered time domain taps to obtain the filtered frequency domain channel estimation value, thereby completing the filtering of the frequency domain channel estimation; the filtered frequency domain value is thus determined from the processed time domain taps.
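The DFT-based filtering round trip described in this step can be illustrated as follows; `keep_indices` stands in for the taps retained by the cluster storage, and the function name is invented for the sketch:

```python
import numpy as np

def dft_filter(H_freq, keep_indices):
    """DFT-based noise filtering: IDFT to time domain taps, zero all but
    the stored taps, then DFT back to a filtered frequency-domain estimate."""
    h = np.fft.ifft(H_freq)
    h_kept = np.zeros_like(h)
    h_kept[keep_indices] = h[keep_indices]
    return np.fft.fft(h_kept)

# A single-path channel at delay 2, plus time-domain noise on all 64 points.
N = 64
rng = np.random.default_rng(0)
h_true = np.zeros(N, dtype=complex)
h_true[2] = 1.0
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
H_noisy = np.fft.fft(h_true + noise)

H_filt = dft_filter(H_noisy, keep_indices=[2])
err_before = np.linalg.norm(H_noisy - np.fft.fft(h_true))
err_after = np.linalg.norm(H_filt - np.fft.fft(h_true))
# Keeping only the true tap's delay removes most of the noise energy.
```

Zeroing the unsaved taps discards the noise energy outside the stored clusters, which is what makes the saved-tap set sufficient for the filtering step.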
In the above time domain tap storage method, the frequency domain channel estimation value is IDFT-transformed to obtain the channel estimation time domain taps; before the taps are output, clustering self-learning exploits their sparse distribution to obtain the tap cluster distribution over the N_IFFT points; the taps are divided into clusters based on that distribution, and the effective time domain tap clusters are screened out for storage, significantly saving storage space and avoiding missing time domain taps.
In the process of screening the effective time domain tap clusters, each new time domain tap is compared only with the centers of the established clusters, so the computational complexity is kept low through simple logical judgments, and the computation load of the DFT-based filtering process is not increased.
In one embodiment, as shown in fig. 3, storing the target taps and performing path screening on them includes:
s302, front and rear window fixed storage is carried out on the channel estimation time domain tap.
Fixed front/rear window storage saves space by keeping only a front window and a rear window of the time domain taps, based on an assumed signal distribution.
Specifically, an inverse Fourier transform is performed on the frequency domain channel estimation value that has not been noise-filtered to obtain the channel estimation time domain taps;
when the fixed front window length is set to L1, the time domain taps from point 0 to point L1 - 1 are saved;
when the fixed rear window length is set to L2, the time domain taps from point N - L2 to point N - 1 are saved.
Assume the time domain taps are h(n) = IDFT{H(k)}, n = 0, 1, ..., N - 1.
The fixedly saved parts are h(0), ..., h(L1 - 1) and h(N - L2), ..., h(N - 1),
where L1 is the fixed front window length, L2 is the fixed rear window length, H(k) is the frequency domain channel estimate without noise filtering, and IDFT{·} denotes the inverse Fourier transform.
The effective time domain taps in the front and rear windows are stored according to the set fixed length of the front and rear windows, as shown in fig. 5.
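The fixed front/rear window storage can be sketched as follows, assuming the taps are held in an array; the function name and the example window lengths are illustrative:

```python
import numpy as np

def fixed_window_save(h, front_len, rear_len):
    """Save taps h[0 .. front_len-1] and h[N-rear_len .. N-1],
    i.e. the fixed front and rear windows described in the text."""
    return h[:front_len].copy(), h[len(h) - rear_len:].copy()

h = np.arange(16)  # stand-in for 16 time-domain taps
front, rear = fixed_window_save(h, front_len=4, rear_len=3)
print(front.tolist(), rear.tolist())  # [0, 1, 2, 3] [13, 14, 15]
```

Everything between the two windows is left to the cluster learning of the next step, which decides which of the remaining taps are worth saving.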
S304, before the IFFT or IDFT output signal is stored, the clustering algorithm periodically learns, computes, and updates the tap cluster distribution over the N_IFFT points; the suspicious signal windows are divided into clusters, cluster learning is performed periodically on the cluster centers, and after the periodic learning is completed, inter-cluster screening and merging is performed on all clusters to determine the start position of each cluster.
Here IDFT is the inverse discrete Fourier transform and IFFT the inverse fast Fourier transform; the IFFT is a fast algorithm for the IDFT based on the same principle. The suspicious signal windows are the parts left outside the fixed storage, such as cluster0 and cluster1 in fig. 6, and a cluster is one component of the tap cluster distribution.
Specifically, the center of the suspicious signal window (the center of the cluster) is subjected to cluster learning by using a clustering algorithm, wherein the clustering algorithm used is a DBSCAN clustering algorithm, and the process is as described in fig. 4.
S402, initial learning stage: the learning count countN is recorded, initially 1. Candidate cluster centers are selected, and each path to be learned in the cluster is traversed; the power of a time domain tap serving as a cluster center must satisfy the power threshold, i.e. P(n) > P_th, where the power threshold P_th is determined by simulation. The qualifying time domain tap is taken as a center, the window length of the current cluster is set to M, and the maximum path power and index of the current window are stored; this index is the initial window center. At most N clusters can be created.
After the cluster centers are determined, it must be judged whether index n lies within the window range of a maintained cluster id; the criterion is |n - centerTs[id]| <= M/2 and P(n) > P_th.
Case 1, index n is within the window range of the maintained cluster id:
the accumulated power sum of the effective points within the cluster radius of index n is updated, sumPTs[id] += P(n). The tap power at index n is then compared with the center power of cluster id; when P(n) > maxPTs[id], the window's center index centerTs[id] and maximum power maxPTs[id] are replaced by n and P(n).
Case 2, index n is not within the window range of any maintained cluster id:
a new cluster is created; if the total number of clusters exceeds N, the new cluster is compared with the existing cluster of minimum center power. If the new cluster's power is greater than that minimum center power and that cluster's in-cluster power sum is the minimum, the window's center index and power are replaced; or, if the new cluster's power is greater than the minimum statistical-average power sum of the clusters, the window's center index and power are likewise replaced.
S404, update-result sorting stage: after the initial learning, the centers of all clusters are sorted by center index centerTs from small to large, the maintenance-information order of the corresponding clusters is adjusted accordingly, and the center power maxPTs and effective-point power sum sumPTs of each cluster are updated together.
S406, learning update stage: after the initial learning stage is completed, the periodic learning update stage is entered; in it, the operations of the initial learning stage and of the update-result sorting stage are repeated, and the learning count countN is updated each time the initial-learning operations are repeated.
S408, screening and merging: after the periodic learning update is completed, the center indexes centerTs and power sums sumPTs of the N clusters are obtained, the cluster boundaries are judged, and communicating clusters are merged according to the judgment result.
Specifically, the merged-cluster count starts from 0 and is at most N - 1. If two clusters are adjacent, i.e. the distance between one center index centerTs[i] and another center index centerTs[j] is within the window radius M/2, the two clusters are merged: the power sum of the merged cluster accumulates the power sums of both, sumPTs[merged] = sumPTs[i] + sumPTs[j]; the merged cluster's center index is the smaller of the two center indexes, and its window radius is increased by M/2.
Or, if a center index centerTs[j] already lies within the window of merged cluster i, the window of merged cluster i is increased by M/2 and its power sum accumulates sumPTs[j].
Otherwise the merged cluster's position and power sum are unchanged, and the merged-cluster count is not increased.
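The screening-and-merging rule for adjacent clusters can be sketched in simplified form. Here a cluster is reduced to a `(center, power_sum)` pair, adjacent centers within M/2 are merged by accumulating their power sums at the smaller center, and the window-widening bookkeeping from the text is omitted — the layout is an illustrative assumption:

```python
def merge_adjacent(clusters, M):
    """Merge clusters whose centers lie within M/2 of each other
    (communicating windows). Clusters are (center, power_sum) pairs;
    the merged cluster keeps the smaller center index."""
    merged = []
    for center, psum in sorted(clusters):
        if merged and center - merged[-1][0] <= M // 2:
            prev_c, prev_p = merged[-1]
            merged[-1] = (prev_c, prev_p + psum)  # accumulate power sums
        else:
            merged.append((center, psum))
    return merged

# Centers 10 and 13 are within M/2 = 4 of each other and get merged.
print(merge_adjacent([(10, 1.0), (13, 0.5), (40, 2.0)], M=8))
# [(10, 1.5), (40, 2.0)]
```

Merging communicating windows prevents one physical tap group from being split across two stored clusters, which would waste the tap budget.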
S306, the subsequent time domain taps are stored based on the screened cluster set of the tap cluster distribution and the correspondingly stored true start-position pairs.
Specifically, as shown in fig. 7, the figure marks the merged-cluster window start-position pairs of the tap cluster distribution.
And S308, performing periodic learning, and repeating the steps after performing front and rear window fixed storage on the channel estimation time domain tap.
Specifically, periodic learning comprises repeating steps S204 and S206 and saving the subsequent, not-yet-saved time domain taps according to these periodic steps.
In one embodiment, as shown in fig. 8, the base station 200 receives a signal transmitted by the terminal device 100 during actual operation of the NR uplink receiver based on the application environment shown in fig. 1. And the base station carries out channel estimation on the uplink reference signal to obtain a frequency domain channel estimation value without noise filtering. And then, transmitting the frequency domain channel estimation value without noise filtering to an IDFT module for IDFT conversion to obtain a channel estimation time domain tap.
In another embodiment, as shown in fig. 9, based on the application environment shown in fig. 1, during the actual operation of the LTE downlink receiver, the terminal device 100 receives a signal sent by the base station 200. And the base station carries out channel estimation on the downlink reference signal to obtain a frequency domain channel estimation value without noise filtering. And then, transmitting the frequency domain channel estimation value without noise filtering to an IDFT module for IDFT conversion to obtain a channel estimation time domain tap.
In actual operation of the NR uplink receiver or the LTE downlink receiver, before the channel estimation time domain taps are output, clustering learning exploits the sparse distribution of the taps to obtain the tap cluster distribution over the N_IFFT points; the taps are divided into clusters based on that distribution, and the effective time domain tap clusters are screened out and stored. The method specifically includes the following steps:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target taps, and performing path screening on them to obtain the processed time domain taps.
The clustering self-learning of the time domain tap specifically comprises the following steps:
firstly, the window length of the front window and the rear window of the time domain tap is set, and the time domain tap in the window range is stored.
Then, for the remaining time domain taps, the tap cluster distribution over the NIFFT points is periodically learned and updated by using the DBSCAN clustering algorithm: the suspicious signal window is divided into clusters, cluster learning is performed on the cluster centers, and after learning is completed, inter-cluster screening and merging are performed and the start position of each cluster is determined.
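As a rough illustration of how a density-based clustering pass can recover sparse tap clusters, the sketch below implements a simplified one-dimensional variant (a single pass over sorted above-threshold tap indexes, merging neighbours within `eps`). This simplification is an assumption on my part and much cruder than full DBSCAN; all names and parameter values are illustrative.

```python
def dbscan_1d(indexes, eps=8, min_pts=2):
    """Group sorted 1-D tap indexes: a point within eps of its predecessor joins
    the current cluster; clusters with fewer than min_pts points are dropped as
    noise. Returns (start index, end index) pairs."""
    indexes = sorted(indexes)
    clusters, current = [], []
    for i in indexes:
        if current and i - current[-1] > eps:
            if len(current) >= min_pts:
                clusters.append((current[0], current[-1]))
            current = []
        current.append(i)
    if len(current) >= min_pts:
        clusters.append((current[0], current[-1]))
    return clusters
```

For example, tap indexes [1, 3, 5, 400, 402, 900] yield the clusters (1, 5) and (400, 402), while the isolated index 900 is discarded as noise.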
The cluster learning of the cluster centers specifically comprises an initial learning stage, an update-result organizing stage, a learning-update stage, and a screening-and-merging stage.
Initial learning stage: the learning count countN is recorded, starting from countN = 1. Cluster centers to be learned are selected within the suspicious signal window, and each path to be learned in the cluster is traversed. The power of a time domain tap serving as a cluster center must meet a power threshold, i.e. P(n) > P_th, where the threshold P_th is determined by simulation. Taking a time domain tap meeting this condition as a center, the window length of the current cluster is determined as M, and the maximum path power and its index within the current window are stored, this index being the initial window center. At most N clusters can be created.
After a cluster center is determined, it must be judged whether an index n lies within the window range corresponding to a maintained cluster id; the judgment criterion is c_id − M/2 ≤ n ≤ c_id + M/2, where c_id denotes the center index of cluster id.
Case 1: the index n is within the window range corresponding to the maintained cluster id:
the number of effective points within the cluster radius of index n is updated and the power sum is accumulated. The tap power of index n is then compared with the center power of cluster id; when the tap power of index n is greater than the center power of cluster id, i.e. P(n) > P_center(id), the center index and the maximum power of the replacement window are updated;
Case 2: the index n is not within the window range corresponding to any maintained cluster id:
a new cluster is created. If the total number of clusters exceeds N, the new cluster is compared with the existing cluster having the minimum center power: when the power of the new cluster is greater than both the center power and the power sum of that minimum-center-power cluster, the center index and power of the replacement window are updated; or, when the power of the new cluster is greater than the cluster power sum with the minimum statistical mean, the center index and power of the replacement window are updated.
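Cases 1 and 2 of the cluster-center maintenance can be sketched as follows. This is a hedged reading of the stage described above, not the patented implementation; `learn_clusters` and its weakest-cluster replacement rule (a simplification of the two replacement conditions) are assumptions.

```python
def learn_clusters(power, p_thresh, M=40, N=4):
    """One initial-learning pass over tap powers.
    Case 1: a tap within M/2 of a maintained center updates that cluster.
    Case 2: otherwise a new cluster is created, replacing the weakest-center
    cluster once N clusters already exist (simplified replacement rule)."""
    clusters = []  # each: {"center": index, "center_pw": power, "sum_pw": power sum}
    for n, p in enumerate(power):
        if p < p_thresh:                         # only taps above the power threshold
            continue
        for c in clusters:
            if abs(n - c["center"]) <= M // 2:   # case 1: inside a maintained window
                c["sum_pw"] += p                 # accumulate the power sum
                if p > c["center_pw"]:           # stronger tap becomes the new center
                    c["center"], c["center_pw"] = n, p
                break
        else:                                    # case 2: create / replace a cluster
            new = {"center": n, "center_pw": p, "sum_pw": p}
            if len(clusters) < N:
                clusters.append(new)
            else:
                weakest = min(clusters, key=lambda c: c["center_pw"])
                if p > weakest["center_pw"]:
                    clusters[clusters.index(weakest)] = new
    # organizing step: sort the maintained clusters by center index
    return sorted(clusters, key=lambda c: c["center"])
```

Running this on a power profile with peaks at indexes 10 and 500 yields two clusters whose centers sit on the strongest taps, with the weaker neighbour at index 12 folded into the first cluster's power sum.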
Update-result organizing stage: after initial learning, the center indexes of all clusters are sorted from small to large, the maintenance information order of the corresponding clusters is adjusted accordingly, and the center power of each cluster and the effective-point power sum of each cluster are updated together.
Learning-update stage: after the initial learning stage is completed, a periodic learning-update stage is entered, in which the operation steps of the initial learning stage and of the update-result organizing stage are repeated; the learning count countN is updated each time the initial-learning steps are repeated.
Screening-and-merging stage: after the periodic learning updates are completed, the center indexes and cluster power sums of the N clusters are obtained, the cluster boundaries are judged, and clusters are connected and merged according to the judgment result.
Specifically, the count of merged clusters starts from 0 and can reach at most N − 1. If two clusters are adjacent, that is, the distance between the center index of one cluster and the center index of the other is within the window radius M/2, the two clusters are merged: the power sum of the merged cluster accumulates the power sums of the two clusters, the center index of the merged cluster is set to the smaller center index plus the window radius M/2, and the window length of the merged cluster increases by M/2.
Alternatively, if the center index of a cluster falls within the window of the merged cluster, the window length of the merged cluster is increased by M/2 and the power sum of the merged cluster accumulates that cluster's power sum.
Otherwise, the position and power sum of the merged cluster remain unchanged, and the count of newly added merged clusters does not increase.
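A hedged sketch of the merging rules just described (centers within M/2 merge, power sums accumulate, the merged center becomes the smaller center plus M/2, and the merged window grows by M/2); `merge_clusters` and its dict layout are illustrative assumptions, not the patented implementation.

```python
def merge_clusters(clusters, M=40):
    """clusters: dicts with 'center' and 'sum_pw', pre-sorted by center index."""
    merged = []
    for c in clusters:
        c = dict(c)
        c.setdefault("width", M)                 # each cluster starts with window M
        if merged and c["center"] - merged[-1]["center"] <= M // 2:
            prev = merged[-1]                    # adjacent: merge into previous
            prev["sum_pw"] += c["sum_pw"]        # accumulate both power sums
            prev["center"] += M // 2             # smaller center index + M/2
            prev["width"] += M // 2              # merged window grows by M/2
        else:                                    # otherwise: position unchanged
            merged.append(c)
    return merged
```

For instance, clusters centered at 100 and 115 (with M = 40) merge into one cluster centered at 120 with window length 60, while a distant cluster at 500 is left untouched.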
Secondly, subsequent time domain taps are stored based on the clusters determined by screening the tap cluster distribution and the correspondingly stored true start position pairs.
Finally, periodic learning is carried out, repeating the steps of setting the window lengths of the front and rear windows of the time domain taps and saving the time domain taps within the window ranges.
In this embodiment, when the NR uplink receiver or the LTE downlink receiver implements a channel estimation scheme using DFT-based frequency domain filtering, the present scheme is embedded in the DFT-based filtering process, and its distribution learning and controlled storage are performed on the IDFT output. By learning and storing only the effective tap clusters, storage cost is significantly reduced without omitting effective cluster information. Since the tap power calculation is already part of the DFT-based filtering process, only cluster-maintenance information and simple logic judgments are added, without excessive computational load.
In an embodiment, as shown in fig. 10, in a typical LTE downlink 100RB wideband scenario with an SFN channel (the channel with the largest delay spread defined by 3GPP), a 1024-point IFFT is performed on the frequency domain channel estimate before filtering and denoising to obtain the time domain tap signal distribution. The delay spread occupies almost the full time domain, so step 202 cannot be applied, i.e. the front and rear windows of the channel time domain taps cannot be saved directly. In this case, tap cluster distribution learning is performed over the NIFFT points: all suspicious signals are divided into clusters, each path to be learned is traversed, and all paths are covered. As can be seen from fig. 7, the four cluster start/end position pairs corresponding to the effective taps are (0, 36), (375, 415), (836, 875) and (990, 1024), so the time span occupied by the four clusters is (36 + 40 + 39 + 34)/1024 = 149/1024, approximately 15%. Considering system robustness, the storage space is expanded to 256 points, i.e. 25% of the full storage space, to store the clusters obtained by subsequent self-learning; overall, 75% of the storage space is still saved, a significant reduction.
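The storage figures in this example can be checked with a few lines of arithmetic over the four start/end pairs quoted above (the 256-point budget and 1024-point IFFT size are taken from the example itself):

```python
# Cluster start/end pairs for the effective taps in the fig. 7 example.
clusters = [(0, 36), (375, 415), (836, 875), (990, 1024)]

occupied = sum(end - start for start, end in clusters)  # points covered by clusters
fraction = occupied / 1024                              # share of the time domain
budget = 256 / 1024                                     # expanded storage budget

print(occupied, round(fraction * 100), round((1 - budget) * 100))  # 149 15 75
```

The four clusters cover 149 of 1024 points (about 15%), so even the robust 256-point (25%) budget leaves a 75% overall saving.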
In this embodiment, based on the sparse distribution characteristic of the time domain taps, periodic learning updates are performed with the DBSCAN clustering algorithm to determine the specific clusters whose time domain taps need to be stored; the effective taps are screened out for storage, significantly saving storage space.
On top of the time domain tap power calculation already present in the DFT-based filtering process, only cluster-window maintenance information and simple logic judgments need to be added, so the time domain tap storage space is saved at little computational cost.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turns or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a time domain tap storage apparatus for implementing the above-mentioned time domain tap storage method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the time domain tap storage apparatus provided below may refer to the limitations in the above time domain tap storage method, and details are not described herein again.
In one embodiment, as shown in fig. 11, there is provided a time domain tap storage apparatus comprising: a calculation processing module 1101, a tap obtaining module 1102, a self-learning processing module 1103, and a storage processing module 1104, wherein:
a calculation processing module 1101, configured to obtain a frequency domain channel estimation value of a reference signal;
a tap obtaining module 1102, configured to determine a channel estimation time domain tap based on the frequency domain channel estimation value;
the self-learning processing module 1103 is configured to perform clustering self-learning processing on the time domain tap to obtain a first result, and determine a target tap based on the first result;
and the storage processing module 1104 is configured to store the target tap, and perform path screening on the target tap to obtain a processed time domain tap.
Specifically, the calculation processing module 1101 further includes a channel estimation module, configured to perform channel estimation on a decoded downlink reference signal acquired by the terminal device 100 or a decoded uplink reference signal acquired by the base station 200, so as to obtain a frequency domain channel estimation value without noise filtering.
The tap obtaining module 1102 includes a Fourier module, configured to receive the frequency domain channel estimation value without noise filtering, perform an IDFT on it, and output the channel estimation time domain taps.
The self-learning processing module 1103 includes a cluster learning module, and is configured to periodically learn, calculate, and update tap cluster distribution of NIFFT points by using a DBSCAN clustering algorithm before storing IFFT or IDFT output signals, partition a suspicious signal window into clusters, and perform cluster learning on the centers of the clusters in a period.
The clustering self-learning of the time domain tap specifically comprises the following steps:
firstly, the window length of the front window and the rear window of the time domain tap is set, and the time domain tap in the window range is stored.
Then, for the remaining time domain taps, the tap cluster distribution over the NIFFT points is periodically learned and updated by using the DBSCAN clustering algorithm: the suspicious signal window is divided into clusters, cluster learning is performed on the cluster centers, and after learning is completed, inter-cluster screening and merging are performed and the start position of each cluster is determined.
The cluster learning comprises an initial learning stage, an update-result organizing stage, a learning-update stage, and a screening-and-merging stage.
First, the initial learning stage: the learning count countN is recorded, starting from countN = 1. Cluster centers to be learned are selected within the suspicious signal window, and each path to be learned in the cluster is traversed. The power of a time domain tap serving as a cluster center must meet a power threshold, i.e. P(n) > P_th, where the threshold P_th is determined by simulation. Taking a time domain tap meeting this condition as a center, the window length of the current cluster is determined as M, and the maximum path power and its index within the current window are stored, this index being the initial window center. At most N clusters can be created.
After a cluster center is determined, it must be judged whether an index n lies within the window range corresponding to a maintained cluster id; the judgment criterion is c_id − M/2 ≤ n ≤ c_id + M/2, where c_id denotes the center index of cluster id.
Case 1: the index n is within the window range corresponding to the maintained cluster id:
the number of effective points within the cluster radius of index n is updated and the power sum is accumulated. The tap power of index n is then compared with the center power of cluster id; when the tap power of index n is greater than the center power of cluster id, i.e. P(n) > P_center(id), the center index and the maximum power of the replacement window are updated;
Case 2: the index n is not within the window range corresponding to any maintained cluster id:
a new cluster is created. If the total number of clusters exceeds N, the new cluster is compared with the existing cluster having the minimum center power: when the power of the new cluster is greater than both the center power and the power sum of that minimum-center-power cluster, the center index and power of the replacement window are updated; or, when the power of the new cluster is greater than the cluster power sum with the minimum statistical mean, the center index and power of the replacement window are updated.
Then, the update-result organizing stage: after initial learning, the center indexes of all clusters are sorted from small to large, the maintenance information order of the corresponding clusters is adjusted accordingly, and the center power of each cluster and the effective-point power sum of each cluster are updated together.
Secondly, the learning-update stage: after the initial learning stage is completed, a periodic learning-update stage is entered, in which the operation steps of the initial learning stage and of the update-result organizing stage are repeated; the learning count countN is updated each time the initial-learning steps are repeated.
Finally, the screening-and-merging stage: after the periodic learning updates are completed, the center indexes and cluster power sums of the N clusters are obtained, the cluster boundaries are judged, and clusters are merged according to the judgment result.
Specifically, the count of merged clusters starts from 0 and can reach at most N − 1. If two clusters are adjacent, that is, the distance between the center index of one cluster and the center index of the other is within the window radius M/2, the two clusters are merged: the power sum of the merged cluster accumulates the power sums of the two clusters, the center index of the merged cluster is set to the smaller center index plus the window radius M/2, and the window length of the merged cluster increases by M/2.
Alternatively, if the center index of a cluster falls within the window of the merged cluster, the window length of the merged cluster is increased by M/2 and the power sum of the merged cluster accumulates that cluster's power sum.
Otherwise, the position and power sum of the merged cluster remain unchanged, and the count of newly added merged clusters does not increase.
The storage processing module 1104 includes a tap storage module, configured to compare and analyze the time domain taps meeting the requirements against the maintained clusters, update the clusters and adjust their maintenance information order, and store subsequent time domain taps based on the clusters determined by screening the tap cluster distribution and the correspondingly stored true start position pairs.
The various modules in the time domain tap storage apparatus described above may be implemented in whole or in part by software, hardware, or combinations thereof. The modules can be embedded in or independent of a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output interface, and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store specific cluster data of the valid time domain taps. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a time domain tap storage method.
It will be appreciated by those skilled in the art that the configuration shown in fig. 12 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, carries out the steps in the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices, or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A time domain tap storage method, the method comprising:
acquiring a frequency domain channel estimation value of a reference signal;
determining a channel estimation time domain tap based on the frequency domain channel estimation value;
performing clustering self-learning processing on the time domain tap to obtain a first result, and determining a target tap based on the first result;
and storing the target tap, and performing path screening on the target tap to obtain a processed time domain tap.
2. The method of claim 1, wherein the storing the target tap and performing path screening on the target tap comprises:
fixedly storing a front window and a rear window of the channel estimation time domain tap;
before IFFT or IDFT output signals are stored, updating the tap clustering distribution of NIFFT points by using clustering algorithm periodic learning, dividing suspicious signal windows into clusters, performing cluster learning on the centers of the clusters in a period, and after the periodic learning is completed, performing inter-cluster screening and merging on all the clusters to determine the initial position of each cluster;
based on the process of screening and determining clusters of tap clustering distribution and corresponding stored real initial position pairs, storing subsequent time domain taps;
and carrying out periodic learning, and repeating the steps after front and rear window fixed storage is carried out on the channel estimation time domain tap.
3. The method of claim 2, wherein the performing fixed pre-and post-window preservation of the channel estimation time domain taps specifically comprises:
carrying out inverse Fourier transform on the frequency domain channel estimation value which is not subjected to filtering noise to obtain a channel estimation time domain tap;
when the length of the fixed front window is set, saving the time domain taps from point 0 up to the point one less than the front window length;
4. The method of claim 2, wherein after the periodic learning is completed, performing inter-cluster screening and merging on all clusters, including:
initially learning with the learning count countN recorded as 1, selecting candidate cluster centers to be learned, selecting a time domain tap meeting a power threshold, taking the time domain tap as the center of the cluster, determining the window length M of the cluster, and storing the maximum path power and the index of the current window, the judgment rule for meeting the power threshold being that the tap power exceeds the threshold, wherein the power threshold is determined by simulation and the maximum number of clusters is N;
adjusting the learning and updating results by reordering the cluster maintenance information according to the cluster center indexes from small to large, and updating the center power and the effective-point power sum of each corresponding cluster;
entering a periodic learning updating stage, repeating the steps before the periodic learning updating stage, and updating the learning times countN;
after periodic learning and updating are completed, determining the center indexes and cluster power sums of the N clusters, performing boundary judgment, and connecting and merging the clusters;
and after the clusters are merged, judging the effective clusters sequentially in descending order of cluster power sum, the judgment criterion being that the maximum tap storage is used up.
5. The method of claim 4, wherein selecting the time domain tap that satisfies the power threshold, centering the time domain tap in a cluster and determining a window length M of the cluster, and storing the maximum path power and the index of the current window comprises:
index n is already within the window range corresponding to a maintained cluster id, i.e. the distance between n and the center index of the id-th cluster is within the cluster radius M/2: updating the accumulated power sum sumPTs[id] of the effective points within the cluster radius M/2, and, when the tap power of index n is greater than the center power of the cluster, updating the center index and the maximum power of the replacement window;
and if the index n is not within the window range corresponding to any maintained cluster id, creating a new cluster; when the total number of clusters exceeds N, comparing the new cluster with the existing cluster having the minimum center power, and when the power sum of the new cluster exceeds the minimum center power, or exceeds the statistically averaged minimum power sum of the clusters, updating the center index and the power of the replacement window.
6. The method of claim 1, wherein storing the target tap and performing path screening on the target tap to obtain a processed time domain tap further comprises performing a discrete Fourier transform on the processed time domain tap to obtain a filtered frequency domain channel estimation value.
7. An apparatus for time domain tap storage, the apparatus comprising:
the calculation processing module is used for acquiring a frequency domain channel estimation value of the reference signal;
a tap obtaining module, configured to determine a channel estimation time domain tap based on the frequency domain channel estimation value;
the self-learning processing module is used for carrying out clustering self-learning processing on the time domain tap to obtain a first result and determining a target tap based on the first result;
and the storage processing module is configured to store the target tap and perform path screening on the target tap to obtain a processed time domain tap.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310056173.4A CN115801504B (en) | 2023-01-17 | 2023-01-17 | Time domain tap storage method, device, computer equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115801504A true CN115801504A (en) | 2023-03-14 |
| CN115801504B CN115801504B (en) | 2023-06-13 |
Family
ID=85429744
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310056173.4A Active CN115801504B (en) | 2023-01-17 | 2023-01-17 | Time domain tap storage method, device, computer equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115801504B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100284447A1 (en) * | 2009-05-11 | 2010-11-11 | Qualcomm Incorporated | Frequency domain feedback channel estimation for an interference cancellation repeater including sampling of non causal taps |
| CN101997799A (en) * | 2009-08-20 | 2011-03-30 | 石强 | Orthogonal frequency division multiplexing (OFDM) channel estimation method based on filter bank |
| US20120307939A1 (en) * | 2009-12-29 | 2012-12-06 | Centre Of Excellence In Wireless Technology | Estimation of channel impulse response in a communication receiver |
| CN114465851A (en) * | 2021-12-25 | 2022-05-10 | 西北工业大学 | Cluster sparse underwater acoustic channel estimation method for optimizing kernel-width maximum-skip rule |
2023-01-17: CN202310056173.4A granted as patent CN115801504B (active)
Non-Patent Citations (1)
| Title |
|---|
| Dong Guangyang (董光阳): "Research on Channel Estimation of Low-Voltage Power Line OFDM Systems", pages 3 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115801504B (en) | 2023-06-13 |
Similar Documents
| Publication | Title |
|---|---|
| CN109195170B (en) | Cell capacity expansion method and device and storage medium |
| CN110868723B (en) | A multi-band iterative spectrum sensing method based on power variance comparison |
| Dong et al. | Age of information upon decisions |
| CN110677854A (en) | Method, apparatus, device and medium for carrier frequency capacity adjustment |
| CN118734033A (en) | Signal feature extraction method and device based on dynamic dilated convolution |
| CN116032702B (en) | Adaptive channel estimation method, apparatus, computer device and storage medium |
| CN110764975A (en) | Early warning method and device for equipment performance and monitoring equipment |
| CN114723071B (en) | Federal learning method and device based on client classification and information entropy |
| CN115801504B (en) | Time domain tap storage method, device, computer equipment and storage medium |
| CN110557351B (en) | Method and apparatus for generating information |
| CN115834303B (en) | Adaptive frequency domain channel estimation method, device, communication equipment and storage medium |
| CN119316942A (en) | Integrated sensing and communication frame structure configuration method, device, equipment, storage medium and program product |
| CN113709814A (en) | Load balancing method and device, computing equipment and computer readable storage medium |
| US20220004596A1 | Inverse matrix calculation device and inverse matrix calculation processing method |
| CN114553338B (en) | Method and device for determining interference signal, electronic equipment and readable storage medium |
| CN113438689B (en) | Energy-saving method, device, equipment and storage medium for a base station |
| KR101500922B1 (en) | A method and an apparatus for distributed estimation using an adaptive filter |
| CN118802938B (en) | Resource allocation methods, devices and equipment |
| CN110768736B (en) | Channel simulation method and device |
| CN116318464B (en) | Self-adaptive threshold selection method and device for wireless link monitoring |
| CN116032701A (en) | Channel estimation method, device, communication device and storage medium |
| US20170139969A1 | Method for filtering and analyzing big data, electronic device, and non-transitory computer-readable storage medium |
| CN110554916B (en) | Distributed cluster-based risk index calculation method and device |
| CN116455719B (en) | Frequency offset estimation method, device, communication equipment and readable storage medium |
| CN113347660A (en) | Communication signal detection method, apparatus, device and medium |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |