CN117236201A - A downscaling method based on Diffusion and ViT - Google Patents
- Publication number
- CN117236201A CN117236201A CN202311525721.XA CN202311525721A CN117236201A CN 117236201 A CN117236201 A CN 117236201A CN 202311525721 A CN202311525721 A CN 202311525721A CN 117236201 A CN117236201 A CN 117236201A
- Authority
- CN
- China
- Prior art keywords
- model
- diffusion
- steps
- precipitation
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a downscaling method based on Diffusion and ViT, which comprises the following steps: S1, establishing low-resolution numerical model precipitation forecasts and paired high-resolution precipitation observation samples, and preprocessing them; S2, constructing a Diffusion-Vision-Transformer precipitation prediction model; S3, training the model until the error of the Diffusion-Vision-Transformer converges, then saving the model and making predictions. By replacing the U-Net structure in the original Diffusion model with a Vision Transformer, the invention greatly improves the training efficiency of the model and reduces its prediction time.
Description
Technical Field
The invention relates to the technical field of weather forecasting, and in particular to a downscaling method based on Diffusion and ViT.
Background
Most traditional statistical downscaling methods are based on linear frameworks and struggle to process complex, high-dimensional meteorological field data or to characterize the nonlinear dynamics of the atmosphere. The rise of deep learning offers new directions for characterizing high-dimensional, strongly nonlinear data such as meteorological element fields. By using efficient spatial feature extraction modules to extract key information from high-dimensional spatial data, a statistical model mapping low-resolution input to high-resolution output can be established; such deep learning models have been applied effectively to tasks like image denoising and image resolution enhancement, and are generally called super-resolution models. However, how to efficiently transfer such models to the meteorological downscaling problem, and to further improve their computational efficiency and prediction accuracy, still requires further research and exploration.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a downscaling method based on Diffusion and ViT to address the insufficient spatial resolution and large prediction error of numerical model precipitation forecasts.
The technical scheme is as follows: the invention discloses a downscaling method based on Diffusion and ViT, which comprises the following steps:
s1: establishing a low-resolution numerical mode precipitation forecast and a high-resolution precipitation observation sample, and preprocessing;
S2: constructing a Diffusion-Vision-Transformer precipitation prediction model; the method comprises the following steps:
s21: forward noise adding is carried out on the high-resolution precipitation observation sample in the Diffusion model;
S22: extracting high-order spatial features of the low-resolution numerical model precipitation forecast by using a Vision Transformer model;
S23: denoising the result obtained in step S21 in the Diffusion model, and introducing the high-order spatial features obtained in step S22 as condition information to obtain a downscaled high-resolution precipitation forecast;
S3: training the model until the error of the Diffusion-Vision-Transformer converges, and saving the model and predicting.
Further, in the step S1, the preprocessing includes: the data set is subjected to operations of logarithmization and normalization.
Further, the specific process of step S21 is as follows:
Let the preprocessed high-resolution precipitation observation sample at a given moment be $x_0$; random Gaussian noise $\epsilon$ is added to the original observation stepwise over $T$ steps, yielding $x_1, \dots, x_T$. The data distribution at time $t$ given the previous time $t-1$ is:

$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)$

where $\beta_t$ is a preset constant hyperparameter ranging between 0 and 1.

The data distribution at any time $t$ can be obtained directly from the time-0 data $x_0$ by the following formula:

$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right)$

where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$; then $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$.
Further, step S22 is specifically as follows: input the paired high-resolution precipitation observation sample $x_0$ and the low-resolution numerical model precipitation forecast $c$, and determine the number of forward noising steps $T$ and the variance hyperparameters $\beta_t$ of the added random Gaussian noise.
Further, the step S23 includes the following steps:
s231: dividing the low-resolution numerical mode precipitation forecast into a plurality of image blocks, and then carrying out linear mapping on the divided image blocks;
s232: the position information of different image blocks is represented by position codes, and the processed coding information is used as the input of N groups of self-attention modules;
s233: the convolution operation is replaced with a spatial self-attention module.
Further, the formula of step S231 is as follows:

$z = W p + b$

where $p$ is a group of segmented image patches, $W$ is the weight matrix to be trained, $b$ is the intercept (bias) coefficient to be trained, and $z$ is the set of linearly mapped vectors.
Further, the position encoding in step S232 is a two-dimensional position embedding method.
Further, the step S233 specifically includes the following steps:
Let a group of segmented patches be $z$. Using three sets of weights, namely the query weight $W_Q$, key weight $W_K$, and value weight $W_V$, the raw data is projected into three features: the query matrix $Q$, key matrix $K$, and value matrix $V$. The self-attention corresponding to $z$ is then given by:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$

where $\sqrt{d_k}$ is the square root of the dimension of $K$.
Further, the step S3 specifically includes the following steps:
The result obtained through steps S21-S22 is $\epsilon_\theta(x_t, c, t)$, where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $x_0$ is the paired high-resolution precipitation observation sample, $\beta_t$ is the hyperparameter preset in step S21, and $T$ is the number of forward noising steps in step S21. The prediction error $L$ of the Diffusion-Vision-Transformer model in step S3 is given by:

$L = \mathbb{E}_{t, x_0, \epsilon}\left[\left\| \epsilon - \epsilon_\theta(x_t, c, t) \right\|^2\right]$

where $\epsilon \sim \mathcal{N}(0, I)$ is random Gaussian noise and $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$;

When the prediction error $L$ of the Diffusion-Vision-Transformer model converges, the reverse process is deduced backwards from step $T$ until the model prediction $x_0$ is obtained; the previous step $x_{t-1}$ is obtained from the next step $x_t$ by the following formula:

$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, c, t)\right) + \sigma_t z$

where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $\beta_t$ is the hyperparameter preset in step S21, $z \sim \mathcal{N}(0, I)$ is random Gaussian noise, and $\sigma_t = \sqrt{\beta_t}$.
An apparatus of the present invention includes a memory, a processor, and a program stored on the memory and executable on the processor, the processor implementing the steps in any of the above downscaling methods based on Diffusion and ViT when the program is executed.
Beneficial effects: compared with the prior art, the invention has the following notable advantages: (1) the Diffusion model improves the refinement of the downscaled forecast, and is particularly advantageous for tasks with downscaling factors exceeding 4; (2) by replacing the U-Net structure in the original Diffusion model with a Vision Transformer, the training efficiency of the model is greatly improved and its prediction time is reduced.
Drawings
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a schematic diagram of the training flow of the Diffusion-ViT model;
FIG. 3 is a schematic diagram of the Diffusion model;
FIG. 4 is a schematic diagram of the Vision Transformer model.
Description of the embodiments
The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a downscaling method based on Diffusion and ViT, which includes the following steps:
S1: establishing low-resolution numerical model precipitation forecasts and high-resolution precipitation observation samples, and preprocessing them; the preprocessing comprises logarithmization and normalization of the data set.
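The preprocessing step above can be sketched as follows. The specific transforms are assumptions for illustration: `log1p` for the logarithmization (it is well defined at zero rainfall) and min-max scaling for the normalization, since the patent does not specify either.

```python
import numpy as np

def preprocess(precip):
    """Logarithmize then normalize a precipitation field to [0, 1].

    log1p handles zero-rainfall grid points; min-max scaling is one
    plausible normalization (the patent does not name the exact one).
    """
    logged = np.log1p(precip)                 # log(1 + x), safe at x = 0
    lo, hi = logged.min(), logged.max()
    return (logged - lo) / (hi - lo + 1e-8)   # scale into [0, 1]

field = np.array([[0.0, 1.0], [10.0, 100.0]])  # toy precipitation field (mm)
scaled = preprocess(field)
```

The same statistics (`lo`, `hi`) would be saved from the training set and reused at prediction time so forecasts can be mapped back to physical units.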
As shown in fig. 2, S2: constructing a Diffusion-Vision-Transformer precipitation prediction model; the method comprises the following steps:
S21: forward noising is performed on the high-resolution precipitation observation sample in the Diffusion model. Specifically, as shown in fig. 3, let the preprocessed high-resolution precipitation observation sample at a given moment be $x_0$; random Gaussian noise $\epsilon$ is added to the original observation stepwise over $T$ steps, yielding $x_1, \dots, x_T$. The data distribution at time $t$ given the previous time $t-1$ is:

$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)$

where $\beta_t$ is a preset constant hyperparameter ranging between 0 and 1.

The data distribution at any time $t$ can be obtained directly from the time-0 data $x_0$ by the following formula:

$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right)$

where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$; then $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$.
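The closed-form forward noising of S21 can be sketched as below. The linear β schedule, step count, and field size are illustrative assumptions, not values from the patent.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # preset constant hyperparameters beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative products alpha_bar_t

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))     # toy high-resolution observation sample
xt, eps = forward_noise(x0, t=T - 1, rng=rng)
```

At the final step the signal coefficient $\sqrt{\bar{\alpha}_T}$ is nearly zero, so $x_T$ is almost pure Gaussian noise, which is what makes the reverse process start from noise.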
S22: extracting high-order spatial features of the low-resolution numerical model precipitation forecast by using the Vision Transformer model. Specifically: input the paired high-resolution precipitation observation sample $x_0$ and the low-resolution numerical model precipitation forecast $c$, and determine the number of forward noising steps $T$ and the variance hyperparameters $\beta_t$ of the added random Gaussian noise.
S23: denoising the result obtained in step S21 in the Diffusion model, and introducing the high-order spatial features obtained in step S22 as condition information, to obtain a downscaled high-resolution precipitation forecast. The method comprises the following steps:
S231: as shown in fig. 4, the low-resolution numerical model precipitation forecast is divided into a plurality of image patches, and the segmented patches are then linearly mapped. The formula is as follows:

$z = W p + b$

where $p$ is a group of segmented image patches, $W$ is the weight matrix to be trained, $b$ is the intercept (bias) coefficient to be trained, and $z$ is the set of linearly mapped vectors.
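The patch splitting and linear mapping of S231 can be sketched as follows. The patch size (4×4) and embedding dimension (64) are illustrative assumptions.

```python
import numpy as np

def patchify(img, patch=4):
    """Split an H x W field into flattened non-overlapping patch vectors."""
    h, w = img.shape
    rows = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, h, patch)
            for j in range(0, w, patch)]
    return np.stack(rows)                     # (num_patches, patch*patch)

rng = np.random.default_rng(0)
field = rng.standard_normal((16, 16))         # toy low-resolution forecast
p = patchify(field)                           # 16 patches, each of length 16
W = rng.standard_normal((16, 64)) * 0.02      # weight matrix to be trained
b = np.zeros(64)                              # bias (intercept) to be trained
z = p @ W + b                                 # linear mapping z = W p + b
```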
S232: the position information of the different image patches is represented by position encodings, and the encoded information serves as the input of N groups of self-attention modules. The position encoding is a two-dimensional position embedding: by encoding the position of each patch relative to the X-axis and the Y-axis, different patches are given different position encodings.
S233: the convolution operation is replaced with a spatial self-attention module. The method comprises the following steps:
set a group of divided blocks asThree sets of weights are utilized, namely query weight +.>Key weight->Numerical weight->Raw data is divided into three features: query matrix->Key value matrix->Matrix of values->The method comprises the steps of carrying out a first treatment on the surface of the Then->Corresponding self-attention->The formula is as follows:
;
wherein,is->Square root of dimension. The spatial self-attention module consists of a regularization layer, a multi-head self-attention, a residual structure and a feedforward neural network.
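The single-head form of the attention formula above can be sketched as follows; the embedding and key dimensions are illustrative assumptions.

```python
import numpy as np

def self_attention(z, Wq, Wk, Wv):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = z @ Wq, z @ Wk, z @ Wv
    dk = K.shape[-1]
    scores = Q @ K.T / np.sqrt(dk)                    # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 64))                     # 16 patch embeddings
Wq, Wk, Wv = (rng.standard_normal((64, 32)) * 0.1 for _ in range(3))
out, attn = self_attention(z, Wq, Wk, Wv)
```

Each row of `attn` sums to 1, so every patch's output is a convex combination of all patch values, which is how the module captures global spatial context that a convolution's local kernel does not.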
S3: training the model until the error of the Diffusion-Vision-Transformer converges, then saving the model and predicting. Specifically:
The result obtained through steps S21-S22 is $\epsilon_\theta(x_t, c, t)$, where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $x_0$ is the paired high-resolution precipitation observation sample, $\beta_t$ is the hyperparameter preset in step S21, and $T$ is the number of forward noising steps in step S21. The prediction error $L$ of the Diffusion-Vision-Transformer model in step S3 is given by:

$L = \mathbb{E}_{t, x_0, \epsilon}\left[\left\| \epsilon - \epsilon_\theta(x_t, c, t) \right\|^2\right]$

where $\epsilon \sim \mathcal{N}(0, I)$ is random Gaussian noise and $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$;
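The noise-prediction training loss can be sketched as below. The β schedule, field sizes, and the `toy_model` stand-in for the Diffusion-Vision-Transformer noise predictor are all illustrative assumptions.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # assumed linear beta schedule
alpha_bars = np.cumprod(1.0 - betas)

def diffusion_loss(x0, cond, eps_model, t, rng):
    """Noise-prediction loss L = mean || eps - eps_model(x_t, cond, t) ||^2."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    pred = eps_model(xt, cond, t)
    return np.mean((eps - pred) ** 2)

# Stand-in predictor (always zero) so the loss is simply the mean squared
# noise; a real run would use the conditioned ViT network here.
toy_model = lambda xt, cond, t: np.zeros_like(xt)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))       # paired high-resolution observation
cond = rng.standard_normal((2, 2))     # low-resolution forecast (condition c)
loss = diffusion_loss(x0, cond, toy_model, t=500, rng=rng)
```

During training, `t` would be drawn uniformly from `[0, T)` each step and the loss minimized by gradient descent over the model parameters.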
When the prediction error $L$ of the Diffusion-Vision-Transformer model converges, the reverse process is deduced backwards from step $T$ until the model prediction $x_0$ is obtained; the previous step $x_{t-1}$ is obtained from the next step $x_t$ by the following formula:

$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, c, t)\right) + \sigma_t z$

where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $\beta_t$ is the hyperparameter preset in step S21, $z \sim \mathcal{N}(0, I)$ is random Gaussian noise, and $\sigma_t = \sqrt{\beta_t}$.
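The reverse deduction from step $T$ down to the prediction $x_0$ can be sketched as follows. The short schedule, the choice $\sigma_t = \sqrt{\beta_t}$, and the zero-output `toy_model` stand-in for the trained noise predictor are assumptions for illustration.

```python
import numpy as np

T = 50                                   # few steps, for a quick toy run
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_sample(eps_model, cond, shape, rng):
    """Deduce x_{t-1} from x_t step by step, from t = T-1 down to x_0."""
    x = rng.standard_normal(shape)       # start from pure Gaussian noise x_T
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        eps = eps_model(x, cond, t)
        x = (x - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps) \
            / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z
    return x

toy_model = lambda x, cond, t: np.zeros_like(x)   # stand-in noise predictor
rng = np.random.default_rng(0)
cond = rng.standard_normal((2, 2))                # low-resolution condition
x0_hat = reverse_sample(toy_model, cond, (8, 8), rng)
```

Note that no noise `z` is added at the final step `t = 0`, so the last update is deterministic given the predicted noise.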
The embodiment of the invention also provides a device comprising a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor implements the steps in any of the above downscaling methods based on Diffusion and ViT when executing the program.
Claims (10)
1. A downscaling method based on Diffusion and ViT, comprising the steps of:
s1: establishing a low-resolution numerical mode precipitation forecast and a high-resolution precipitation observation sample, and preprocessing;
s2: constructing a Diffusion-Vision-Transformer precipitation prediction model; the method comprises the following steps:
s21: forward noise adding is carried out on the high-resolution precipitation observation sample in the Diffusion model;
s22: extracting high-order spatial features of the low-resolution numerical model precipitation forecast by using a Vision Transformer model;
s23: denoising the result obtained in step S21 in the Diffusion model, and introducing the high-order spatial features obtained in step S22 as condition information to obtain a downscaled high-resolution precipitation forecast;
s3: training the model until the error of the Diffusion-Vision-Transformer converges, and saving the model and predicting.
2. The downscaling method based on Diffusion and ViT of claim 1, wherein the preprocessing in step S1 comprises: the data set is subjected to operations of logarithmization and normalization.
3. The downscaling method based on the Diffusion and the ViT according to claim 1, wherein the specific procedure of the step S21 is as follows:
Let the preprocessed high-resolution precipitation observation sample at a given moment be $x_0$; random Gaussian noise $\epsilon$ is added to the original observation stepwise over $T$ steps, yielding $x_1, \dots, x_T$. The data distribution at time $t$ given the previous time $t-1$ is:

$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)$

where $\beta_t$ is a preset constant hyperparameter ranging between 0 and 1.

The data distribution at any time $t$ can be obtained directly from the time-0 data $x_0$ by the following formula:

$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right)$

where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$; then $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$.
4. The downscaling method based on Diffusion and ViT of claim 1, wherein step S22 is specifically as follows: input the paired high-resolution precipitation observation sample $x_0$ and the low-resolution numerical model precipitation forecast $c$, and determine the number of forward noising steps $T$ and the variance hyperparameters $\beta_t$ of the added random Gaussian noise.
5. The downscaling method based on Diffusion and ViT of claim 1, wherein the step S23 comprises the steps of:
s231: dividing the low-resolution numerical mode precipitation forecast into a plurality of image blocks, and then carrying out linear mapping on the divided image blocks;
s232: the position information of different image blocks is represented by position codes, and the processed coding information is used as the input of N groups of self-attention modules;
s233: the convolution operation is replaced with a spatial self-attention module.
6. The downscaling method based on Diffusion and ViT of claim 4, wherein the formula of step S231 is as follows:

$z = W p + b$

where $p$ is a group of segmented image patches, $W$ is the weight matrix to be trained, $b$ is the intercept (bias) coefficient to be trained, and $z$ is the set of linearly mapped vectors.
7. The downscaling method based on Diffusion and ViT of claim 4, wherein the position encoding of step S232 is a two-dimensional position embedding method.
8. The downscaling method based on Diffusion and ViT of claim 4, wherein the step S233 is specifically as follows:
Let a group of segmented patches be $z$. Using three sets of weights, namely the query weight $W_Q$, key weight $W_K$, and value weight $W_V$, the raw data is projected into three features: the query matrix $Q$, key matrix $K$, and value matrix $V$. The self-attention corresponding to $z$ is then given by:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$

where $\sqrt{d_k}$ is the square root of the dimension of $K$.
9. The downscaling method based on Diffusion and ViT according to claim 1, wherein the step S3 is specifically as follows:
The result obtained through steps S21-S22 is $\epsilon_\theta(x_t, c, t)$, where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $x_0$ is the paired high-resolution precipitation observation sample, $\beta_t$ is the hyperparameter preset in step S21, and $T$ is the number of forward noising steps in step S21. The prediction error $L$ of the Diffusion-Vision-Transformer model in step S3 is given by:

$L = \mathbb{E}_{t, x_0, \epsilon}\left[\left\| \epsilon - \epsilon_\theta(x_t, c, t) \right\|^2\right]$

where $\epsilon \sim \mathcal{N}(0, I)$ is random Gaussian noise and $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$;

When the prediction error $L$ of the Diffusion-Vision-Transformer model converges, the reverse process is deduced backwards from step $T$ until the model prediction $x_0$ is obtained; the previous step $x_{t-1}$ is obtained from the next step $x_t$ by the following formula:

$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, c, t)\right) + \sigma_t z$

where $\epsilon_\theta$ is the model obtained in steps S21-S22, $c$ is the low-resolution numerical model precipitation forecast, $\beta_t$ is the hyperparameter preset in step S21, $z \sim \mathcal{N}(0, I)$ is random Gaussian noise, and $\sigma_t = \sqrt{\beta_t}$.
10. An apparatus comprising a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor performs the steps in the downscaling method based on Diffusion and ViT as claimed in any one of claims 1-9 when the program is executed.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311525721.XA CN117236201B (en) | 2023-11-16 | 2023-11-16 | Diffusion and ViT-based downscaling method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311525721.XA CN117236201B (en) | 2023-11-16 | 2023-11-16 | Diffusion and ViT-based downscaling method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117236201A true CN117236201A (en) | 2023-12-15 |
| CN117236201B CN117236201B (en) | 2024-02-23 |
Family
ID=89098904
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311525721.XA Active CN117236201B (en) | 2023-11-16 | 2023-11-16 | Diffusion and ViT-based downscaling method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117236201B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118033590A (en) * | 2024-04-12 | 2024-05-14 | 南京信息工程大学 | A short-term precipitation forecasting method based on improved VIT neural network |
| CN118366046A (en) * | 2024-06-20 | 2024-07-19 | 南京信息工程大学 | Wind field downscaling method based on deep learning and combining with topography |
| CN119720071A (en) * | 2024-11-01 | 2025-03-28 | 中国气象局成都高原气象研究所 | A spatial downscaling method for precipitation fields |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6688180B1 (en) * | 1999-07-05 | 2004-02-10 | Sinvent As | Multi-test assembly for evaluating, detecting and mountoring processes at elevated pressure |
| US20080179742A1 (en) * | 2006-07-24 | 2008-07-31 | Interuniversitair Microelektronica Centrum (Imec) | Method and solution to grow charge-transfer complex salts |
| CN109524061A (en) * | 2018-10-23 | 2019-03-26 | 中国人民解放军陆军防化学院 | A kind of radionuclide diffusion calculation method based on transmission coefficient matrix |
| US20220043001A1 (en) * | 2014-04-10 | 2022-02-10 | Yale University | Methods and compositions for detecting misfolded proteins |
| US20220301097A1 (en) * | 2022-06-03 | 2022-09-22 | Intel Corporation | Methods and apparatus to implement dual-attention vision transformers for interactive image segmentation |
| CN115964869A (en) * | 2022-12-14 | 2023-04-14 | 西北核技术研究所 | A simulation method of air pollution diffusion and migration with high temporal and spatial resolution |
| US20230123322A1 (en) * | 2021-04-16 | 2023-04-20 | Strong Force Vcn Portfolio 2019, Llc | Predictive Model Data Stream Prioritization |
| US20230176550A1 (en) * | 2021-05-06 | 2023-06-08 | Strong Force Iot Portfolio 2016, Llc | Quantum, biological, computer vision, and neural network systems for industrial internet of things |
| US20230222132A1 (en) * | 2021-05-11 | 2023-07-13 | Strong Force Vcn Portfolio 2019, Llc | Edge Device Query Processing of Distributed Database |
| CN116740223A (en) * | 2023-04-26 | 2023-09-12 | 先进操作系统创新中心(天津)有限公司 | How to generate images based on text |
| CN116953642A (en) * | 2023-06-29 | 2023-10-27 | 安徽大学 | Millimeter wave radar gesture recognition method based on adaptive coding Vision Transformer network |
- 2023-11-16: CN application CN202311525721.XA granted as patent CN117236201B (status: Active)
Patent Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6688180B1 (en) * | 1999-07-05 | 2004-02-10 | Sinvent As | Multi-test assembly for evaluating, detecting and mountoring processes at elevated pressure |
| US20080179742A1 (en) * | 2006-07-24 | 2008-07-31 | Interuniversitair Microelektronica Centrum (Imec) | Method and solution to grow charge-transfer complex salts |
| US20220043001A1 (en) * | 2014-04-10 | 2022-02-10 | Yale University | Methods and compositions for detecting misfolded proteins |
| CN109524061A (en) * | 2018-10-23 | 2019-03-26 | 中国人民解放军陆军防化学院 | A kind of radionuclide diffusion calculation method based on transmission coefficient matrix |
| US20230123322A1 (en) * | 2021-04-16 | 2023-04-20 | Strong Force Vcn Portfolio 2019, Llc | Predictive Model Data Stream Prioritization |
| US20230176550A1 (en) * | 2021-05-06 | 2023-06-08 | Strong Force Iot Portfolio 2016, Llc | Quantum, biological, computer vision, and neural network systems for industrial internet of things |
| US20230222132A1 (en) * | 2021-05-11 | 2023-07-13 | Strong Force Vcn Portfolio 2019, Llc | Edge Device Query Processing of Distributed Database |
| US20230252047A1 (en) * | 2021-05-11 | 2023-08-10 | Strong Force Vcn Portfolio 2019, Llc | Query Prediction Modeling for Distributed Databases |
| US20220301097A1 (en) * | 2022-06-03 | 2022-09-22 | Intel Corporation | Methods and apparatus to implement dual-attention vision transformers for interactive image segmentation |
| CN115964869A (en) * | 2022-12-14 | 2023-04-14 | 西北核技术研究所 | A simulation method of air pollution diffusion and migration with high temporal and spatial resolution |
| CN116740223A (en) * | 2023-04-26 | 2023-09-12 | 先进操作系统创新中心(天津)有限公司 | How to generate images based on text |
| CN116953642A (en) * | 2023-06-29 | 2023-10-27 | 安徽大学 | Millimeter wave radar gesture recognition method based on adaptive coding Vision Transformer network |
Non-Patent Citations (3)
| Title |
|---|
| 杨舒楠: "Study on the mesoscale predictability of Meiyu-front rainstorms over the Yangtze-Huai region", China Doctoral Dissertations Full-text Database, Basic Sciences, no. 6 |
| 王鹏新: "Downscaling transformation method of conditional vegetation temperature index based on the point spread function", Transactions of the Chinese Society for Agricultural Machinery, vol. 48, no. 12, pages 165-173 |
| 秦菁: "Self-attention diffusion model for multi-weather degraded image restoration", Journal of Shanghai Jiao Tong University, pages 1-22 |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118033590A (en) * | 2024-04-12 | 2024-05-14 | 南京信息工程大学 | A short-term precipitation forecasting method based on improved VIT neural network |
| CN118366046A (en) * | 2024-06-20 | 2024-07-19 | 南京信息工程大学 | Wind field downscaling method based on deep learning and combining with topography |
| CN118366046B (en) * | 2024-06-20 | 2024-08-30 | 南京信息工程大学 | Wind field downscaling method based on deep learning and combining with topography |
| CN119720071A (en) * | 2024-11-01 | 2025-03-28 | 中国气象局成都高原气象研究所 | A spatial downscaling method for precipitation fields |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117236201B (en) | 2024-02-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN117236201B (en) | Diffusion and ViT-based downscaling method | |
| CN114708434B (en) | Cross-domain remote sensing image semantic segmentation method based on iterative intra-domain adaptation and self-training | |
| CN101950365B (en) | Multi-task super-resolution image reconstruction method based on KSVD dictionary learning | |
| CN102156875B (en) | Image super-resolution reconstruction method based on multitask KSVD (K singular value decomposition) dictionary learning | |
| CN110222784B (en) | Solar cell defect detection method integrating short-term and long-term depth features | |
| CN103295197B (en) | Based on the image super-resolution rebuilding method of dictionary learning and bilateral canonical | |
| Li et al. | Efficient image super-resolution with feature interaction weighted hybrid network | |
| CN112560966B (en) | Polarimetric SAR image classification method, media and equipment based on scattergram convolutional network | |
| CN111598786B (en) | A hyperspectral image unmixing method based on deep denoising autoencoder network | |
| CN118247668B (en) | Diffusion model-based hyperspectral image multi-source domain self-adaptive classification method | |
| CN112560719B (en) | High-resolution image water body extraction method based on multi-scale convolution-multi-core pooling | |
| CN113537573A (en) | Wind power operation trend prediction method based on dual spatiotemporal feature extraction | |
| Wen et al. | Encoder-minimal and decoder-minimal framework for remote sensing image dehazing | |
| CN113611354B (en) | A Protein Torsion Angle Prediction Method Based on Lightweight Deep Convolutional Networks | |
| CN114357211B (en) | Contrastive learning hashing image retrieval method based on adaptive distribution balance feature | |
| CN115019101A (en) | Image classification method based on information bottleneck algorithm in image classification network | |
| CN117878928B (en) | A wind power prediction method and device based on deep learning | |
| CN116698410B (en) | A multi-sensor data monitoring method for rolling bearings based on convolutional neural network | |
| CN119445577A (en) | An adaptive contrast view generation method for semantic segmentation of remote sensing images | |
| CN118115769A (en) | A deep convolution embedding clustering method, storage medium, and terminal device based on Resnet50 improved autoencoder | |
| CN118196487A (en) | Hyperspectral image classification method based on multi-scale feature pyramid | |
| CN118587451A (en) | A mask-enhanced intelligent spatial downscaling method for ocean feature fields | |
| CN111080516A (en) | Super-resolution image reconstruction method based on self-sampling enhancement | |
| CN117523333A (en) | A land cover classification method based on attention mechanism | |
| CN115631377A (en) | Image classification method based on space transformation network and convolutional neural network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |