CN110097541A - No-reference image rain removal quality evaluation system - Google Patents
No-reference image rain removal quality evaluation system
- Publication number
- CN110097541A (application number CN201910324113.XA)
- Authority
- CN
- China
- Prior art keywords
- module
- feature
- level
- output
- scale
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a no-reference image rain removal quality evaluation system comprising a multi-scale feature forward extraction path, a multi-scale feature backward extraction path, a gated fusion module, and a score prediction module. The image to be evaluated is fed into the forward extraction path, whose outputs pass through the backward extraction path to produce multi-scale features for the gated fusion module; the gated fusion module performs a weighted fusion of the features at each scale and passes the result to the score prediction module, which outputs the quality score. The bidirectional gated fusion network first obtains high-dimensional features through the forward path, then recovers more local content and contextual information of the image through the backward path, and finally obtains the multi-scale representation of the image through gated fusion. The invention predicts the human-perceived quality of different rain removal results, which facilitates evaluation and development in real rain scenes.
Description
Technical Field
The present invention relates to rain removal (deraining) techniques in image processing.
Background Art
Deraining Quality Assessment (DQA) plays an important role in evaluating and guiding the design of image deraining algorithms. Because rain-free counterparts do not exist for images captured in real rainy weather, existing deraining algorithms are evaluated on simulations of a very limited range of rain types, which is far from sufficient to establish their practical value.
In the field of deraining quality assessment, traditional methods depend on rain-free ground-truth images and full-reference image quality metrics (such as PSNR and SSIM), so all existing image deraining algorithms can only be evaluated on synthetic data. Compared with the variety of real rain scenes, computer-synthesized rain images fall far short of guaranteeing the practicality of a deraining algorithm. At the same time, quality assessment on real rain images cannot verify the statistical significance of each deraining algorithm. To the best of our knowledge, no blind image quality assessment (BIQA) model has yet been designed specifically for derained images.
Summary of the Invention
The technical problem addressed by the present invention is that derained images are anisotropic and highly variable in content. To handle this, a deraining quality assessment system based on a bidirectional gated fusion network (B-GFN) that fuses the multi-scale features of the image is proposed.
The technical solution adopted by the present invention to solve the above problem is a no-reference image rain removal quality evaluation system comprising a multi-scale feature forward extraction path, a multi-scale feature backward extraction path, a gated fusion module, and a score prediction module. The image to be evaluated is input to the forward extraction path, its outputs pass through the backward extraction path to produce multi-scale features for the gated fusion module, and the gated fusion module performs a weighted fusion of the features at each scale and passes the result to the score prediction module, which produces the quality score.
The present invention thus proposes a blind image quality assessment model that accurately predicts deraining quality. Specifically, we propose a bidirectional gated fusion network (B-GFN) that first obtains high-dimensional features through the forward path, then recovers more local content and contextual information of the image through the backward path, and finally obtains the multi-scale representation of the image through gated fusion.
The beneficial effect of the present invention is that the proposed bidirectional gated fusion network (B-GFN) predicts the human-perceived quality of different rain removal results, which facilitates evaluation and development in real rain scenes.
Brief Description of the Drawings
FIG. 1 is a diagram of the network structure of the system of the embodiment.
Detailed Description of the Embodiments
As shown in FIG. 1, the architecture of the proposed DQA model, the bidirectional gated fusion network (B-GFN), is as follows. The multi-scale feature forward extraction path comprises one convolution-and-ReLU block, one max-pooling module, and four levels of feature forward extraction units with scales going from large to small. The multi-scale feature backward extraction path comprises four levels of feature backward extraction units with scales going from small to large, three levels of upsampling modules, three levels of addition modules, and four levels of spatial pyramid pooling modules.
In the multi-scale feature forward extraction path, the image to be evaluated first passes through a convolution-and-ReLU block CB with a 7×7 kernel (yielding a feature map with 96 channels at a scale of 160×160) and then through the max-pooling module maxPool (yielding 96 channels at 80×80), after which it is fed into the feature forward extraction units connected in series. The forward extraction unit at each level except the fourth outputs both to the next-level forward extraction unit and to the same-scale feature extraction module in the backward extraction path; the fourth-level forward extraction unit outputs only to the first-level backward extraction unit of the backward path. The first-level forward extraction unit produces a feature map with 192 channels at 40×40, the second level 384 channels at 20×20, the third level 1056 channels at 10×10, and the fourth level 2208 channels at 5×5. Each feature forward extraction unit comprises a dense block DB and a transition layer TL: the input of the dense block is the input of the unit, the output of the dense block feeds the transition layer, and the output of the transition layer is the output of the unit. The dense blocks have the same structure as those of DenseNet-161.
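The following sketch (not part of the patent text) shows one way the forward extraction path could be assembled from a torchvision DenseNet-161 backbone. The class name ForwardPath, the 320×320 input size, and the use of average pooling as the fourth transition layer are assumptions; the stage boundaries are chosen so that the channel counts and scales stated above are reproduced.

```python
import torch.nn as nn
import torchvision


class ForwardPath(nn.Module):
    """Multi-scale feature forward extraction path (sketch).

    Assumes a torchvision DenseNet-161 backbone: its stem (7x7 convolution,
    batch norm, ReLU, max pooling) stands in for the CB and maxPool blocks,
    and each stage pairs a dense block DB with a transition layer TL so that
    the stated outputs (192x40x40, 384x20x20, 1056x10x10, 2208x5x5) are
    reproduced for a 320x320 input.
    """

    def __init__(self):
        super().__init__()
        f = torchvision.models.densenet161(weights=None).features  # ImageNet weights would likely be loaded in practice
        self.stem = nn.Sequential(f.conv0, f.norm0, f.relu0, f.pool0)   # 96 x 160 x 160, then 96 x 80 x 80
        self.stage1 = nn.Sequential(f.denseblock1, f.transition1)       # 192 x 40 x 40
        self.stage2 = nn.Sequential(f.denseblock2, f.transition2)       # 384 x 20 x 20
        self.stage3 = nn.Sequential(f.denseblock3, f.transition3)       # 1056 x 10 x 10
        # the 4th transition is modelled as average pooling only, an assumption
        # that keeps the channel count at 2208 while halving the scale to 5x5
        self.stage4 = nn.Sequential(f.denseblock4, nn.AvgPool2d(kernel_size=2, stride=2))

    def forward(self, img):
        x0 = self.stem(img)
        x1 = self.stage1(x0)
        x2 = self.stage2(x1)
        x3 = self.stage3(x2)
        x4 = self.stage4(x3)
        return x1, x2, x3, x4
```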
In the multi-scale feature backward extraction path, the first-level backward extraction unit outputs to the input of the next level's upsampling module and to the first-level spatial pyramid pooling module. The backward extraction units at the second and third levels output to the addition module of their own level; that addition module sums the output of the level's backward extraction unit with the output of the level's upsampling module and passes the result to the level's spatial pyramid pooling module and to the next level's upsampling module. The fourth-level backward extraction unit outputs to the fourth-level addition module, which sums the output of the fourth-level backward extraction unit with the output of the fourth-level upsampling module and passes the result to the fourth-level spatial pyramid pooling module. The first-level backward extraction unit outputs a feature map with 256 channels at 5×5, the second level 256 channels at 10×10, the third level 256 channels at 20×20, and the fourth level 256 channels at 40×40. Each feature backward extraction unit is a convolution-and-ReLU block CB.
The multi-scale feature backward extraction path bridges the gap between the multi-level feature maps across different databases. Let X = {x_i}, 1 ≤ i ≤ 4, denote the "forward path" features output by the four dense blocks DB. These feature maps are reused in a "backward path" that propagates the top-level features back to the earlier levels and produces the backward-path features: each backward feature is computed by applying f_{1×1}(·), a 1×1 convolution followed by a ReLU operation, to the corresponding forward feature and, except at the coarsest scale, adding the ↑2 (factor-2) upsampling of the backward feature from the next coarser level.
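As a hedged illustration of the backward path just described, the sketch below applies a 1×1 convolution-and-ReLU block (the CB unit) to each forward feature and, except at the coarsest scale, adds the factor-2 upsampling of the previous backward feature. The class name BackwardPath and the nearest-neighbour upsampling mode are assumptions; the 256-channel width is taken from the description above.

```python
import torch.nn as nn
import torch.nn.functional as F


class BackwardPath(nn.Module):
    """Multi-scale feature backward extraction path (sketch)."""

    def __init__(self, in_channels=(192, 384, 1056, 2208), width=256):
        super().__init__()
        # one 1x1 convolution-and-ReLU block (CB) per forward scale
        self.cb = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, width, kernel_size=1), nn.ReLU(inplace=True))
            for c in in_channels
        )

    def forward(self, feats):
        x1, x2, x3, x4 = feats
        t4 = self.cb[3](x4)                                        # backward level 1: 256 x 5 x 5
        t3 = self.cb[2](x3) + F.interpolate(t4, scale_factor=2.0)  # backward level 2: 256 x 10 x 10
        t2 = self.cb[1](x2) + F.interpolate(t3, scale_factor=2.0)  # backward level 3: 256 x 20 x 20
        t1 = self.cb[0](x1) + F.interpolate(t2, scale_factor=2.0)  # backward level 4: 256 x 40 x 40
        return t1, t2, t3, t4
```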
The gated fusion module comprises a feature-vector stacking module, a convolution-and-ReLU block, a sigmoid function, and an element-wise product module. The stacking module receives the outputs of the spatial pyramid pooling modules at each of the N levels, stacks the received feature vectors, and outputs the result to both the element-wise product module and the convolution-and-ReLU block. The convolution-and-ReLU block feeds the sigmoid function, which produces the weighting matrix element-wise and outputs the weights used for weighted fusion to the element-wise product module. The element-wise product module multiplies these weights pixel-wise with the output of the stacking module and passes the result to the score prediction module.
The weight assigned to each scale is determined adaptively by the gated fusion module. We first apply spatial pyramid pooling (SPP, the pooling method proposed by Kaiming He in 2014) to flatten all elements of the backward-path features into equal-length feature vectors, and then stack them into a 4×5120 feature matrix Y = [y1; y2; y3; y4]. The weighting matrix W is given by W = s[f_{1×1}(Y)], where s(·) is the sigmoid function. The weights are applied through an element-wise product, and a 1×1 convolution generates the fused feature vector, i.e. z = f_{1×1}(W ⊙ Y), where ⊙ denotes pixel-wise multiplication.
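A sketch of the gated fusion step follows, under stated assumptions: SPP is approximated with adaptive max pooling over 2×2 and 4×4 bins, so that each 256-channel map flattens to 256 × (4 + 16) = 5120 elements and the stack matches the 4×5120 matrix Y above; f_{1×1} is modelled as a Conv1d acting across the four stacked rows; the class name GatedFusion is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFusion(nn.Module):
    """Gated fusion of the four backward-path features (sketch)."""

    def __init__(self, num_scales=4, bins=(2, 4)):
        super().__init__()
        self.bins = bins  # assumed SPP bin sizes giving 256 * (4 + 16) = 5120 per scale
        self.gate = nn.Sequential(nn.Conv1d(num_scales, num_scales, kernel_size=1),
                                  nn.ReLU(inplace=True))   # f_1x1 inside W = s[f_1x1(Y)]
        self.fuse = nn.Sequential(nn.Conv1d(num_scales, 1, kernel_size=1),
                                  nn.ReLU(inplace=True))   # f_1x1 producing z

    def spp(self, x):
        # flatten one feature map to a fixed-length vector y_i
        return torch.cat([F.adaptive_max_pool2d(x, b).flatten(1) for b in self.bins], dim=1)

    def forward(self, feats):
        y = torch.stack([self.spp(f) for f in feats], dim=1)   # Y: (batch, 4, 5120)
        w = torch.sigmoid(self.gate(y))                        # W = s[f_1x1(Y)]
        z = self.fuse(w * y)                                   # z = f_1x1(W ⊙ Y)
        return z.squeeze(1)                                    # (batch, 5120)
```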
The score prediction module comprises three fully connected layers FC and a sigmoid function. The output of the gated fusion module is fed to the three fully connected layers, whose output is fed to the sigmoid function; the sigmoid function maps this input to the quality score Qp. Qgt denotes the ground-truth quality score.
The learning objective is to minimize a loss between the predicted and ground-truth quality scores over all N training samples, where Qp(j) and Qgt(j) denote the j-th predicted and ground-truth quality scores, respectively.
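A sketch of the score prediction module and an assumed training loss follows. The hidden layer sizes (1024, 128) and the L1 form of the loss are illustrative assumptions; the patent text states only that three fully connected layers and a sigmoid map the fused feature to Qp and that a loss between Qp(j) and Qgt(j) is minimized over the N training samples.

```python
import torch
import torch.nn as nn


class ScorePredictor(nn.Module):
    """Score prediction module (sketch): three fully connected layers and a
    sigmoid mapping the fused 5120-d vector to a quality score Qp in (0, 1)."""

    def __init__(self, in_dim=5120, hidden=(1024, 128)):  # hidden sizes are assumptions
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(inplace=True),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(inplace=True),
            nn.Linear(hidden[1], 1),
        )

    def forward(self, z):
        return torch.sigmoid(self.fc(z)).squeeze(-1)   # Qp


def dqa_loss(q_pred, q_gt):
    """Assumed L1 regression loss, (1/N) * sum_j |Qp(j) - Qgt(j)|; the exact
    form of the loss is not reproduced in this text."""
    return torch.mean(torch.abs(q_pred - q_gt))


# Putting the sketches together (classes from the preceding blocks):
#   x = ForwardPath()(img)
#   t = BackwardPath()(x)
#   q = ScorePredictor()(GatedFusion()(t))
#   loss = dqa_loss(q, q_gt)
```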
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910324113.XA CN110097541B (en) | 2019-04-22 | 2019-04-22 | No-reference image rain removal quality evaluation system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910324113.XA CN110097541B (en) | 2019-04-22 | 2019-04-22 | No-reference image rain removal quality evaluation system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110097541A (en) | 2019-08-06
CN110097541B CN110097541B (en) | 2023-03-28 |
Family
ID=67445458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910324113.XA Active CN110097541B (en) | 2019-04-22 | 2019-04-22 | No-reference image rain removal quality evaluation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110097541B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972902A (en) * | 2022-04-03 | 2022-08-30 | 浙江理工大学 | No-reference video quality evaluation method fusing double-deep learning network |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040143602A1 (en) * | 2002-10-18 | 2004-07-22 | Antonio Ruiz | Apparatus, system and method for automated and adaptive digital image/video surveillance for events and configurations using a rich multimedia relational database |
US20050232512A1 (en) * | 2004-04-20 | 2005-10-20 | Max-Viz, Inc. | Neural net based processor for synthetic vision fusion |
US20160196527A1 (en) * | 2015-01-06 | 2016-07-07 | Falkonry, Inc. | Condition monitoring and prediction for smart logistics |
US20170177975A1 (en) * | 2015-12-21 | 2017-06-22 | Ningbo University | Image quality objective evaluation method based on manifold feature similarity |
CN107220506A (en) * | 2017-06-05 | 2017-09-29 | 东华大学 | Breast cancer risk assessment analysis system based on deep convolutional neural network |
CN108597501A (en) * | 2018-04-26 | 2018-09-28 | 深圳市唯特视科技有限公司 | A kind of audio-visual speech model based on residual error network and bidirectional valve controlled cycling element |
CN108682044A (en) * | 2018-05-21 | 2018-10-19 | 深圳市唯特视科技有限公司 | A kind of three-dimensional style metastasis model based on dual path stylization network |
CN108932455A (en) * | 2017-05-23 | 2018-12-04 | 上海荆虹电子科技有限公司 | Remote sensing images scene recognition method and device |
CN109064405A (en) * | 2018-08-23 | 2018-12-21 | 武汉嫦娥医学抗衰机器人股份有限公司 | A kind of multi-scale image super-resolution method based on dual path network |
CN109299262A (en) * | 2018-10-09 | 2019-02-01 | 中山大学 | A textual entailment relation recognition method fused with multi-granularity information |
- 2019-04-22 CN CN201910324113.XA patent/CN110097541B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040143602A1 (en) * | 2002-10-18 | 2004-07-22 | Antonio Ruiz | Apparatus, system and method for automated and adaptive digital image/video surveillance for events and configurations using a rich multimedia relational database |
US20050232512A1 (en) * | 2004-04-20 | 2005-10-20 | Max-Viz, Inc. | Neural net based processor for synthetic vision fusion |
US20160196527A1 (en) * | 2015-01-06 | 2016-07-07 | Falkonry, Inc. | Condition monitoring and prediction for smart logistics |
US20170177975A1 (en) * | 2015-12-21 | 2017-06-22 | Ningbo University | Image quality objective evaluation method based on manifold feature similarity |
CN108932455A (en) * | 2017-05-23 | 2018-12-04 | 上海荆虹电子科技有限公司 | Remote sensing images scene recognition method and device |
CN107220506A (en) * | 2017-06-05 | 2017-09-29 | 东华大学 | Breast cancer risk assessment analysis system based on deep convolutional neural network |
CN108597501A (en) * | 2018-04-26 | 2018-09-28 | 深圳市唯特视科技有限公司 | A kind of audio-visual speech model based on residual error network and bidirectional valve controlled cycling element |
CN108682044A (en) * | 2018-05-21 | 2018-10-19 | 深圳市唯特视科技有限公司 | A kind of three-dimensional style metastasis model based on dual path stylization network |
CN109064405A (en) * | 2018-08-23 | 2018-12-21 | 武汉嫦娥医学抗衰机器人股份有限公司 | A kind of multi-scale image super-resolution method based on dual path network |
CN109299262A (en) * | 2018-10-09 | 2019-02-01 | 中山大学 | A textual entailment relation recognition method fused with multi-granularity information |
Non-Patent Citations (6)
Title |
---|
JINGWEN WANG et al.: "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
ZHANG Yanan et al.: "Prediction of benign and malignant long-term lung lesions based on multi-modal fusion", Computer Engineering and Applications *
WANG Kai et al.: "Event clue detection fusing contextual dependency and sentence semantics", Journal of Frontiers of Computer Science and Technology *
SHI Shiwei: "Research on video action recognition based on deep learning", China Master's Theses Full-text Database, Information Science and Technology *
LUO Hao: "Research on single-image deraining algorithms based on multi-scale content adaptivity", China Master's Theses Full-text Database, Information Science and Technology *
TAN Tengfei et al.: "Low-power adaptive pipeline design for multi-layer image overlay processing", Journal of Zhejiang University (Engineering Science) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972902A (en) * | 2022-04-03 | 2022-08-30 | 浙江理工大学 | No-reference video quality evaluation method fusing double-deep learning network |
CN114972902B (en) * | 2022-04-03 | 2025-03-11 | 浙江理工大学 | A no-reference video quality assessment method integrating dual deep learning networks |
Also Published As
Publication number | Publication date |
---|---|
CN110097541B (en) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113362223B (en) | Image Super-Resolution Reconstruction Method Based on Attention Mechanism and Two-Channel Network | |
CN113688723B (en) | A pedestrian target detection method in infrared images based on improved YOLOv5 | |
CN111611878B (en) | Method for crowd counting and future people flow prediction based on video image | |
CN113673590B (en) | Rain removal method, system and medium based on multi-scale hourglass densely connected network | |
CN106778595B (en) | Method for detecting abnormal behaviors in crowd based on Gaussian mixture model | |
CN111401436B (en) | Streetscape image segmentation method fusing network and two-channel attention mechanism | |
CN110992275A (en) | Refined single image rain removing method based on generation countermeasure network | |
CN110879982B (en) | A crowd counting system and method | |
CN113269787A (en) | Remote sensing image semantic segmentation method based on gating fusion | |
CN112818969A (en) | Knowledge distillation-based face pose estimation method and system | |
CN109840560A (en) | Based on the image classification method for incorporating cluster in capsule network | |
CN115205667A (en) | A Dense Object Detection Method Based on YOLOv5s | |
CN113610144A (en) | Vehicle classification method based on multi-branch local attention network | |
CN108960404A (en) | A kind of people counting method and equipment based on image | |
CN112446253A (en) | Skeleton behavior identification method and device | |
CN116137043A (en) | A Colorization Method of Infrared Image Based on Convolution and Transformer | |
CN112949636A (en) | License plate super-resolution identification method and system and computer readable medium | |
CN117557857A (en) | Detection network light weight method combining progressive guided distillation and structural reconstruction | |
CN119323565A (en) | Transformer fault detection method based on improved YOLOv model | |
CN117275681A (en) | Method and device for detecting and evaluating honeycomb lung disease course period based on transducer parallel cross fusion model | |
CN116894753A (en) | A small sample image steganalysis model training method, analysis method and device | |
CN105894507B (en) | Image quality evaluating method based on amount of image information natural scene statistical nature | |
CN114187506A (en) | Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network | |
CN118247670A (en) | Multi-aggregation feature pyramid remote sensing image detection method based on deep learning | |
CN119131405A (en) | A nighttime image deraining method based on global semantics guidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||