CN117726943A - Ultra-high resolution vegetation canopy height estimation method - Google Patents

Ultra-high resolution vegetation canopy height estimation method

Info

Publication number
CN117726943A
Authority
CN
China
Prior art keywords
data
ultra
high resolution
canopy height
vegetation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311747135.XA
Other languages
Chinese (zh)
Inventor
孙颖
肖坤
辛秦川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202311747135.XA priority Critical patent/CN117726943A/en
Publication of CN117726943A publication Critical patent/CN117726943A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an ultra-high resolution vegetation canopy height estimation method, and relates to the field of intelligent vegetation monitoring. The method comprises the following steps: acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area, and combining the data into a first multi-source data set; performing data preprocessing on the first multi-source data set to obtain a second multi-source data set; establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set; and acquiring a vegetation image of the target area, and obtaining the vegetation canopy height information of the target area with the ultra-high resolution vegetation canopy height estimation model. Compared with the prior art, the method improves the overall accuracy of vegetation canopy height prediction.

Description

An ultra-high resolution vegetation canopy height estimation method

Technical Field

The present invention relates to the technical field of intelligent vegetation monitoring, and more specifically, to an ultra-high resolution vegetation canopy height estimation method.

Background

Vegetation maintains basic ecosystem functions of the Earth's climate and biosphere, provides a wide range of ecosystem services, and plays an important role in reducing greenhouse gas emissions and mitigating the risks of climate change. Vegetation canopy height is a simple but important factor in the vertical structure of a forest: it reflects the growth capacity of vegetation and the ecological niche requirements of plants, and it indicates the level of forest biomass. A taller vegetation canopy can occupy a more favorable ecological niche and capture more light for photosynthesis. Automatically deriving vegetation canopy height from remotely sensed data plays an important role in estimating forest aboveground biomass and timber volume, monitoring the impact of forest degradation, measuring the success of forest restoration, and modeling other key ecosystem variables such as primary production and biodiversity.

Existing research shows that deep convolutional neural networks (CNNs) achieve very good results in processing remote sensing images, for example in scene classification and object detection and prediction. CNNs automatically learn not only the low- and mid-level features of images but also the high-level semantic features of the original images. Since the Feature Pyramid Network (FPN), a variant structure of the deep convolutional neural network, was proposed, it has become the mainstream framework for multi-feature-level prediction. An FPN is typically a top-down neural network that predicts every pixel simultaneously when segmenting an image.

Nevertheless, FPN-based vegetation canopy prediction methods still have the following problems to resolve:

(1) Using CNNs as the encoder for image feature extraction yields outputs that contain high-level semantic features but are too coarse and easily lose the edge details of the image;

(2) Although passing low-level features to the FPN decoder through "skip" connections, or via the max-value positions of max pooling layers, can improve the prediction results, this approach easily produces redundant features and reduces the learning efficiency of the network.

In addition, the output features usually contain category uncertainty or non-boundary-related information, which degrades the optimization of the estimation results:

1) In most scenes, and especially in developed urban areas, vegetation is irregularly distributed and affected by other environmental factors; its spectral reflectance varies greatly and it is easily occluded by the shadows of surrounding high-rise buildings;

2) The large intra-class differences and small inter-class differences of high-resolution remote sensing images make the spectral and geometric characteristics of vegetation complex.

Summary of the Invention

To overcome the defects of the prior art described above, namely the loss of edge information and output features containing category uncertainty or non-boundary-related information, which lower the accuracy of the estimation results, the present invention provides an ultra-high resolution vegetation canopy height estimation method.

To solve the above technical problems, the technical solution of the present invention is as follows:

In a first aspect, an ultra-high resolution vegetation canopy height estimation method comprises:

acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area, and combining them into a first multi-source data set;

performing data preprocessing on the first multi-source data set to obtain a second multi-source data set;

establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set;

acquiring a vegetation image of the target area, and obtaining the vegetation canopy height information of the target area with the ultra-high resolution vegetation canopy height estimation model.

In a second aspect, an ultra-high resolution vegetation canopy height estimation system applying the method of the first aspect comprises:

a multi-source data acquisition module for acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area and combining them into a first multi-source data set; for performing data preprocessing on the first multi-source data set to obtain a second multi-source data set; and for acquiring a vegetation image of a target area;

a model training module for establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and for training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set;

a vegetation canopy height information estimation module for hosting the trained ultra-high resolution vegetation canopy height estimation model, and for obtaining the vegetation canopy height information of the target area from the vegetation image of the target area with the ultra-high resolution vegetation canopy height estimation model.

In a third aspect, a computer-readable storage medium stores at least one instruction, at least one program, code set or instruction set, which is loaded and executed by a processor to implement the method of the first aspect.

Compared with the prior art, the beneficial effects of the technical solution of the present invention are:

The invention discloses an ultra-high resolution canopy height estimation method. It establishes an ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and trains it with multi-source remote sensing data, giving the model self-learning capability for image features: the model obtains the low-, mid- and high-level features of an image through nonlinear operations and repeated downsampling, and screens and fuses effective features through the FT modules (FPN and Transformer). Compared with the prior art, the invention improves the overall accuracy of vegetation canopy height prediction.

Brief Description of the Drawings

Figure 1 is a schematic flow chart of an ultra-high resolution canopy height estimation method in Embodiment 1 of the present invention;

Figure 2 is a schematic diagram of the data processing flow of the ultra-high resolution vegetation canopy height estimation model in Embodiment 1 of the present invention;

Figure 3 is a schematic structural diagram of an ultra-high resolution canopy height estimation system in Embodiment 2 of the present invention.

Detailed Description

The terms "first", "second", etc. in the description, claims and drawings of this application are used to distinguish similar objects and do not necessarily describe a specific order or sequence. It should be understood that such terms are interchangeable where appropriate; they are merely a way of distinguishing objects with the same attributes when describing the embodiments of this application. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion, so that a process, method, system, product or device comprising a series of units need not be limited to those units, but may include other units not explicitly listed or inherent to such processes, methods, products or devices.

The drawings are for illustrative purposes only and should not be construed as limiting this patent;

To better illustrate the embodiments, some components in the drawings are omitted, enlarged or reduced and do not represent the dimensions of the actual product;

Those skilled in the art will understand that some well-known structures and their descriptions may be omitted from the drawings.

To facilitate implementation by those skilled in the art, some concepts involved in the embodiments of the present invention are described as follows:

1. Deep residual convolutional neural network (ResNet)

ResNet is a deep convolutional neural network that usually comprises a convolution module (convolution layer, batch normalization layer, ReLU activation layer and max pooling layer), four structurally similar residual modules, and a classification module (average pooling layer, fully connected layer and Softmax classification layer).

After passing through the first convolution module, the input image is reduced to 1/2 of its original size, and each residual module halves the image size again, finally yielding a feature map 1/32 the size of the original image.

Specifically, by network depth there are ResNet18, ResNet34, ResNet50, ResNet101 and ResNet152.

For ResNet50, ResNet101 and ResNet152, each residual module consists of one projection convolution block (projection shortcut) followed by several consecutive identity convolution blocks (identity shortcuts). The projection convolution block doubles the number of feature maps and shrinks the feature maps by a factor of 1/2, while the identity convolution block changes neither the size nor the number of features between input and output.
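
As a non-authoritative illustration of the two shortcut types just described, here is a minimal PyTorch sketch of a bottleneck residual block; the class name Bottleneck, the channel widths and the usage values are assumptions for illustration, not taken from the patent:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Bottleneck residual block: 1x1 -> 3x3 -> 1x1 convolutions plus a shortcut."""
    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * 4
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        if stride != 1 or in_ch != out_ch:
            # Projection shortcut: changes the channel count and halves the spatial size.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            # Identity shortcut: input and output shapes already match.
            self.shortcut = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

blk = Bottleneck(in_ch=256, mid_ch=128, stride=2)   # projection shortcut case
print(blk(torch.randn(1, 256, 64, 64)).shape)       # torch.Size([1, 512, 32, 32])
```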

In addition, the convolution layer is a feature extractor that learns features to represent the input image. A convolution layer is composed of several convolution units (neurons); each feature map is composed of multiple neurons, neurons on the same feature map share weights, and the parameters of each unit are obtained through optimization by the network's back-propagation algorithm. Its function is to capture different features of the input, such as edges, corners and textures. Given the feature map $X^{l-1}$ as the input of the convolution layer, the $k$-th filter $W_k^l$ processes the input feature map according to Eq. (1) to obtain the output feature map:

$X_k^l = W_k^l * X^{l-1} + b_k^l$    (1)

where $X_k^l$ is the feature map obtained after the convolution operation, $*$ is the convolution operation, and $b_k^l$ is the $k$-th bias vector of layer $l$. Eq. (1) greatly reduces the number of neural network parameters while producing the feature response corresponding to each neuron.

The batch normalization (Batch Norm, BN) layer serves to prevent vanishing or exploding gradients in the neural network. In the BN layer, each input batch is normalized and then transformed as follows:

$\hat{X}^l = \dfrac{X^l - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \quad Y^l = \gamma^l \hat{X}^l + \beta^l$    (2)

where $\mu_B$ and $\sigma_B^2$ are the mean and variance of the current batch, $\epsilon$ is a small constant for numerical stability, $Y^l$ is the result after scaling and shifting, $\gamma^l$ is the normalization scale parameter, and $\beta^l$ is the shift parameter. Through the normalization of Eq. (2), all inputs are concentrated around 0, so that the input distribution of each layer does not vary too much.

The activation function layer controls the activation level of the neurons in the forward signal transformation. Taking the result of the BN layer as input, the rectified linear unit (ReLU) activation function performs a nonlinear mapping of the input features.

The pooling layer is mainly used to abstract the input features, usually with max pooling or average pooling to obtain downsampled feature maps. Its purpose is to shrink the feature maps, simplify model complexity and reduce the number of model parameters, while achieving spatial invariance. The pooling layer directly follows the convolution layer and is likewise composed of multiple feature maps; each of its feature maps corresponds to one feature map of the previous layer, so the number of feature maps is unchanged. There are usually two forms, mean pooling and max pooling. In mean pooling, every weight of the kernel is 0.25; if the kernel slides over the input image with a stride of 2, mean pooling is equivalent to blurring and shrinking the original image to 1/4 of its size. In max pooling, only one weight of the kernel is 1 and the rest are 0, the 1 corresponding to the position with the largest value in the part of the input image covered by the kernel; with a stride of 2 and a 2*2 kernel, max pooling shrinks the original image to 1/4 of its size while keeping the strongest input of each 2*2 region.
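
To tie these pieces together, here is a minimal sketch (an assumed layout, not code from the patent) of the convolution module described above, i.e. convolution, batch normalization, ReLU activation and max pooling in sequence:

```python
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),  # Eq. (1): W * X + b
    nn.BatchNorm2d(64),    # Eq. (2): normalize, then scale by gamma and shift by beta
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # keeps the strongest input of each 2x2 region
)

x = torch.randn(1, 3, 128, 128)   # one 3-band 128x128 image
print(stem(x).shape)              # torch.Size([1, 64, 32, 32]) -> 1/4 of the input size
```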

2. Copernicus digital elevation model data

The Copernicus digital elevation model data is recognized as one of the best global open-source DEMs. Its absolute elevation accuracy and horizontal accuracy are the best among global open-source DEMs, it offers the best representation of terrain detail, its effective quality is better than its nominal 30 m resolution suggests, and its consistency is good.

The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.

Embodiment 1

This embodiment proposes an ultra-high resolution vegetation canopy height estimation method, referring to Figure 1, comprising:

acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area, and combining them into a first multi-source data set;

performing data preprocessing on the first multi-source data set to obtain a second multi-source data set;

establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set;

acquiring a vegetation image of the target area, and obtaining the vegetation canopy height information of the target area with the ultra-high resolution vegetation canopy height estimation model.

In this embodiment, an ultra-high resolution vegetation canopy height estimation model (ARFTNet) with an encoder-decoder structure is established based on deep learning. Specifically, the encoder is a deep residual convolutional neural network (ResNet) used to learn the low-, mid- and high-level features of the input image, while the decoder comprises FPN and Transformer modules used to screen and fuse effective features. The model is trained with multi-source remote sensing data so that it can ultimately generate high-quality vegetation canopy height information. Compared with the prior art, estimating vegetation canopy height with the ultra-high resolution vegetation canopy height estimation model of this embodiment improves the overall accuracy of the prediction results.

In some examples, the synthetic aperture radar satellite data is Sentinel-1 data with a spatial resolution of 10 m, which provides high-resolution radar imagery that characterizes surface features and helps remove the influence of surface features on vegetation canopy height estimation.

It should be noted that the vegetation image of the target area includes field ultra-high resolution aerial RGB image data and LIDAR point cloud data of the target area.

In some examples, the sample area is part of the target area;

In a specific implementation, one third of the field ultra-high resolution aerial RGB image data and LIDAR point cloud data of the target area is randomly selected as model training data.

It should also be noted that "ultra-high resolution" in this embodiment means a spatial resolution of 1 m.

In some preferred embodiments, acquiring the field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for the sample area comprises:

laying out image control points in a regional network layout;

collecting initial aerial RGB image data and initial LIDAR point cloud data of the sample area by aerial photography along a preset route;

according to the image control points, performing control point densification, aerial triangulation and accuracy monitoring on the ultra-high resolution initial aerial RGB image data and the initial LIDAR point cloud data, while solving for the exterior orientation elements of each image, to obtain aerial-triangulation-adjusted field ultra-high resolution aerial RGB image data and LIDAR point cloud data;

acquiring the Copernicus digital elevation model data and the synthetic aperture radar satellite data, and combining them with the field ultra-high resolution aerial RGB image data and the LIDAR point cloud data into the first multi-source data set.

This preferred embodiment corrects the field aerial data through image control point layout and aerial triangulation, ensuring the accuracy of the field data.

In a specific implementation, when laying out image control points under terrain or other constraints, irregular regional network points are used, and image control points are laid out at concave or convex corner turns. The span of image control points along the flight-line direction is standardized at 12 baselines; in special situations such as principal points or standard point positions falling on water, or in waterside and island areas, where image control points cannot be laid out normally, they are laid out case by case on the principle of satisfying the aerial triangulation and mapping requirements. VRS mode is preferred for image control point measurement; when the network RTK service cannot be received, GPS static measurement is used. The pricking and annotation of image control points are all performed on the original image data, and Photoshop is used to add the annotation information to the photo data.

In a specific implementation, when laying out the flight route, the datum height is determined according to the terrain and flight safety of the photographed area, the route spacing and flight direction are set according to the basic parameter table for aerial photography, and the route is generated automatically with route generation software.

In a specific implementation, when performing aerial triangulation, control point densification, aerial triangulation and accuracy monitoring are carried out indoors from the aerial image data and the field control point files according to photogrammetric principles, while the exterior orientation elements of each image are solved, finally outputting aerial-triangulation-adjusted field ultra-high resolution aerial RGB image data and LIDAR point cloud data.

In a specific implementation, when using a flight platform for aerial photography, the flight should be as smooth as possible, the rotation and drift angles must not exceed the specification requirements, and the field ultra-high resolution aerial RGB image data and airborne LIDAR point cloud data are collected according to the preset route and the aerial photography quality requirements.

In some optional embodiments, the second multi-source data set includes canopy height model (CHM) data, digital orthophoto data, resampled Copernicus digital elevation model data and resampled synthetic aperture radar satellite data;

the data preprocessing of the first multi-source data set includes:

for the LIDAR point cloud data:

filtering the noise points, ground points and non-ground points in the LIDAR point cloud data respectively, and then extracting digital elevation model data and digital surface model data by natural neighbor interpolation;

obtaining the canopy height model (CHM) data as the difference between the digital surface model (DSM) data and the digital elevation model (DEM) data;

for the field ultra-high resolution aerial RGB image data:

resampling the field ultra-high resolution aerial RGB image data according to the Copernicus digital elevation model data and the aerial triangulation results, correcting errors image by image and pixel by pixel, and converting the central projection into an orthographic projection to obtain digital orthophoto data;

for the Copernicus digital elevation model data and the synthetic aperture radar satellite data:

resampling the Copernicus digital elevation model data and the synthetic aperture radar satellite data by spatial interpolation, so that their resolution is the same as that of the field ultra-high resolution aerial RGB image data.

In this optional embodiment, correcting the field ultra-high resolution aerial RGB image data image by image and pixel by pixel reduces the errors caused by factors such as terrain relief and tilt of the aerial camera.

In some examples, 1 m resolution digital elevation model (DEM), digital surface model (DSM) and canopy height model (CHM) data are produced from the acquired airborne LIDAR point cloud data.

In some examples, filtering the noise points, ground points and non-ground points in the LIDAR point cloud data includes: separating points or point groups clearly below the ground (low points) and points or point groups clearly above the surface targets (high points); and filtering the ground points and non-ground points of the point cloud.

In some examples, when producing digital orthophoto data, those skilled in the art may refer to the standard "Digital products of fundamental geographic information 1:500 1:1000 1:2000 digital orthophoto maps".

In some examples, bilinear interpolation is used to resample the Copernicus digital elevation model data and the synthetic aperture radar satellite data to produce a smoother surface: interpolating once in the Y direction (or X direction) and then once in the X direction (or Y direction), the raster value of a cell is computed as the distance-weighted combination of the sampling point's four neighboring cells. Based on the resolution of the aerial RGB imagery, the Copernicus digital elevation model data and the synthetic aperture radar satellite data are resampled to the same resolution as the aerial RGB imagery.
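
Purely for illustration, here is a NumPy/SciPy sketch of the two preprocessing steps above, CHM differencing and bilinear resampling; the array names, the placeholder values and the clipping of negative canopy heights to zero are assumptions, not part of the patent:

```python
import numpy as np
from scipy.ndimage import zoom

# CHM = DSM - DEM: canopy height as the difference between surface and terrain models.
dsm = np.random.rand(512, 512) * 40.0   # stand-in digital surface model, metres
dem = np.random.rand(512, 512) * 5.0    # stand-in digital elevation model, metres
chm = np.clip(dsm - dem, 0.0, None)     # assumed: negative heights clipped to zero

# Bilinear resampling (spline order 1) of a 30 m product to a 1 m aerial RGB grid:
copernicus_dem_30m = np.random.rand(100, 100)
target_shape = (3000, 3000)             # 30x upsampling in each direction
scale = (target_shape[0] / copernicus_dem_30m.shape[0],
         target_shape[1] / copernicus_dem_30m.shape[1])
copernicus_dem_1m = zoom(copernicus_dem_30m, scale, order=1)  # bilinear
print(chm.shape, copernicus_dem_1m.shape)
```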

Further, training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set includes:

stacking the canopy height model data, the resampled Copernicus digital elevation model data and the resampled synthetic aperture radar satellite data with the red, green and blue bands of the digital orthophoto data respectively, to obtain an original feature combination image;

using the canopy height model data as the label image and combining it with the original feature combination image into an image pair;

after supervised data augmentation of the image pairs, dividing them into a training set and a validation set;

using the training set as the input of the encoder, letting the ultra-high resolution vegetation canopy height estimation model learn multi-level image features on the training set, and tuning the parameters of the ultra-high resolution vegetation canopy height estimation model with the validation set.

In this embodiment, the canopy height model data produced from the LIDAR point cloud data is used as the label data for model training. Through supervised data augmentation, the image pairs fed into the model have a different combination each time, which increases the diversity of the training sample data set, avoids overfitting during network training, and increases the generalization ability of the model.

In some examples, the training set and the validation set are split at a ratio of 70%:30%.

In some examples, the label image and the original feature combination image are each cropped into 128*128 images to form the image pairs, as in the sketch below.
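
As a hedged sketch of the data assembly just described (the band order, array names and the non-overlapping cropping strategy are assumptions):

```python
import random
import numpy as np

H = W = 512                                          # stand-in 1 m raster grid
rgb = np.random.rand(3, H, W).astype(np.float32)     # digital orthophoto: R, G, B
dem = np.random.rand(1, H, W).astype(np.float32)     # resampled Copernicus DEM
sar = np.random.rand(1, H, W).astype(np.float32)     # resampled SAR (e.g. Sentinel-1)
chm = np.random.rand(1, H, W).astype(np.float32)     # canopy height model (label)

# Original feature combination image: stack the bands on the RGB orthophoto.
features = np.concatenate([rgb, dem, sar, chm], axis=0)

def crop_pairs(features, label, size=128):
    """Cut matching 128x128 patches from the feature stack and the label image."""
    pairs = []
    for y in range(0, features.shape[1] - size + 1, size):
        for x in range(0, features.shape[2] - size + 1, size):
            pairs.append((features[:, y:y+size, x:x+size],
                          label[:, y:y+size, x:x+size]))
    return pairs

pairs = crop_pairs(features, chm)
random.shuffle(pairs)
split = int(0.7 * len(pairs))                        # 70% : 30% train/validation
train_set, val_set = pairs[:split], pairs[split:]
print(len(train_set), len(val_set))                  # 11 5
```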

In some examples, during the training of the ultra-high resolution vegetation canopy height estimation model, the base learning rate is set to 0.001, the learning rate decay rate is set to 0.7, the learning rate is adaptively updated with the Adam stochastic optimization method, and the maximum number of iterations is set to 800.
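
A minimal PyTorch sketch of this training configuration; the model and loss are placeholders, and the granularity of the 0.7 decay (here every 100 iterations) is an assumption, since the text does not state how often it is applied:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(6, 1, 3, padding=1)        # placeholder for ARFTNet
criterion = nn.MSELoss()                      # placeholder regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # base learning rate 0.001
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.7)  # decay 0.7

for it in range(800):                         # maximum of 800 iterations
    x = torch.randn(4, 6, 128, 128)           # stand-in batch of feature patches
    y = torch.randn(4, 1, 128, 128)           # stand-in CHM label patches
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    if (it + 1) % 100 == 0:                   # assumed decay interval
        scheduler.step()
```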

Furthermore, the data augmentation includes geometric transformation, adding noise, padding, erasing and/or sample pairing (SamplePairing).

In some optional embodiments, when collecting the initial aerial RGB image data and the initial LIDAR point cloud data by aerial photography, route quality control requires that the along-track coverage extend 3 baselines beyond the boundary line of the photographed area, that the cross-track coverage extend beyond the boundary line by 50% of an image frame, that the forward overlap be no less than 70%, and that the side overlap be no less than 55%.

In some preferred embodiments, establishing the initial ultra-high resolution vegetation canopy height estimation model, referring to Figure 2, comprises:

establishing the encoder based on a deep residual convolutional neural network of any one of ResNet50, ResNet101 and ResNet152, replacing the classification module of the deep residual convolutional neural network with a convolution layer with 256 output features, for learning multi-level features of the input image and outputting high-level feature results about vegetation canopy height;

setting the number of input features of the first convolution module of the deep residual convolutional neural network to 64, and embedding before the deep residual convolutional neural network a 3*3 convolution module with 64 output features and unchanged size, for receiving the input image;

establishing the decoder based on two FPN modules, two Transformer modules and multiple upsampling layers, for outputting the canopy height prediction result as the output of the ultra-high resolution vegetation canopy height estimation model according to the feature results extracted by the different residual modules and the high-level feature results.

It should be noted that in this preferred embodiment the image is successively upsampled by a factor of 2 through multiple upsampling layers, finally yielding a canopy height prediction result of the same size as the original input image.

In a specific implementation, the encoder is built on ResNet101 to learn the multi-level features of the image; multiple convolution or max pooling operations yield a high-dimensional image 1/32 the size of the original image but carrying deep features (i.e. the high-level feature extraction result). Compared with ResNet101, ResNet50 is shallower and its extracted deep features are not distinctive enough, while the ResNet152 model has more network layers and requires more computing resources.

It should be noted that embedding a new convolution module before the deep residual convolutional neural network lets the ultra-high resolution vegetation canopy height estimation model accept multi-band (not limited to three-band) image input; its number of output features is 64 and its size matches the input image. The improved encoder obtains the low-, mid- and high-level features of the input image through nonlinear operations and repeated downsampling.
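
A hedged PyTorch sketch of the encoder modification described above, assuming torchvision's ResNet101 as the backbone; the class name and the exact wiring are illustrative readings of the text, not the patent's implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class ARFTNetEncoder(nn.Module):
    """Sketch: multi-band stem + ResNet101 backbone, classification head replaced."""
    def __init__(self, in_bands=6):
        super().__init__()
        # Embedded 3x3 convolution module: 64 output features, spatial size unchanged,
        # so the model can accept more than three input bands.
        self.stem = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        backbone = resnet101(weights=None)
        backbone.conv1 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.backbone = backbone
        # Classification module replaced by a convolution layer with 256 output features.
        self.head = nn.Conv2d(2048, 256, kernel_size=1)

    def forward(self, x):
        x = self.stem(x)
        b = self.backbone
        x = b.maxpool(b.relu(b.bn1(b.conv1(x))))
        c2 = b.layer1(x)     # 1/4 resolution
        c3 = b.layer2(c2)    # 1/8
        c4 = b.layer3(c3)    # 1/16
        c5 = b.layer4(c4)    # 1/32, deep features
        return c2, c3, c4, self.head(c5)

enc = ARFTNetEncoder(in_bands=6)
c2, c3, c4, p5 = enc(torch.randn(1, 6, 128, 128))
print(p5.shape)              # torch.Size([1, 256, 4, 4])
```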

Those skilled in the art should understand that the Transformer module uses a self-attention mechanism to encode the input sequence and learn its representation, capturing the dependencies between different positions in the sequence.

In some optional embodiments, establishing the decoder based on two FPN modules, two Transformer modules and multiple upsampling layers comprises:

connecting the first FPN module and the second FPN module respectively after the second and third residual modules of the deep residual convolutional neural network to perform convolution operations, and upsampling the high-level feature result output by the encoder and then successively adding it to and upsampling it with the output results of the two FPN modules, to obtain a feature result with 256 features;

setting a first Transformer module and a second Transformer module whose inputs are respectively the upsampled result after the first FPN module and the upsampled result after the second FPN module; upsampling the high-level feature result output by the encoder and then successively stacking it with the outputs of the two Transformer modules; and setting an upsampling layer, a convolution layer and a ReLU activation layer to process the stacked result, to obtain a canopy height prediction result of the same size as the input image.

Those skilled in the art should understand that in the decoder of this optional embodiment, for the high-level feature results output by the encoder, the FPN modules first convolve the feature results obtained from the different residual modules into 256-band feature results, which are added to and upsampled with the upsampled output of the improved encoder; the Transformer modules refine the high-level feature results coming out of the FPN modules; the refined results are then stacked with the upsampled results; finally, the stacked result is upsampled and passed through one convolution layer and a ReLU activation layer to obtain a canopy height prediction of the same size as the input image. It should be understood that when predicting on the vegetation image of the target area, the output is the vegetation canopy height information of the target area.
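
A deliberately simplified PyTorch sketch of one possible reading of this decoder data flow; the Transformer configuration, channel counts and fusion order are assumptions, chosen to match the encoder sketch earlier:

```python
import torch
import torch.nn as nn

def tiny_transformer(dim):
    """One-layer Transformer encoder used to refine a flattened feature map."""
    return nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
        num_layers=1)

class FTDecoder(nn.Module):
    """Sketch: two FPN lateral convolutions + two Transformer refiners + upsampling."""
    def __init__(self, c3_ch=512, c4_ch=1024, dim=256):
        super().__init__()
        self.fpn1 = nn.Conv2d(c3_ch, dim, 1)   # after the 2nd residual module
        self.fpn2 = nn.Conv2d(c4_ch, dim, 1)   # after the 3rd residual module
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.tr1, self.tr2 = tiny_transformer(dim), tiny_transformer(dim)
        self.head = nn.Sequential(nn.Conv2d(3 * dim, 1, 3, padding=1), nn.ReLU(inplace=True))

    @staticmethod
    def attend(tr, x):
        # Flatten the map to a token sequence, self-attend, then restore the map.
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        return tr(seq).transpose(1, 2).reshape(b, c, h, w)

    def forward(self, c3, c4, p5):
        p4 = self.fpn2(c4) + self.up(p5)                # FPN add at 1/16 of input size
        p3 = self.fpn1(c3) + self.up(p4)                # FPN add at 1/8
        t4 = self.attend(self.tr2, self.up(p4))         # Transformer-refined, 1/8
        t3 = self.attend(self.tr1, self.up(p3))         # Transformer-refined, 1/4
        p5_up = self.up(self.up(self.up(p5)))           # encoder output, 1/32 -> 1/4
        fused = torch.cat([t3, self.up(t4), p5_up], 1)  # stack at 1/4 resolution
        return self.head(self.up(self.up(fused)))      # upsample to input size, conv+ReLU

dec = FTDecoder()
c3, c4, p5 = (torch.randn(1, 512, 16, 16), torch.randn(1, 1024, 8, 8),
              torch.randn(1, 256, 4, 4))
print(dec(c3, c4, p5).shape)                            # torch.Size([1, 1, 128, 128])
```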

In a specific implementation, a test set is used to comparatively evaluate the accuracy of the aforementioned ultra-high resolution vegetation canopy height estimation model (ARFTNet) against the existing UNet and SegNet schemes, specifically with four accuracy evaluation indicators: the coefficient of determination (R2), the root mean square error (RMSE), the mean absolute error (MAE) and the bias (B). The experimental data are shown in Table 1:

Table 1. Comparative experimental data

R2 evaluates and validates the estimated vegetation canopy height and expresses the credibility of the predicted tree heights; RMSE and MAE express the deviation of the estimated vegetation canopy height from the true canopy height of the test set; and the bias (Bias) determines by how much the canopy height estimated by the model is overestimated or underestimated relative to the true canopy height of the test set. A higher R2 indicates a better model estimate, and lower RMSE, MAE and Bias indicate that the model estimate is closer to the true result.
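
For reference, a small NumPy sketch of the four indicators under their standard definitions (the patent does not spell out the formulas, so these are the usual ones):

```python
import numpy as np

def metrics(y_true, y_pred):
    """R^2, RMSE, MAE and Bias for canopy height estimates (standard definitions)."""
    resid = y_pred - y_true
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2":   1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE":  float(np.mean(np.abs(resid))),
        "Bias": float(np.mean(resid)),   # > 0: overestimated, < 0: underestimated
    }

y_true = np.array([5.0, 12.0, 20.0, 8.0])   # test-set canopy heights, metres
y_pred = np.array([6.0, 11.0, 19.0, 9.0])
print(metrics(y_true, y_pred))
```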

It can be seen that the RMSE of ARFTNet is lower than those of UNet and SegNet, its R2 is higher, its MAE is lower, and the absolute value of its Bias is also lower than those of UNet and SegNet. Overall, the ultra-high resolution vegetation canopy height estimation model of this embodiment performs better and achieves higher accuracy.

Embodiment 2

This embodiment proposes an ultra-high resolution vegetation canopy height estimation system applying the method of Embodiment 1, referring to Figure 3, comprising:

a multi-source data acquisition module for acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area and combining them into a first multi-source data set; for performing data preprocessing on the first multi-source data set to obtain a second multi-source data set; and for acquiring a vegetation image of a target area;

a model training module for establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and for training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set;

a vegetation canopy height information estimation module for hosting the trained ultra-high resolution vegetation canopy height estimation model, and for obtaining the vegetation canopy height information of the target area from the vegetation image of the target area with the ultra-high resolution vegetation canopy height estimation model.

It can be understood that the system of this embodiment corresponds to the method of Embodiment 1 above, and the options in Embodiment 1 likewise apply to this embodiment, so the description is not repeated here.

Embodiment 3

This embodiment proposes a computer-readable storage medium storing at least one instruction, at least one program, code set or instruction set, which is loaded and executed by a processor so that the processor performs some or all of the steps of the method described in Embodiment 1.

It can be understood that the storage medium may be transitory or non-transitory. Exemplarily, the storage medium includes but is not limited to a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, or any other medium that can store program code.

Exemplarily, the processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP) or a field programmable gate array (FPGA), etc.

In some examples, a computer program product is provided, which may be implemented by hardware, software or a combination thereof. As a non-limiting example, the computer program product may be embodied as the storage medium, or as a software product such as an SDK (Software Development Kit).

In some examples, a computer program is provided, comprising computer-readable code; when the computer-readable code runs on a computer device, a processor in the computer device performs some or all of the steps for implementing the method.

This embodiment also proposes an electronic device comprising a memory and a processor, the memory storing at least one instruction, at least one program, code set or instruction set; when executing the at least one instruction, at least one program, code set or instruction set, the processor implements some or all of the steps of the method described in Embodiment 1.

In some examples, a hardware entity of the electronic device is provided, comprising: a processor, a memory and a communication interface, wherein the processor generally controls the overall operation of the electronic device; the communication interface enables the electronic device to communicate with other terminals or servers over a network; and the memory is configured to store instructions and applications executable by the processor, and may also cache data to be processed or already processed by the processor and the modules of the electronic device (including but not limited to image data, audio data, voice communication data and video communication data), and may be implemented by flash memory (FLASH) or random access memory (RAM).

Further, data may be transferred between the processor, the communication interface and the memory through a bus, which may comprise any number of interconnected buses and bridges connecting together the various circuits of one or more processors and memories.

It can be understood that the options in Embodiment 1 above likewise apply to this embodiment, so the description is not repeated here.

The terms describing positional relationships in the drawings are for illustrative purposes only and should not be construed as limiting this patent;

Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the present invention and are not intended to limit its implementation. It should be understood that in the various embodiments of this disclosure, the serial numbers of the above steps/processes do not imply an order of execution; the execution order of each step/process should be determined by its function and internal logic and should not constitute any limitation on the implementation of the embodiments. It should also be understood that the system/device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling or communication connection between the components may be through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms. For those of ordinary skill in the art, other changes or modifications of different forms can be made on the basis of the above description; it is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (10)

1. An ultra-high resolution vegetation canopy height estimation method, comprising:
acquiring field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for a sample area, and combining the data into a first multi-source data set;
performing data preprocessing on the first multi-source data set to obtain a second multi-source data set;
establishing an initial ultra-high resolution vegetation canopy height estimation model with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and training the ultra-high resolution vegetation canopy height estimation model with the second multi-source data set;
and acquiring a vegetation image of the target area, and obtaining vegetation canopy height information of the target area with the ultra-high resolution vegetation canopy height estimation model.
2. The method of claim 1, wherein acquiring the field ultra-high resolution aerial RGB image data, LIDAR point cloud data, Copernicus digital elevation model data and synthetic aperture radar satellite data for the sample area comprises:
laying out image control points in a regional network layout;
collecting initial aerial RGB image data and initial LIDAR point cloud data of the sample area by aerial photography along a preset route;
according to the image control points, performing control point densification, aerial triangulation and accuracy monitoring on the ultra-high resolution initial aerial RGB image data and the initial LIDAR point cloud data, while solving for the exterior orientation elements of each image, to obtain aerial-triangulation-adjusted field ultra-high resolution aerial RGB image data and LIDAR point cloud data;
and acquiring the Copernicus digital elevation model data and the synthetic aperture radar satellite data, and combining them with the field ultra-high resolution aerial RGB image data and the LIDAR point cloud data into the first multi-source data set.
3. The ultra-high resolution vegetation canopy height estimation method of claim 2, wherein the second multi-source data set comprises canopy height model data, digital orthophoto data, resampled Copernicus digital elevation model data and resampled synthetic aperture radar satellite data;
the data preprocessing of the first multi-source data set includes:
for the LIDAR point cloud data:
filtering the noise points, ground points and non-ground points in the LIDAR point cloud data respectively, and then extracting digital elevation model data and digital surface model data by natural neighbor interpolation;
obtaining canopy height model data from the difference between the digital surface model data and the digital elevation model data;
for the field ultra-high resolution aerial RGB image data:
resampling the field ultra-high resolution aerial RGB image data according to the Copernicus digital elevation model data and the aerial triangulation results, correcting errors image by image and pixel by pixel, and converting the central projection into an orthographic projection to obtain digital orthophoto data;
for the Copernicus digital elevation model data and the synthetic aperture radar satellite data:
and resampling the Copernicus digital elevation model data and the synthetic aperture radar satellite data by spatial interpolation, so that their resolution is the same as that of the field ultra-high resolution aerial RGB image data.
4. The ultra-high resolution vegetation canopy height estimation method of claim 3, wherein training the ultra-high resolution vegetation canopy height estimation model using the second multi-source data set comprises:
stacking the canopy height model data, the resampled Copernicus digital elevation model data, and the resampled synthetic aperture radar satellite data, respectively, with the red, green, and blue bands of the digital orthophoto data to obtain an original combined feature image;
pairing the canopy height model data, as the label image, with the original combined feature image to form an image pair;
applying supervised data augmentation to the image pairs and then dividing them into a training set and a validation set;
and feeding the training set to the encoder so that the ultra-high resolution vegetation canopy height estimation model learns multi-level image features on the training set, and tuning the model parameters using the validation set.
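One reading of the stacking and pairing in claim 4 is sketched below; the array names and shapes are assumptions, and the claim does not fix the channel ordering.

```python
import numpy as np

def build_image_pair(rgb, chm, dem, sar):
    """Stack the layers into one combined feature image; the CHM also
    serves as the label image of the pair.

    rgb: (H, W, 3); chm, dem, sar: (H, W), all on the same resampled grid.
    """
    features = np.dstack([rgb, chm[..., None], dem[..., None], sar[..., None]])
    return features.astype("float32"), chm.astype("float32")  # (image, label)
```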
5. The ultra-high resolution vegetation canopy height estimation method of claim 4, wherein the data augmentation comprises geometric transformation, noise addition, padding, erasure, and/or data pair formation.
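A paired-augmentation sketch covering the geometric transformation and noise addition items of claim 5 (padding, erasure, and data pair formation are omitted for brevity; all parameters are illustrative):

```python
import numpy as np

def augment_pair(image, label, rng=None):
    """Apply the same geometric transform to image and CHM label so the
    pixels stay aligned; additive noise perturbs the image only."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(0, 4))                       # random 90-degree rotation
    image, label = np.rot90(image, k), np.rot90(label, k)
    if rng.random() < 0.5:                            # random horizontal flip
        image, label = image[:, ::-1], label[:, ::-1]
    image = image + rng.normal(0.0, 0.01, image.shape)
    return image.copy(), label.copy()
```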
6. The ultra-high resolution vegetation canopy height estimation method of claim 2, wherein, when the initial aerial RGB image data and the initial LiDAR point cloud data are acquired by aerial photography, route quality control requires that the flight lines along the boundary of the photographed area extend 3 baselines beyond the boundary line; that the lateral coverage extend beyond the boundary line by 50% of the image frame; that the course (forward) overlap be not less than 70%; and that the lateral (side) overlap be not less than 55%.
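The overlap figures of claim 6 translate directly into exposure and flight-line spacing. The footprints in the sketch below (300 m along-track, 200 m across-track) are illustrative assumptions, not claim values:

```python
def spacing(footprint_m: float, overlap: float) -> float:
    """Distance between exposures (or adjacent flight lines) for a given
    image footprint and fractional overlap."""
    return footprint_m * (1.0 - overlap)

print(spacing(300.0, 0.70))  # 90.0 m between exposure stations (70% course overlap)
print(spacing(200.0, 0.55))  # 90.0 m between flight lines (55% lateral overlap)
```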
7. The method of any one of claims 1-6, wherein establishing the initial ultra-high resolution vegetation canopy height estimation model comprises:
establishing the encoder based on a deep residual convolutional neural network chosen from ResNet50, ResNet101, and ResNet152, and replacing the classification module of the deep residual convolutional neural network with a convolutional layer having 256 output feature channels, the encoder learning multi-level features of an input image and outputting a high-level feature result for the vegetation canopy height;
setting the input feature channels of the first convolution module of the deep residual convolutional neural network to 64, and prepending a size-preserving 3×3 convolution module with 64 output feature channels to the deep residual convolutional neural network to receive the input image;
and establishing the decoder based on two FPN modules, two Transformer modules, and a plurality of upsampling layers, the decoder outputting a canopy height prediction result, as the output of the ultra-high resolution vegetation canopy height estimation model, from the feature results extracted by the different residual modules and the high-level feature result.
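A minimal PyTorch reading of the claimed encoder, assuming a ResNet50 backbone and a 6-channel stacked input; the stem placement and the 64/256 channel counts follow the claim, while the input width and the untrained weights are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class Encoder(nn.Module):
    def __init__(self, in_channels: int = 6):
        super().__init__()
        # Size-preserving 3x3 convolution with 64 output features,
        # prepended to the backbone to receive the stacked input image.
        self.stem = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        backbone = resnet50(weights=None)
        # First convolution module of the backbone now takes 64 input features.
        backbone.conv1 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.layer0 = nn.Sequential(backbone.conv1, backbone.bn1,
                                    backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        # Classification module replaced by a convolution with 256 output features.
        self.head = nn.Conv2d(2048, 256, kernel_size=1)

    def forward(self, x):
        x = self.layer0(self.stem(x))
        c1 = self.layer1(x)
        c2 = self.layer2(c1)              # second residual module -> first FPN
        c3 = self.layer3(c2)              # third residual module -> second FPN
        top = self.head(self.layer4(c3))  # high-level canopy-height features
        return c2, c3, top
```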
8. The method of claim 7, wherein establishing the decoder based on two FPN modules, two Transformer modules, and a plurality of upsampling layers comprises:
connecting the second residual module and the third residual module of the deep residual convolutional neural network to the first FPN module and the second FPN module, respectively, for convolution, and upsampling the high-level feature result output by the encoder, then successively adding it to and upsampling it with the outputs of the two FPN modules to obtain a feature result with 256 feature channels;
setting a first Transformer module and a second Transformer module whose inputs are, respectively, the upsampled result after the first FPN module and the upsampled result after the second FPN module; upsampling the high-level feature result output by the encoder and successively fusing it with the outputs of the two Transformer modules; and setting an upsampling layer, a convolution layer, and a ReLU activation layer to process the fused result to obtain a canopy height prediction result with the same size as the input image.
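The wiring of claim 8 admits more than one reading; the sketch below is one plausible interpretation in which each FPN lateral output is summed with the upsampled higher-level result and then refined by a Transformer module applied over flattened spatial tokens. Channel counts assume the ResNet50 encoder sketched above; everything else is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformerBlock(nn.Module):
    """Hypothetical wrapper: self-attention over flattened H*W tokens."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.layer(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # FPN lateral convolutions for ResNet50 stages 2 and 3 (512/1024 channels).
        self.fpn1, self.fpn2 = nn.Conv2d(512, 256, 1), nn.Conv2d(1024, 256, 1)
        self.tr1, self.tr2 = SpatialTransformerBlock(), SpatialTransformerBlock()
        self.out = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # one canopy-height channel
        )

    @staticmethod
    def _up(x, ref):
        return F.interpolate(x, size=ref.shape[-2:], mode="bilinear",
                             align_corners=False)

    def forward(self, c2, c3, top):
        p3 = self.tr2(self.fpn2(c3) + self._up(top, c3))  # fuse, then attend
        p2 = self.tr1(self.fpn1(c2) + self._up(p3, c2))
        x = F.interpolate(p2, scale_factor=8, mode="bilinear", align_corners=False)
        return self.out(x)                                # same size as the input
```

As a shape check under these assumptions, `Decoder()(*Encoder()(torch.randn(1, 6, 256, 256)))` returns a `(1, 1, 256, 256)` tensor.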
9. An ultra-high resolution vegetation canopy height estimation system employing the method of any one of claims 1-8, comprising:
a multi-source data acquisition module for acquiring field ultra-high resolution aerial RGB image data, LiDAR point cloud data, Copernicus digital elevation model data, and synthetic aperture radar satellite data for a sample area and combining the data into a first multi-source data set; for performing data preprocessing on the first multi-source data set to obtain a second multi-source data set; and for acquiring a vegetation image of the target area;
a model training module for establishing an initial ultra-high resolution vegetation canopy height estimation model, with a deep residual convolutional neural network as the encoder and FPN and Transformer modules as the decoder, and training the ultra-high resolution vegetation canopy height estimation model using the second multi-source data set;
and a vegetation canopy height information estimation module for hosting the trained ultra-high resolution vegetation canopy height estimation model and obtaining vegetation canopy height information of the target area from the vegetation image of the target area using the model.
10. A computer-readable storage medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method of any one of claims 1-8.
CN202311747135.XA 2023-12-18 2023-12-18 Ultra-high resolution vegetation canopy height estimation method Pending CN117726943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311747135.XA CN117726943A (en) 2023-12-18 2023-12-18 Ultra-high resolution vegetation canopy height estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311747135.XA CN117726943A (en) 2023-12-18 2023-12-18 Ultra-high resolution vegetation canopy height estimation method

Publications (1)

Publication Number Publication Date
CN117726943A true CN117726943A (en) 2024-03-19

Family

ID=90201177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311747135.XA Pending CN117726943A (en) 2023-12-18 2023-12-18 Ultra-high resolution vegetation canopy height estimation method

Country Status (1)

Country Link
CN (1) CN117726943A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118710832A (en) * 2024-06-26 2024-09-27 南京信息工程大学 A LiDAR prior-guided satellite image DEM generation method, medium and device
CN118710832B (en) * 2024-06-26 2025-03-14 南京信息工程大学 LiDAR priori guided satellite image DEM generation method, medium and equipment

Similar Documents

Publication Publication Date Title
CN109934153B (en) Building extraction method based on gating depth residual error optimization network
CN115564926B (en) Three-dimensional patch model construction method based on image building structure learning
CN110070025A (en) Objective detection system and method based on monocular image
CN112288637B (en) Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method
CN110991430B (en) Ground feature identification and coverage rate calculation method and system based on remote sensing image
WO2016099378A1 (en) Method and system for classifying a terrain type in an area
CN113962889B (en) Method, device, equipment and medium for removing thin clouds from remote sensing images
CN107918776A (en) A kind of plan for land method, system and electronic equipment based on machine vision
CN116402942A (en) Large-scale building three-dimensional reconstruction method integrating multi-scale image features
CN118115732B (en) A semantic segmentation method and device integrating optical and SAR channel correlation
CN118212129A (en) Multi-source fusion super-resolution method for hyperspectral remote sensing images based on bilinear unmixing
JP7028336B2 (en) Learning equipment, learning methods and learning programs
CN117726943A (en) Ultra-high resolution vegetation canopy height estimation method
CN120339481B (en) A terrain surveying and mapping method and system based on artificial intelligence
CN115862010A (en) High-resolution remote sensing image water body extraction method based on semantic segmentation model
CN112149711B (en) Hydrographic and terrain data generation method, device, computer equipment and storage medium
Won et al. An experiment on image restoration applying the cycle generative adversarial network to partial occlusion Kompsat-3A image
CN114092803A (en) Cloud detection method, device, electronic device and medium based on remote sensing image
CN118037596A (en) Methods, equipment, media and products for repairing highlight areas of UAV vegetation images
CN118628370A (en) An image processing method and system for detailed marine land space planning
CN118674912A (en) Weak and small target identification and positioning method for airborne wide-field investigation load
CN117092647A (en) Method and system for manufacturing regional satellite-borne optical and SAR image DOM
CN117372638A (en) Small-scale digital outcrop lithology mapping method based on semantic segmentation algorithm
Risso et al. Building damage assessment in conflict zones: A deep learning approach using geospatial sub-meter resolution data
CN113011294A (en) Method, computer equipment and medium for identifying circular sprinkling irrigation land based on remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination