
CN111006645A - Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction - Google Patents


Info

Publication number
CN111006645A
CN111006645A (application CN201911342979.XA)
Authority
CN
China
Prior art keywords
point cloud
coordinates
rmse
motion
point
Prior art date
Legal status
Pending
Application number
CN201911342979.XA
Other languages
Chinese (zh)
Inventor
宋娟
董丽
刘培学
张玉良
刘纪新
姜宝华
曹爱霞
邵瑞影
刘娜
李华光
曾实现
杨晓白
Current Assignee
Qingdao Huanghai University
Original Assignee
Qingdao Huanghai University
Priority date
Filing date
Publication date
Application filed by Qingdao Huanghai University filed Critical Qingdao Huanghai University
Priority to CN201911342979.XA priority Critical patent/CN111006645A/en
Publication of CN111006645A publication Critical patent/CN111006645A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction, and in particular relates to the surveying, mapping and reconstruction of extreme terrain. The method acquires imagery in three configurations: horizontal-direction camera shots, 45-degree oblique camera shots, and the combination of the two; the final photogrammetric results, the point cloud and the orthoimage, are obtained from the merged image set. In addition, a software program for drawing contour lines and vertical sections is developed to represent the geometric features of the mapped terrain. The results show that the method characterizes terrain morphology more accurately and efficiently. At a test flying height of about 100 meters, the root-mean-square errors (RMSE) in the X, Y and Z directions are 0.055 m, 0.071 m and 0.062 m respectively, even though the survey conditions are markedly more difficult than those of comparable methods.

Description

Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction
Technical Field
The invention relates to the field of surveying and mapping and reconstruction of extreme terrains, in particular to an unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction.
Background
In recent years, technologies related to digital elevation modeling have developed rapidly. For most geomorphological applications, topographic surveys are primarily carried out with a robotic total station or a differential Global Navigation Satellite System (GNSS). Newer technologies such as terrestrial laser scanning (TLS) and airborne laser scanning (ALS) have significantly improved the accuracy of Digital Elevation Models (DEMs), but they are often time-consuming and costly, and are not suitable for extremely complex or dynamic terrain.
In order to overcome these technical limitations, researchers have combined computer vision and image analysis to develop the motion and structure reconstruction (SfM, structure-from-motion) technique, which automatically solves for scene geometry and for camera position and orientation. UAV imaging combined with SfM performs well for surveying and mapping landforms and terrain. Compared with traditional manned aircraft and satellites, UAVs are low-cost, flexible to operate, and offer higher spatial and temporal resolution, which is of great significance for the exploration of hazardous regions. Some research results already exist. For example, comparisons of different SfM-derived terrain data sets obtained from the same observation show centimeter-level measurement accuracy in most cases, indicating that UAV terrain reconstruction based on the SfM algorithm is reproducible. Using a flying height of 50 m and 10 ground control points, a DSM and an orthoimage have been obtained by UAV photogrammetry, and the influence of factors such as flying height, number of ground control points and landform shape on the DSM and orthoimage has been analyzed.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction. The method acquires imagery in three ways: horizontal camera shots, 45-degree oblique camera shots, and the combination of the two; the final photogrammetric results, a point cloud and an orthoimage, are obtained from the merged image set.
The invention specifically adopts the following technical scheme:
an unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction is characterized by comprising the following steps:
step 1, using an unmanned aerial vehicle to acquire images: the flight plan is loaded into the unmanned aerial vehicle, and the unmanned aerial vehicle performs two flights with different camera axes, the axes being horizontal and inclined 45 degrees downward, respectively;
step 2, processing the collected image, wherein the specific processing process is as follows:
2.1, a preliminary 3D model is calculated; the result includes the initial camera calibration parameters, the relative position and orientation of the camera for each image, and the 3D relative coordinates of the sparse point cloud;
2.2, the point cloud is densified to obtain a 3D model more detailed than that of 2.1, and the measured ground control point and control point coordinates are used as the geographic coordinate reference of the point cloud;
2.3, a grid DSM is generated with a specific grid size and an orthoimage is output at a pre-selected resolution;
step 3, contour lines and cross sections are generated from the original point cloud using a vertical plane and a plane perpendicular to the plane fitted at any point of the target area;
step 4, the coordinates of the control points are evaluated.
Preferably, in step 1, the flight with the horizontal photography axis comprises 4 flights of length 150 m at different heights, with overlap rates of 90% and 60% between images, respectively; the flight with the axis inclined 45 degrees downward comprises 2 flights of length 150 m; the image resolution is adjusted to 4240 × 2832 pixels, and the ground sampling distance is 1.86 cm.
Preferably, the preliminary 3D model calculated in step 2.1 includes initial camera calibration parameters, relative camera position and orientation for each image, and 3D relative coordinates of the sparse point cloud.
Preferably, in step 2.3, 5 measurement points are used as ground control points, and the other 18 points are used as control points.
Preferably, in step 2, three different photogrammetry plans are set:
72 images taken using the horizontal axis direction;
36 images taken using a 45 ° tilt axis;
merging all 108 images of the horizontal and inclined axes;
a plane is adjusted to the slope surface and used for projection to create the orthoimage, the plane being determined by fitting the terrain point cloud obtained in step 2.
Preferably, in step 4, for each control point, the coordinates interpolated from the four closest points of the dense point cloud generated in the photogrammetry process are compared with the measured coordinates of the control point, and the accuracy is evaluated in the east, north and height directions to obtain the RMSE_X, RMSE_Y and RMSE_Z accuracy measures, given by formulas (1), (2) and (3) respectively:

$$\mathrm{RMSE}_X=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(X_{si}-X_{ci})^{2}}\qquad(1)$$

$$\mathrm{RMSE}_Y=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Y_{si}-Y_{ci})^{2}}\qquad(2)$$

$$\mathrm{RMSE}_Z=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Z_{si}-Z_{ci})^{2}}\qquad(3)$$

where n is the number of control points; X_si, Y_si and Z_si are the X, Y and Z coordinates of the i-th control point measured by the total station; and X_ci, Y_ci and Z_ci are the X, Y and Z coordinates of the point interpolated from the point cloud.
The invention has the following beneficial effects:
the mapping method performs mapping in three aspects: horizontal direction camera shooting, 45 degree oblique camera shooting, and a combination of the two. To generate the point cloud and the orthoimage, the final photogrammetry results are obtained from the combined image set. In addition, a software program for drawing contour lines and vertical sections is developed to represent the geometric features of the mapped terrain. The result shows that the method is more accurate and efficient in the aspect of topographic form characterization. The test flying height is about 100 meters, the standard error (RMSE) of X, Y and the Z direction is 0.055m, 0.071m and 0.062m respectively, and the difficulty of implementing the measurement project is obviously higher than that of the similar method.
Drawings
FIG. 1 is a flow chart of a method for mapping unmanned aerial vehicles based on motion and structural reconstruction;
FIG. 2 is a three-dimensional block diagram of an area of interest;
FIG. 3 is a horizontal cross-section and a vertical cross-section defined by a section plane;
FIG. 4a is an error plot of a horizontal axis measurement term;
FIG. 4b is an error map of the tilt axis measurement term;
FIG. 4c is an error graph of the merging of two measurement items;
FIG. 5a is a graph showing the results of a cross section of CS 1;
FIG. 5b is a graph showing the results of the CS2 cross section.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings:
an unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction is characterized by comprising the following steps:
step 1, an Unmanned Aerial Vehicle (UAV) is adopted for image acquisition, MikroKopter-Tool software is adopted for loading a flight plan into the UAV, the UAV carries out two times of flight with different camera axes, and the camera axes of the flight are respectively in the horizontal direction and have 45-degree downward inclination; during flight, the UAV was kept in a vertical plane at a distance of about 50 meters from the target surface, the flight of the horizontal filming axis comprising 4 flights of length 150m at different heights ((20m, 50m, 80m and 100m)), for a total of 72 images selected for photogrammetric processing, with overlap rates between the images of 90% and 60%, respectively; the flight with 45-degree inclination on the axis comprises the flights with the heights of 50m and 100m and the length of 150m, the image resolution is adjusted to 4240 multiplied by 2832 pixels, the ground sampling interval is 1.86cm, and 36 images are selected in total for photogrammetric processing. The three-dimensional coordinates of 26 points distributed on the target surface were measured using a total station without a reflecting prism.
In the method, the total station is used to measure the coordinates of 18 points on a cut slope. Points on the slope are very difficult to access, but they can be identified in the photographs and serve as a geographic reference for the point cloud. These points cannot be more than 35 m above road level, because beyond that the elevation angle of the total station telescope becomes too steep for observation.
Step 2, image processing is carried out on the collected images with Pix4Dmapper Pro (version 3.1); the specific process is as follows:
2.1, a preliminary 3D model is calculated; the result includes the initial camera calibration parameters, the relative position and orientation of the camera for each image, and the 3D relative coordinates of the sparse point cloud.
2.2, realizing the densification of the point cloud, obtaining a 3D model more detailed than 2.1, and using the measured ground control point and control point coordinates as the geographic coordinate reference of the point cloud;
2.3, a grid DSM is generated with a specific grid size and an orthoimage is output at a pre-selected resolution. More than 3 ground control points are preferably used for optimum accuracy; this method uses 5 measured points as ground control points and the other 18 points as control points.
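As an illustration of the gridding in step 2.3, a raster DSM can be produced by binning the point cloud into cells and keeping the highest elevation per cell. The sketch below is illustrative only, not the commercial package's implementation; it assumes NumPy, and the function name `grid_dsm` and the max-Z-per-cell rule are choices made here.

```python
import numpy as np

def grid_dsm(points, cell, nodata=np.nan):
    """Rasterize an (n x 3) point cloud into a grid DSM with the given
    cell size, keeping the highest Z per cell (a surface model)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, y0 = x.min(), y.min()
    cols = ((x - x0) // cell).astype(int)
    rows = ((y - y0) // cell).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), nodata)
    for r, c, h in zip(rows, cols, z):
        # Fill empty cells; otherwise keep the larger elevation.
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h
    return dsm
```

In practice a commercial package would also interpolate empty cells and filter noise; this sketch simply leaves empty cells as no-data.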
Three different photogrammetry plans are set up:
(1) 72 images taken using the horizontal axis direction;
(2) 36 images taken using a 45 ° tilt axis;
(3) all 108 images of the horizontal and inclined axes merged.
Since the target surface is nearly vertical, an orthoimage projected onto the horizontal plane provides no valuable, and even confusing, information: in some areas there may be 2 or more Z coordinates for a given pair of X and Y coordinates. To avoid this, a plane is adjusted to the slope surface and used for projection to create the orthoimage; this plane is determined by fitting the terrain point cloud obtained in step 2. In this task, only points located in the region of interest are considered, similar to the approach of terrestrial or close-range photogrammetry.
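The plane used for this projection can be obtained by a least-squares fit to the terrain point cloud. The following sketch assumes NumPy; the function names `fit_plane` and `project_to_plane` and the SVD formulation are illustrative choices, not the patent's actual code.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (n x 3) point cloud.

    Returns (centroid, normal): the plane passes through the centroid,
    and the normal is the right singular vector belonging to the
    smallest singular value (direction of least variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal

def project_to_plane(points, centroid, normal):
    """Project 3D points onto the fitted plane and return 2D coordinates
    in an orthonormal basis (u, v) lying in that plane."""
    d = points - centroid
    # Remove the component along the normal.
    d_in_plane = d - np.outer(d @ normal, normal)
    # Any orthonormal in-plane basis works; take the two dominant
    # singular directions of the projected points.
    _, _, vt = np.linalg.svd(d_in_plane)
    u, v = vt[0], vt[1]
    return np.column_stack((d_in_plane @ u, d_in_plane @ v))
```

The 2D coordinates returned by `project_to_plane` are exactly what is needed to render an orthoimage of a near-vertical slope without the multi-valued-Z problem described above.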
Step 3, contour lines and cross sections are generated from the original point cloud using a vertical plane and a plane perpendicular to the plane fitted at any point of the target area. The standard procedure for generating a DSM from a point cloud provides only one Z coordinate for each planimetric point (X, Y), so complex terrain cannot be accurately characterized. Furthermore, current commercial software packages cannot generate the corresponding contours and cross sections by clicking on a point in the orthoimage. Therefore, to extract the maximum information from the point cloud, the code is ported to an embedded development board (a TI board), using mixed C++ and MATLAB programming to improve running speed. This characterization reproduces very realistic surface shapes and is of great significance for planning studies on complex terrain. The proposed software program contains two main parts:
contour lines and cross sections are extracted from the point cloud. First, the photogrammetric items are included in (X)MAX,YMAX,ZMAX) And (X)MIN,YMIN, ZMIN) The original point cloud generated by the item is represented as (X) in the defined three-dimensional framemax,Ymax,ZmaxB) and (X)min,Ymin,Zmin) A defined smaller three-dimensional box centered on the target location. Then, contour lines and cross sections are generated. ω represents the fitted plane fitted to the point cloud,
Figure BDA0002332116880000041
generating contour lines for the general horizontal section; piiIs a common vertical profile from which a cross-section is created. In addition, can be applied to the vertical section piiAnd horizontal section
Figure BDA0002332116880000042
Is adjusted to include a sufficient number of points in the corresponding cross-section. This adjustment is very important because the interface accuracy depends on the value. If the numerical value is too low, the number of extracted points is very small, and the cross section cannot be defined well. On the other hand, if the numerical value is too large, the number of extracted points is too large, and a mixed result is generated. Therefore, to obtain optimal values, a software program was developed herein to compare these results, taking into account a plurality of profile width values
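The width adjustment described above amounts to keeping the points of the cloud that lie within a thin slab around the sectioning plane. A minimal sketch, assuming NumPy (`extract_section` is an illustrative name, not the patent's code):

```python
import numpy as np

def extract_section(points, p0, m, width):
    """Return the points of an (n x 3) cloud lying within `width` of the
    cutting plane defined by point p0 and normal m (a slab of thickness
    `width` centered on the plane)."""
    m = np.asarray(m, dtype=float)
    m = m / np.linalg.norm(m)          # ensure a unit normal
    dist = np.abs((points - p0) @ m)   # absolute point-to-plane distance
    return points[dist <= width / 2.0]
```

Choosing `width` trades point coverage against sharpness of the section, which is exactly the 0.5 cm versus 5 cm trade-off evaluated later in the text.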
And 4, evaluating the coordinates of the control points.
For each control point, the coordinates interpolated from the four closest points of the dense point cloud generated in the photogrammetry process are compared with the measured coordinates of the control point, and the accuracy is evaluated in the east (X), north (Y) and height (Z) directions to obtain the RMSE_X, RMSE_Y and RMSE_Z accuracy measures, given by formulas (1), (2) and (3) respectively:

$$\mathrm{RMSE}_X=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(X_{si}-X_{ci})^{2}}\qquad(1)$$

$$\mathrm{RMSE}_Y=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Y_{si}-Y_{ci})^{2}}\qquad(2)$$

$$\mathrm{RMSE}_Z=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Z_{si}-Z_{ci})^{2}}\qquad(3)$$

where n is the number of control points; X_si, Y_si and Z_si are the X, Y and Z coordinates of the i-th control point measured by the total station; and X_ci, Y_ci and Z_ci are the X, Y and Z coordinates of the point interpolated from the point cloud.
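Formulas (1) to (3) can be sketched in a few lines of plain Python; the helper name `rmse` is an illustrative choice, and the same function is applied once per axis.

```python
import math

def rmse(measured, estimated):
    """Root-mean-square error along one axis, per formulas (1)-(3):
    sqrt(sum_i (s_i - c_i)^2 / n), where s_i is the total-station
    coordinate and c_i the coordinate interpolated from the point cloud."""
    n = len(measured)
    return math.sqrt(sum((s - c) ** 2 for s, c in zip(measured, estimated)) / n)
```

Calling it three times, once each with the X, Y and Z coordinate lists of the n control points, yields RMSE_X, RMSE_Y and RMSE_Z.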
And (3) carrying out error analysis on the mapping result obtained by the mapping method:
the boundary coordinates containing the original point cloud are (540117, 4074712, 0) and (540453, 4074967, 143), and the corresponding size is 336 × 255 × 143 m. Since the area of the region of interest is smaller than the total coverage area, the proposed method (fig. 2 and 3) is used to reduce this area. The boundary coordinates of the target region are (Xmin-540220, Ymin-4074750, Zmin-20) and (Xmax-540350, Ymax-4074800, Zmax-90), corresponding to a size of 130 × 50 × 70 m. Considering the reduced three-dimensional box, the number of points in the generated point cloud is 2640231, 220192 and 2933590 for the horizontal axis item, the 45 ° tilt axis item and the merged item, respectively. Therefore, the number of the points to be processed is reduced, and the execution time of the point cloud task is shortened.
From Figs. 4a-4c and Table 1, the errors of the X, Y and Z coordinates of each control point can be seen for each photogrammetry project (horizontal axis, tilt axis, and the combination of the two), together with the error range (maximum error minus minimum error), the mean error, and the RMSE in the X, Y and Z directions, computed from the measured and estimated point-cloud coordinates described previously. For all three components, the error of the merged project is the smallest and that of the oblique project is the largest. If the projects are analyzed separately, the error ranges of the three coordinates are similar. For the merged project, the RMSE in the X, Y and Z directions is 0.055 m, 0.071 m and 0.062 m respectively; for the horizontal-photography-axis project, 0.075 m, 0.090 m and 0.079 m; and for the oblique-photography-axis project, 0.093 m, 0.097 m and 0.101 m. The best accuracy is thus achieved by the merged project, which contains both horizontal and oblique images.
TABLE 1
(Table 1, provided as an image in the original publication, lists the per-control-point errors, error ranges, mean errors and RMSE values for the three photogrammetry projects.)
The software interface based on this mapping method comprises 3 graphical windows. The main window shows the orthoimage projected onto the fitted plane; the second window, above the orthoimage window, shows the contour line; the third window shows the cross section. When the slope image is clicked, a cross appears at the clicked position, and the contour line and cross section are drawn directly in their respective windows, together with the boundary coordinates and contour elevation of each window. In addition, when the cursor is over the orthoimage, the terrain coordinates are displayed at the bottom.
As described in the point-cloud management section, a key adjustment in the software program developed here is the choice of the sectioning-surface width used to generate the cross sections. The points of intersection between the point cloud and the sectioning plane were extracted at widths of 0.5 cm, 1 cm, 2 cm and 5 cm. The results for 2 different cross sections (CS1 and CS2) are given in Figs. 5a and 5b. The axes are not to scale, because the figures are intended to compare the different numbers of points included in the same section; the number at the bottom of each section is the number of points extracted from the point cloud. When a width of 0.5 cm is selected, the cross-section representation does not reach sufficient accuracy because of the excessive gaps between the extracted points (103 and 129 points for CS1 and CS2, respectively). When a width of 5 cm is chosen, a large number of points is extracted (1140 and 1232 points for CS1 and CS2, respectively), but the cross section is not well defined.
Furthermore, the difference between sections using 1cm width (219 and 259 points for CS1 and CS2, respectively) and 2cm width (434 and 501 points for CS1 and CS2, respectively) was not significant, and both gave well-defined representative sections. Thus in the analysis herein, a width of 1cm was chosen to generate contour lines and cross sections.
Comparing this mapping method with similar methods, the merged project, which has the highest accuracy, gives an RMSE of 0.055 m, 0.071 m and 0.062 m in the X, Y and Z directions, respectively. Photogrammetry projects using only horizontal-axis or only oblique-axis images are less accurate than the merged project. The comparison of error ranges and RMSE values therefore shows that: 1) for complex terrain, images need to be taken along different axis directions; and 2) the results of the merged photogrammetry project can meet the mapping-information requirements of engineering projects.
The proposed method yields RMSE values close to, or better than, those obtained by other methods under similar conditions. As summarized in Table 2, at a flying height of 50 m above the ground plane, geometric accuracies of 0.060 m in planimetry and 0.064 m in the Z component are achieved. Studies of digital surface model accuracy that derive orthoimages of different terrain forms by UAV photogrammetry have compared results for different numbers of ground control points at different flight heights: at a 50 m flying height with 5 ground control points, the RMSE values of the X, Y and Z components lie between 0.050 m and 0.060 m. However, those studies used ground control points spread over the target surface, a task significantly easier than the one addressed here.
Although a large number of ground control points spread over the surface helps to achieve higher accuracy, a sufficient number of ground control points cannot be obtained on extremely complex terrain (even using a reflectorless total station), because representative points may not be identifiable in the images. The present method achieves high accuracy on such complex terrain and can meet the requirements of slope-restoration planning.
TABLE 2
(Table 2, provided as an image in the original publication, compares the RMSE values of this method with those of similar methods under similar conditions.)
The image-acquisition scheme of this method makes full use of different flight heights, different flight angles and their combinations. The flying heights are 20 m, 50 m, 80 m and 100 m. The overlap rates between images in the vertical and horizontal flight directions are 90% and 60%, respectively. Three shooting configurations are used: 1) horizontal-direction camera shots; 2) camera shots inclined at 45 degrees; 3) the combination of the two. The image-acquisition scheme is therefore diversified and comprehensive, whereas the literature mostly adopts a single acquisition scheme, combines at most two, and uses a fixed flying height of at most 50 m.
The method presented here enables efficient reconstruction of extremely complex terrain (e.g., rock cliffs), providing valuable information about complex terrain surfaces to engineers, geologists and other technicians. The images processed in the photogrammetry project were obtained from two flights: 1) with the photography axis perpendicular to the target surface; and 2) with the photography axis inclined 45 degrees to the target surface. The accuracy of the resulting point cloud and orthoimage is better than when only a single axis direction is considered. For the merged project, the RMSE obtained in the X, Y and Z directions is 0.055 m, 0.071 m and 0.062 m respectively, matching the highest accuracy that other methods achieve under similar conditions. Furthermore, DSMs derived from point clouds with standard software may not reliably characterize certain very complex terrain surfaces, making it necessary to generate useful information directly from the point cloud. In the future, the proposed method will be improved by analyzing the number and distribution of ground control points, image resolution, and related factors.
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.

Claims (6)

1. An unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction, characterized by comprising the following steps:
Step 1, use an unmanned aerial vehicle for image acquisition: load the flight plan into the unmanned aerial vehicle; the unmanned aerial vehicle performs two flights with different camera axes, the axes being horizontal and inclined 45 degrees downward, respectively;
Step 2, perform image processing on the collected images as follows:
2.1, calculate a preliminary 3D model, the result including the initial camera calibration parameters, the relative position and orientation of the camera for each image, and the 3D relative coordinates of the sparse point cloud;
2.2, densify the point cloud to obtain a 3D model more detailed than that of 2.1, using the measured ground control point and control point coordinates as the geographic coordinate reference of the point cloud;
2.3, generate a grid DSM with a specific grid size and output an orthoimage at a pre-selected resolution;
Step 3, generate contour lines and cross sections from the original point cloud using a vertical plane and a plane perpendicular to the plane fitted at any point of the target area;
Step 4, evaluate the coordinates of the control points.

2. The unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction of claim 1, characterized in that, in step 1, the flight with the horizontal photography axis comprises 4 flights of length 150 m at different heights, with overlap rates of 90% and 60% between images, respectively; the flight with the axis inclined 45 degrees downward comprises 2 flights of length 150 m; the image resolution is adjusted to 4240 × 2832 pixels, and the ground sampling distance is 1.86 cm.

3. The unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction of claim 1, characterized in that the preliminary 3D model calculated in step 2.1 includes the initial camera calibration parameters, the relative position and orientation of the camera for each image, and the 3D relative coordinates of the sparse point cloud.

4. The unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction of claim 1, characterized in that, in step 2.3, 5 measured points are used as ground control points and the other 18 points are used as control points.

5. The unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction of claim 1, characterized in that, in step 2, three different photogrammetry plans are set:
72 images taken with the horizontal axis direction;
36 images taken with a 45° tilt axis;
all 108 images of the horizontal and tilt axes merged;
and a plane is adjusted to the slope surface and used for projection to create the orthoimage, the plane being determined by fitting the terrain point cloud obtained in step 2.

6. The unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction of claim 1, characterized in that, in step 4, for each control point, the coordinates interpolated from the four closest points of the dense point cloud generated in the photogrammetry process are compared with the measured coordinates of the control point, and the accuracy is evaluated in the east, north and height directions to obtain the RMSE_X, RMSE_Y and RMSE_Z accuracy measures, given by formulas (1), (2) and (3):

$$\mathrm{RMSE}_X=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(X_{si}-X_{ci})^{2}}\qquad(1)$$

$$\mathrm{RMSE}_Y=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Y_{si}-Y_{ci})^{2}}\qquad(2)$$

$$\mathrm{RMSE}_Z=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Z_{si}-Z_{ci})^{2}}\qquad(3)$$

where n is the number of control points; X_si, Y_si and Z_si are the X, Y and Z coordinates of the i-th control point measured by the total station; and X_ci, Y_ci and Z_ci are the X, Y and Z coordinates of the point interpolated from the point cloud.
CN201911342979.XA 2019-12-23 2019-12-23 Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction Pending CN111006645A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911342979.XA CN111006645A (en) 2019-12-23 2019-12-23 Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction

Publications (1)

Publication Number Publication Date
CN111006645A true CN111006645A (en) 2020-04-14

Family

ID=70116077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911342979.XA Pending CN111006645A (en) 2019-12-23 2019-12-23 Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction

Country Status (1)

Country Link
CN (1) CN111006645A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112729260A (en) * 2020-12-15 2021-04-30 广州极飞科技股份有限公司 Surveying and mapping system and surveying and mapping method
CN112947307A (en) * 2021-01-22 2021-06-11 青岛黄海学院 Control method for surface appearance of high-speed cutting workpiece
CN113759953A (en) * 2021-11-09 2021-12-07 四川格锐乾图科技有限公司 Flight attitude photo correction method based on open source DEM data
CN119107348A (en) * 2024-11-07 2024-12-10 哈尔滨工业大学(威海) Robotic arm grasping method and device based on point cloud completion

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009077128A1 (en) * 2007-12-14 2009-06-25 Hülsta-Werke Hüls Gmbh & Co. Kg Panel
CN201740777U (en) * 2010-04-06 2011-02-09 长江水利委员会长江科学院 Rapid measurement system for measuring slope soil erosion
CN102914501A (en) * 2012-07-26 2013-02-06 南京大学 Method for calculating extinction coefficients of three-dimensional forest canopy by using laser-point cloud
CN103824330A (en) * 2014-03-03 2014-05-28 攀钢集团矿业有限公司 Method for building ore body middle-section layered graph and three-dimensional model
EP2472221B1 (en) * 2011-01-04 2014-11-19 Kabushiki Kaisha Topcon Flight Control System for Flying Object
CN105549060A (en) * 2015-12-15 2016-05-04 大连海事大学 Target positioning system based on position and attitude of airborne photoelectric pod
CN105865427A (en) * 2016-05-18 2016-08-17 三峡大学 Individual geological disaster emergency investigation method based on remote sensing of small unmanned aerial vehicle
CN107514993A (en) * 2017-09-25 2017-12-26 同济大学 Data acquisition method and system for single building modeling based on UAV
US20180082472A1 (en) * 2016-09-16 2018-03-22 Topcon Corporation Image processing device, image processing method, and image processing program
CN109357617A (en) * 2018-10-25 2019-02-19 东北大学 A UAV-based Displacement and Deformation Monitoring Method for High Steep Rock Slopes
CN109612432A (en) * 2018-12-24 2019-04-12 东南大学 A Fast Method to Obtain Contour Lines of Complex Terrain
JP2019074532A (en) * 2017-10-17 2019-05-16 有限会社ネットライズ Method for giving real dimensions to slam data and position measurement using the same
CN109920028A (en) * 2019-03-12 2019-06-21 中国电建集团中南勘测设计研究院有限公司 A kind of width is averaged the landform modification method of facade two dimensional model
CN110111414A (en) * 2019-04-10 2019-08-09 北京建筑大学 A kind of orthography generation method based on three-dimensional laser point cloud
CN110375722A (en) * 2019-06-25 2019-10-25 北京工业大学 A method of duct pieces of shield tunnel joint open is extracted based on point cloud data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
VERHOEVEN, G: "Taking Computer Vision Aloft - Archaeological Three-dimensional Reconstructions from Aerial Photographs with PhotoScan", 《ARCHAEOLOGICAL PROSPECTION》 *
ZHOU, W ET AL.: "Automated extraction of 3D vector topographic feature line from terrain point cloud", 《GEOCARTO INTERNATIONAL》 *
JIANG San et al.: "An Efficient SfM Reconstruction Scheme for Oblique UAV Images", 《Geomatics and Information Science of Wuhan University》 *


Similar Documents

Publication Publication Date Title
CN111724477B (en) Method for constructing multi-level three-dimensional terrain model by multi-source data fusion
Agüera-Vega et al. Reconstruction of extreme topography from UAV structure from motion photogrammetry
Zeybek Accuracy assessment of direct georeferencing UAV images with onboard global navigation satellite system and comparison of CORS/RTK surveying methods
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
CN111006645A (en) Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction
Rahman et al. Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach
Lee et al. Enhancement of low-cost UAV-based photogrammetric point cloud using MMS point cloud and oblique images for 3D urban reconstruction
Jebur et al. Assessing the performance of commercial Agisoft PhotoScan software to deliver reliable data for accurate 3D modelling
Wu et al. Geometric integration of high-resolution satellite imagery and airborne LiDAR data for improved geopositioning accuracy in metropolitan areas
WO2022104251A1 (en) Image analysis for aerial images
CN110207676A (en) The acquisition methods and device of a kind of field ditch pool parameter
Sammartano et al. Oblique images and direct photogrammetry with a fixed wing platform: first test and results in Hierapolis of Phrygia (TK)
CN117572455B (en) Mountain reservoir topographic map mapping method based on data fusion
Sani et al. 3D reconstruction of building model using UAV point clouds
Rebelo et al. Building 3D city models: Testing and comparing Laser scanning and low-cost UAV data using FOSS technologies
AGUILAR et al. 3D coastal monitoring from very dense UAV-Based photogrammetric point clouds
CN115979125A (en) Method, system, device and storage medium for BIM combined with UAV to measure earthwork
Chonpatathip Earthwork volume measurement in road construction using unmanned aerial vehicle (UAV)
Dursun et al. 3D city modelling of Istanbul historic peninsula by combination of aerial images and terrestrial laser scanning data
Kern et al. An accurate real-time UAV mapping solution for the generation of orthomosaics and surface models
Luh et al. High resolution survey for topographic surveying
CN115900655A (en) A method for planning inspection routes
Wang et al. Grid algorithm for large-scale topographic oblique photogrammetry precision enhancement in vegetation coverage areas
Sahin et al. Terrestrial Backpack Laser Scanner Usage in Mobile Surveying: A Case Study for Cadastral Surveying

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200414