CN112288637B - Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method

- Publication number: CN112288637B (application CN202011316356.8A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/70 — Denoising; Smoothing
- G06T7/10 — Segmentation; Edge detection
- G06T2207/10032 — Satellite or aerial image; Remote sensing
- G06T2207/20021 — Dividing image into blocks, subimages or windows
Abstract
The invention provides a rapid splicing device and a rapid splicing method for aerial images of an unmanned aerial vehicle. The device comprises a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives video data and POS data captured by the unmanned aerial vehicle, preprocesses them, and sends the preprocessed video data and POS data to the display processing module. The display processing module displays the preprocessed video data and POS data on a spherical model in real time, in preparation for splicing. The post-processing module splices the images displayed in real time and corrects and fuses the overlapping areas. Compared with the traditional offline three-dimensional modelling method, the device and method require no hour-level processing time and achieve real-time performance. Traditional direct image-based stitching suffers from misalignment and distortion; the orthophoto obtained by the present method shows no misalignment or obvious distortion, reaching an accuracy close to that of three-dimensional reconstruction.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a rapid splicing device and a rapid splicing method for aerial images of an unmanned aerial vehicle.
Background
With the development of technology, unmanned aerial vehicle aerial photography is widely applied. It is efficient, flexible, rapid and low-cost, and the digital cameras and video cameras carried on board can acquire high-resolution images. Its application fields are extensive, including agriculture, forestry, electric power, land resources and urban planning. Oblique photography technology mounts one vertical camera and four oblique cameras on the unmanned aerial vehicle and acquires images synchronously from five viewing angles, obtaining rich high-resolution textures of building roofs and facades. A three-dimensional model is generated offline from the acquired images, from which a spliced digital orthophoto map is derived. The digital orthophoto map is abbreviated DOM (Digital Orthophoto Map). A DOM is produced from a scanned digital aerial photograph or remote sensing image using a DEM (Digital Elevation Model): the image is radiometrically corrected, differentially rectified pixel by pixel, mosaicked, and cut to a specified sheet range, with a kilometre grid, inner and outer map decoration, and annotation.
The existing oblique photography technology has the problems of high hardware requirement, high time delay, long three-dimensional model output period and the like, and cannot meet the real-time requirement in the disaster prevention process.
Disclosure of Invention
The invention aims to provide a rapid splicing device and a rapid splicing method for aerial images of an unmanned aerial vehicle, which can solve the problems of high image splicing time delay and long splicing result output period in the prior art.
The object of the invention is achieved through the following technical solution:
In a first aspect, the invention provides a rapid splicing device for aerial images of an unmanned aerial vehicle, comprising a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives video data and POS data captured by the unmanned aerial vehicle, preprocesses them, and sends the preprocessed video data and POS data to the display processing module. The display processing module displays the preprocessed video data and POS data on a spherical model in real time, in preparation for splicing. The post-processing module splices the images displayed in real time and corrects and fuses the overlapping areas.
Further, the preprocessing module includes:
The sampling module is used for sampling video data and POS data of the unmanned aerial vehicle at fixed time, and scaling the sampled single-frame image to a set scale and then storing the single-frame image;
The feature extraction module is used for extracting features of the single-frame image;
the space matching module is used for matching, by means of GPS information and a kd tree, the k frames nearest to the current frame image to form image matching pairs;
The feature matching module is used for carrying out feature matching and error matching filtering by utilizing feature points among the images;
And the generating point cloud module is used for generating a track by utilizing the matching relation among the characteristic points, performing triangulation on the generated track to generate a new three-dimensional space point and performing error adjustment on the three-dimensional space point.
Further, the display processing module includes:
the first 2D triangular mesh generation module is used for triangulating the characteristic points by using the matching result of the characteristic matching module to generate a 2D triangular mesh;
the 3D triangular mesh generation module is used for triangulating the three-dimensional space points to generate a 3D triangular mesh;
the second 2D triangular mesh generation module is used for removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
The image segmentation module is used for cutting a single frame image into a plurality of image blocks by using a first 2D triangular grid;
a DSM generation module for generating a DSM elevation map using the 3D triangle mesh;
and the DOM generation module is used for generating digital DOM and image four-point information by using the second 2D triangular mesh and the image segmentation result.
Further, the post-processing module includes:
the DSM splicing module is used for directly splicing the plurality of DSMs generated by the DSM generating module by using four-point information to form a complete DSM image;
the complete DSM processing module is used for carrying out smooth gradual change processing on the overlapped area of the complete DSM image;
The DOM splicing module is used for directly splicing the DOM generated by the DOM generating module by using the four-point information to form a complete DOM image;
And the complete DOM processing module is used for carrying out smooth gradual change processing on the overlapped area of the complete DOM image.
In a second aspect, the invention provides a rapid splicing method for aerial images of an unmanned aerial vehicle, comprising the following steps:
s1, receiving video data and POS data and preprocessing the video data and the POS data;
Step S2, displaying the preprocessed video data and POS data in real time;
And S3, performing splicing processing on the images displayed in real time, and correcting and fusing the spliced overlapping areas.
Further, the step of preprocessing in the step S1 includes:
step S101, video data and POS data of the unmanned aerial vehicle are sampled at fixed time, and a sampled single-frame image is scaled to a set scale and then stored;
step S102, extracting features of a single frame image;
step S103, matching the k frames nearest to the current frame image to form image matching pairs;
Step S104, performing feature matching by utilizing feature points among the images and filtering mismatching;
step S105, generating tracks using the matching relations among feature points, triangulating the generated tracks to produce new three-dimensional space points, and performing error adjustment on those points.
Further, the step S2 includes:
Step S201, triangulating feature points used by feature matching by using the feature matching result of the step S104 to generate a first 2D triangular mesh;
step S202, cutting a single frame image into a plurality of image blocks by using a first 2D triangular grid;
step S203, triangulating the three-dimensional space points generated in the step S105 by using a first 2D triangular mesh to generate a 3D triangular mesh;
Step S204, removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
step S205, generating DSM;
And S206, generating DOM and image four-point information and sending the DOM and image four-point information to the spherical model for real-time display.
Further, the step S3 includes:
step S301, directly splicing the plurality of DSMs generated in step S205 using the four-point information to form a complete DSM image;
step S302, performing smooth gradual change processing on the overlapped area of the complete DSM image;
step S303, directly splicing the DOMs generated in step S206 using the four-point information to form a complete DOM image;
and S304, performing smooth gradual change processing on the overlapped area of the complete DOM image.
Further, feature extraction is performed using the ORB method.
Compared with the traditional offline three-dimensional modelling method, the rapid splicing device and rapid splicing method of the invention require no hour-level processing time and achieve real-time performance. Traditional direct image-based stitching suffers from misalignment and distortion; the orthophoto obtained by the present method shows no misalignment or obvious distortion, reaching an accuracy close to that of three-dimensional reconstruction.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application. In the drawings:
fig. 1 is a schematic structural diagram of an unmanned aerial vehicle aerial image rapid splicing device;
fig. 2 is a step diagram of the method for quickly splicing aerial images of the unmanned aerial vehicle.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the application herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal" and the like indicate an azimuth or a positional relationship based on that shown in the drawings. These terms are only used to better describe the present application and its embodiments and are not intended to limit the scope of the indicated devices, elements or components to the particular orientations or to configure and operate in the particular orientations.
Also, some of the terms described above may be used to indicate other meanings in addition to orientation or positional relationships, for example, the term "upper" may also be used to indicate some sort of attachment or connection in some cases. The specific meaning of these terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the term "plurality" shall mean two or more.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The invention relates to a rapid splicing device for unmanned aerial vehicle aerial images, which is shown in figure 1. The system comprises a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives video data and POS data shot by a camera carried by the unmanned aerial vehicle, preprocesses the video data and the POS data and sends the preprocessed video data and POS data to the display processing module. The display processing module displays the preprocessed video data and POS data on the spherical model in real time, and prepares for splicing. And the post-processing module performs splicing processing on the images displayed in real time, and corrects and fuses the overlapping areas.
When the unmanned aerial vehicle is in flight, the acquired images usually carry matched POS data, which makes subsequent processing more convenient. POS data mainly comprise GPS data and IMU data, i.e. the exterior orientation elements used in oblique photogrammetry: latitude, longitude, elevation, heading angle (Phi), pitch angle (Omega) and roll angle (Kappa). Latitude, longitude and elevation are the GPS data, generally denoted X, Y, Z, and give the geographic position of the aircraft at the moment of exposure. The IMU data comprise the heading angle (Phi), the angle between the projection of the aircraft's longitudinal axis and true north; the pitch angle (Omega), the angle between the forward-pointing longitudinal axis of the fuselage and the ground plane; and the roll angle (Kappa), the angle between the zb axis of the body coordinate system and the vertical plane through the body xb axis, positive when the body rolls to the right and negative otherwise.
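As an illustration, the three IMU angles can be assembled into a body-to-world rotation matrix. The Z-Y-X (heading-pitch-roll) composition below is one common aerospace convention and a sketch only; the exact rotation order and axis signs used by a given POS system are an assumption that must be checked against its documentation.

```python
import numpy as np

def pos_rotation(heading, pitch, roll):
    """Body-to-world rotation from heading/pitch/roll (radians).

    Assumes a Z-Y-X (yaw-pitch-roll) composition; real POS systems
    may use a different rotation order or sign convention.
    """
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])   # heading about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

# A 90-degree heading rotation carries the body x axis onto the world y axis.
R = pos_rotation(np.pi / 2, 0.0, 0.0)
```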
Further, in a preferred embodiment of the present application, the preprocessing module includes:
The sampling module (new frame) receives video data and POS data of the unmanned aerial vehicle, samples them at fixed intervals, scales each sampled single-frame image to a set scale and stores it, and converts the format of the POS data.
The feature extraction module (feature extraction) extracts features from each single-frame image. To keep latency low, the faster ORB method is used, and the extracted feature points are rasterized so that they are distributed as uniformly as possible. ORB is short for Oriented FAST and Rotated BRIEF; it quickly creates feature vectors for keypoints in an image, which can then be used to identify objects in the image. FAST is the keypoint detection algorithm and BRIEF the descriptor creation algorithm. ORB first finds special regions in the image called keypoints: small prominent areas, such as corners, where pixel values change sharply from light to dark. It then computes a feature vector for each keypoint. The feature vectors created by ORB contain only 1s and 0s and are therefore called binary feature vectors; the order of the 1s and 0s depends on the keypoint and its surrounding pixel region. Each vector represents the intensity pattern around its keypoint, so multiple feature vectors together can identify a larger area or even a specific object in the image. ORB is extremely fast and, to some extent, robust to noise and to image transformations such as rotation and scaling.
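The rasterization step that spreads keypoints evenly can be sketched as a grid bucket filter: divide the image into cells and keep only the strongest keypoints in each cell. This is an illustrative sketch rather than the patent's exact procedure; the grid size and per-cell cap are assumed parameters.

```python
import numpy as np

def grid_filter(xy, scores, width, height, grid=4, per_cell=2):
    """Keep at most `per_cell` top-scoring keypoints in each grid cell.

    xy:     (N, 2) array of keypoint pixel coordinates
    scores: (N,)   array of detector responses (e.g. FAST scores)
    Returns the sorted indices of the surviving keypoints.
    """
    cols = np.minimum((xy[:, 0] * grid / width).astype(int), grid - 1)
    rows = np.minimum((xy[:, 1] * grid / height).astype(int), grid - 1)
    cell_id = rows * grid + cols
    keep = []
    for cell in np.unique(cell_id):
        idx = np.flatnonzero(cell_id == cell)
        best = idx[np.argsort(scores[idx])[::-1][:per_cell]]
        keep.extend(best.tolist())
    return sorted(keep)

# Ten keypoints crowded into one corner are thinned to the two strongest,
# while a lone keypoint elsewhere survives untouched.
xy = np.array([[1.0 + i, 1.0] for i in range(10)] + [[90.0, 90.0]])
scores = np.array([float(i) for i in range(10)] + [0.5])
kept = grid_filter(xy, scores, width=100, height=100)
```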
The space matching module (spatial match) uses GPS information and a kd tree to select the k frames nearest to the current frame, forming image matching pairs that are sent to the next feature matching stage.
And a feature matching module (feature match) for performing feature matching and filtering mismatching by using feature points between images.
The generating point cloud module (pointcloud) generates tracks from the matching relations among feature points, triangulates the generated tracks to produce new three-dimensional space points (point3D), and error-adjusts the points using the bundle adjustment method. A point cloud is a massive set of points expressing the spatial distribution and surface characteristics of a target under the same spatial reference system; once the spatial coordinates of the sampled surface points of an object are obtained, the resulting point set is called a "point cloud". Bundle adjustment jointly optimizes the camera motion matrices and the three-dimensional structure of the scene. Its greatest strength is that it can handle missing data and provides a true maximum likelihood estimate.
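The triangulation that turns a track into a point3D can be illustrated with the standard linear (DLT) two-view method: each observation contributes two rows to a homogeneous system whose null-space vector is the 3D point. A minimal sketch, assuming normalized image coordinates and known projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: (3, 4) camera projection matrices
    x1, x2: (2,)   observed image coordinates in each view
    Returns the 3D point in inhomogeneous coordinates.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # null-space vector = homogeneous solution
    return X[:3] / X[3]

# Two axis-aligned cameras one unit apart observe the point (0.5, 0.2, 4).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
X = triangulate(P1, P2, x1, x2)
```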
Further, in a preferred embodiment of the present application, the presentation processing module includes:
The first 2D triangular mesh generation module (2D mesh1) mainly uses the matching result of the feature matching module to triangulate feature points used by feature matching to generate a 2D triangular mesh.
The 3D triangle mesh generation module (3D mesh) mainly uses pointcloud and 2D Mesh1 results to triangulate the point3D point generated by the current frame to generate a 3D triangle mesh.
The second 2D triangular Mesh generating module (2D Mesh2) mainly removes the dimension of the elevation of the 3D triangular Mesh generated by the 3D Mesh, only retains the dimension information of x and y, and generates a new 2D triangular Mesh.
The image segmentation module (image patch) cuts the single-frame image into image blocks using the first 2D triangular mesh; each image block is the size of one mesh cell.
The DSM generation module (local DSM) generates a DSM elevation map from the 3D triangular mesh; pixels that have no corresponding elevation are interpolated within their triangle. DSM (Digital Surface Model) refers to a ground elevation model that includes the heights of surface buildings, bridges, trees and the like.
And the DOM generation module (Local Ortho Mosaic) is mainly used for generating digital orthographic image DOM and image four-point information by using the results of the 2D Mesh2 and IMAGE PATCH and sending the digital orthographic image DOM and the image four-point information to the spherical model for real-time display.
Further, in a preferred embodiment of the present application, the post-processing module includes:
And the DSM splicing module is used for directly splicing the plurality of DSMs generated by the DSM generating module by using the four-point information to form a complete DSM image.
The complete DSM processing module: after multiple DSMs are spliced directly, overlapping areas remain in which the colour difference changes abruptly. Adjacent DSMs are downsampled with a Gaussian pyramid, and the two are then weighted and superposed in each frequency band using smoothly varying weights.
And the DOM splicing module is used for directly splicing the plurality of DOMs generated by the DOM generating module by using the four-point information to form a complete DOM image.
The complete DOM processing module: after multiple DOMs are spliced directly, overlapping areas remain in which the colour difference changes abruptly. Adjacent DOMs are downsampled with a Gaussian pyramid, and the two are then weighted and superposed in each frequency band using smoothly varying weights.
The invention discloses a rapid splicing method of unmanned aerial vehicle aerial images, which comprises the following steps:
and S1, the preprocessing module receives the video data and the POS data and preprocesses the video data and the POS data.
The video data and POS data are captured by a camera mounted on the unmanned aerial vehicle.
Further, in a preferred embodiment of the present application, the preprocessing module preprocesses the video data and the POS data includes:
And step S101, regularly sampling video data and POS data of the unmanned aerial vehicle, scaling the sampled single-frame image to a set scale, and storing the scaled single-frame image.
The POS data need format conversion: some are stored in the WGS84 coordinate system and some in the CGCS2000 coordinate system, and the column order of latitude and longitude may differ, so a number of data preprocessing operations are needed to ensure uniform input data.
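One part of this unification is mapping geodetic coordinates into a local metric frame. The equirectangular approximation below is a deliberately simplified sketch, valid only over small areas and ignoring ellipsoid flattening; a production system would use a proper WGS84/CGCS2000 projection library instead.

```python
import numpy as np

R_EARTH = 6378137.0  # WGS84 semi-major axis, metres

def latlon_to_local(lat, lon, lat0, lon0):
    """Approximate east/north offsets in metres from a reference point.

    Equirectangular small-area approximation -- an illustration only,
    not a substitute for a real map projection.
    """
    east = np.radians(lon - lon0) * np.cos(np.radians(lat0)) * R_EARTH
    north = np.radians(lat - lat0) * R_EARTH
    return east, north

# 0.001 degrees of latitude is roughly 111 metres on the ground.
east, north = latlon_to_local(30.001, 120.0, 30.0, 120.0)
```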
And step S102, extracting the characteristics of the single frame image.
In order to cope with the characteristic of low time delay, an ORB method with higher speed is used for extracting the characteristics, and the extracted characteristic points are subjected to rasterization processing, so that the characteristic points can be distributed uniformly as much as possible.
Step S103, matching the k frames nearest to the current frame image to form image matching pairs.
Using the GPS information provided by the unmanned aerial vehicle and a kd tree, the k frames nearest to the current frame are matched into image pairs and sent to the next feature matching stage. k is the number of images that have an overlapping area with the current image; it is not a fixed value, is specified to be at least 2, and is adjusted dynamically according to the density of the acquired data.
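The spatial matching step can be sketched as a k-nearest-neighbour query over frame GPS positions. For clarity this sketch uses a brute-force distance sort; at scale one would use a k-d tree (e.g. `scipy.spatial.cKDTree`), as the text describes. The local-metre coordinates are assumed inputs.

```python
import numpy as np

def nearest_frames(positions, cur, k):
    """Indices of the k frames whose positions are nearest frame `cur`.

    positions: (N, 2) array of frame positions in local metres.
    Brute-force stand-in for the kd-tree query described in the text.
    """
    d = np.linalg.norm(positions - positions[cur], axis=1)
    order = np.argsort(d)
    return [int(i) for i in order if i != cur][:k]

# Frames along a flight line: the two neighbours of frame 2 are frames 1 and 3.
positions = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0], [30.0, 0.0], [80.0, 0.0]])
pairs = nearest_frames(positions, cur=2, k=2)
```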
And step S104, performing feature matching by utilizing feature points among the images and filtering mismatching.
Feature extraction uses SIFT, SURF, ORB and other open-source algorithms, configured according to the needs of the scene. For example, in areas with weak features such as mountains, grassland and water surfaces, the more robust SIFT feature extraction algorithm can be used; in areas with distinct features such as cities, the faster ORB algorithm or the SURF algorithm can be used.
Feature matching likewise uses open-source algorithms and must correspond to the feature extraction method used: descriptors such as SIFT and SURF are floating-point vectors, so the Euclidean distance can be used to compare the similarity of two feature points; binary descriptors such as ORB use the Hamming distance instead.
The mismatch filtering likewise uses open-source algorithms. When the aerial area is highly flat, such as lawns or water surfaces, the RANSAC algorithm estimates a homography matrix from the feature matching result and filters mismatches by minimum reprojection error. When the area has low flatness, such as residential areas, RANSAC estimates an essential matrix or fundamental matrix from the feature matching result and filters mismatches using the Sampson distance.
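The homography-plus-RANSAC filter for flat scenes can be sketched as follows: a 4-point DLT fit inside a RANSAC loop, keeping the matches whose reprojection error falls under a threshold. This is an illustrative minimal implementation, not the patent's exact configuration; in practice one would call an open-source routine such as OpenCV's `findHomography`.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of H mapping src -> dst from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)

def reproj_error(H, src, dst):
    """Per-match Euclidean reprojection error under H."""
    p = np.c_[src, np.ones(len(src))] @ H.T
    p = p[:, :2] / p[:, 2:3]
    return np.linalg.norm(p - dst, axis=1)

def ransac_homography(src, dst, thresh=2.0, iters=300, seed=0):
    """Boolean mask of matches consistent with the best sampled homography."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[sample], dst[sample])
        inliers = reproj_error(H, src, dst) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 20 correspondences under a known homography plus 5 gross mismatches.
rng = np.random.default_rng(1)
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.2, -3.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 100, (25, 2))
hp = np.c_[src, np.ones(25)] @ H_true.T
dst = hp[:, :2] / hp[:, 2:3]
dst[20:] += rng.uniform(30, 60, (5, 2))   # corrupt the last five matches
mask = ransac_homography(src, dst)
```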
Step S105, generating tracks from the matching relations among feature points, triangulating the generated tracks to produce new three-dimensional space points (point3D), and performing error adjustment on those points.
Assume there are three images A, B and C, with feature points a, b and c respectively. After feature matching, the two matching pairs a-b and b-c are produced, and the feature points a, b and c form one track. A track has the following properties:
1) the track length is more than or equal to 2;
2) Each feature point in track comes from a different image;
3) All feature points in a track point to the same object in the real world and are therefore also called homonymous points.
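Track construction from pairwise matches amounts to finding connected components of feature observations; a minimal union-find sketch (the data layout is assumed for illustration):

```python
def build_tracks(matches):
    """Group pairwise feature matches into tracks.

    matches: iterable of ((img_i, feat_i), (img_j, feat_j)) pairs.
    Returns tracks of length >= 2 whose observations come from
    distinct images, as required of a valid track.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in matches:
        parent[find(a)] = find(b)

    groups = {}
    for obs in parent:
        groups.setdefault(find(obs), []).append(obs)

    tracks = []
    for obs_list in groups.values():
        images = {img for img, _ in obs_list}
        if len(obs_list) >= 2 and len(images) == len(obs_list):
            tracks.append(sorted(obs_list))
    return tracks

# The worked example from the text: matches a-b and b-c on images A, B, C
# merge into the single track {a, b, c}.
tracks = build_tracks([((0, "a"), (1, "b")), ((1, "b"), (2, "c"))])
```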
Error adjustment of three-dimensional space points is performed using a bundle adjustment method (bundle adjustment).
And step S2, displaying the preprocessed video data and the preprocessed POS data on the spherical model in real time, and preparing for splicing.
Further, in a preferred embodiment of the present application, step S2 includes:
step S201, triangulating feature points used by feature matching by using the feature matching result of the step S104, and generating a first 2D triangular mesh.
Step S202, cutting a single frame image into a plurality of image blocks by using a first 2D triangular grid.
The size of each image block is a grid size.
Step S203, triangulating the three-dimensional space points of the current frame image generated in the step S105 by using the first 2D triangular mesh to generate a 3D triangular mesh.
And S204, removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh.
The method mainly removes the dimension of the elevation of the 3D triangular Mesh generated by the 3D Mesh, and only retains the information of the dimension of x and y.
Step S205, generating a digital surface model DSM.
The DSM elevation map is generated from the 3D triangular mesh; pixels that have no corresponding elevation are interpolated within their triangle. DSM (Digital Surface Model) refers to a ground elevation model that includes the heights of surface buildings, bridges, trees and the like.
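The per-pixel interpolation inside a triangle can be sketched with barycentric coordinates: solve for the weights that express the pixel as a combination of the triangle's vertices, then apply the same weights to the vertex elevations. A minimal sketch:

```python
import numpy as np

def interpolate_elevation(tri_xy, tri_z, px, py):
    """Elevation at (px, py) by barycentric interpolation in one triangle.

    tri_xy: (3, 2) triangle vertex coordinates
    tri_z:  (3,)   vertex elevations
    """
    # Columns of A are the homogeneous vertices; solving A w = p gives
    # the barycentric weights of the query point.
    A = np.vstack([np.asarray(tri_xy, dtype=float).T, np.ones(3)])
    w = np.linalg.solve(A, np.array([px, py, 1.0]))
    return float(w @ np.asarray(tri_z, dtype=float))

# At the centroid the three weights are equal, so the result is the mean.
tri_xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tri_z = [0.0, 3.0, 6.0]
z = interpolate_elevation(tri_xy, tri_z, 1 / 3, 1 / 3)
```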
And S206, generating digital orthographic image DOM and image four-point information, and sending the DOM and the image four-point information to a spherical model for real-time display.
The digital orthographic image DOM is generated mainly by using the result of the second 2D triangular mesh and the image segmentation module.
And S3, performing splicing treatment on the images displayed in real time, and correcting and fusing the spliced overlapping areas.
Further, in a preferred embodiment of the present application, step S3 includes:
Step S301, directly splicing the plurality of DSMs generated in step S205 using the four-point information to form a complete DSM image.
Step S302, performing smooth gradual change processing on the overlapped area of the complete DSM image.
Since multiple DSMs are spliced directly, overlap areas remain in which the colour difference changes abruptly. Adjacent DSMs are downsampled with a Gaussian pyramid, and the two are then weighted and superposed in each frequency band using smoothly varying weights.
The weighted superposition with smoothly varying weights uses an open-source algorithm common in the art and is not described in detail herein.
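The frequency-band blending described above (Laplacian pyramid blending in the literature) can be sketched in pure NumPy: build Laplacian pyramids of both images and a Gaussian pyramid of the weight mask, blend band by band, and collapse. This is a minimal two-image sketch, not the exact open-source routine the text refers to.

```python
import numpy as np

K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # binomial low-pass kernel

def blur(img):
    """Separable 5-tap blur with edge replication."""
    p = np.pad(img, 2, mode="edge")
    p = np.apply_along_axis(lambda r: np.convolve(r, K, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, K, mode="valid"), 0, p)

def down(img):
    return blur(img)[::2, ::2]

def up(img, shape):
    big = np.kron(img, np.ones((2, 2)))[: shape[0], : shape[1]]
    return blur(big)

def laplacian_pyramid(img, levels):
    bands = []
    for _ in range(levels):
        small = down(img)
        bands.append(img - up(small, img.shape))
        img = small
    bands.append(img)  # low-frequency residual
    return bands

def collapse(bands):
    img = bands[-1]
    for band in reversed(bands[:-1]):
        img = up(img, band.shape) + band
    return img

def blend(a, b, mask, levels=2):
    """Blend a and b; mask gives a's weight, smoothed per frequency band."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = [mask]
    for _ in range(levels):
        gm.append(down(gm[-1]))
    return collapse([x * m + y * (1 - m) for x, y, m in zip(la, lb, gm)])

# A dark tile and a bright tile joined with a hard half-and-half mask
# blend into a smooth transition instead of an abrupt seam.
a = np.zeros((16, 16))
b = np.ones((16, 16))
mask = np.zeros((16, 16))
mask[:, :8] = 1.0
out = blend(a, b, mask)
```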
Step S303, directly splicing the DOMs generated in step S206 using the four-point information to form a complete DOM image.
And S304, performing smooth gradual change processing on the overlapped area of the complete DOM image.
Since multiple DOMs are spliced directly, overlap areas remain in which the colour difference changes abruptly. Adjacent DOMs are downsampled with a Gaussian pyramid, and the two are then weighted and superposed in each frequency band using smoothly varying weights.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (5)
1. The unmanned aerial vehicle aerial image rapid splicing device is characterized by comprising a preprocessing module, a display processing module and a post-processing module; the preprocessing module receives video data and POS data shot by the unmanned aerial vehicle, preprocesses the video data and the POS data and sends the preprocessed video data and POS data to the display processing module; the display processing module displays the preprocessed video data and POS data on a spherical model in real time, so as to prepare for splicing; the post-processing module performs splicing processing on the images displayed in real time, and corrects and fuses the overlapping areas;
the display processing module comprises:
the first 2D triangular mesh generation module is used for triangulating the feature points by using the matching result of the feature matching module to generate a first 2D triangular mesh;
the 3D triangular mesh generation module is used for triangulating the three-dimensional space points to generate a 3D triangular mesh;
the second 2D triangular mesh generation module is used for removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
the image segmentation module is used for cutting a single frame image into a plurality of image blocks by using the first 2D triangular mesh;
the DSM generation module is used for generating a DSM elevation map by using the 3D triangular mesh;
the DOM generation module is used for generating a digital DOM and image four-point information by using the second 2D triangular mesh and the image segmentation result;
the post-processing module comprises:
the DSM splicing module is used for directly splicing the plurality of DSMs generated by the DSM generation module by using the four-point information to form a complete DSM image;
the complete DSM processing module is used for performing smooth gradual change processing on the overlap area of the complete DSM image;
the DOM splicing module is used for directly splicing the DOMs generated by the DOM generation module by using the four-point information to form a complete DOM image;
and the complete DOM processing module is used for performing smooth gradual change processing on the overlap area of the complete DOM image.
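The mesh-generation modules of claim 1 revolve around Delaunay triangulation. A sketch of the relation between the 3D mesh and the "second 2D mesh" obtained by removing the elevation dimension, assuming SciPy is available and using a handful of hypothetical reconstructed space points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical reconstructed 3D space points (x, y, elevation).
pts3d = np.array([[0.0, 0.0, 10.0],
                  [1.0, 0.0, 12.0],
                  [0.0, 1.0, 11.0],
                  [1.0, 1.0, 13.0],
                  [0.5, 0.5, 14.0]])

# "Removing the elevation dimension": project to the ground plane. Triangulating
# the projection gives a planar mesh whose triangles index the same vertices as
# the 3D mesh, so the DOM and DSM can share one triangle topology.
pts2d = pts3d[:, :2]
mesh2d = Delaunay(pts2d)          # second 2D triangular mesh
triangles = mesh2d.simplices      # index triples into pts3d / pts2d
```

Each 2D triangle still carries, through its vertex indices, the three elevations needed to rasterize that facet of the DSM elevation map.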
2. The rapid splicing device of unmanned aerial vehicle aerial images according to claim 1, wherein the preprocessing module comprises:
the sampling module is used for sampling video data and POS data of the unmanned aerial vehicle at fixed intervals, and scaling each sampled single-frame image to a set scale before storing it;
the feature extraction module is used for extracting features of the single-frame image;
the space matching module is used for matching out, by using the GPS information and a kd tree, the k frame images nearest to the current frame image to form image matching pairs;
the feature matching module is used for performing feature matching among the images by utilizing the feature points and filtering out mismatches;
and the point cloud generation module is used for generating tracks by utilizing the matching relations among the feature points, triangulating the generated tracks to generate new three-dimensional space points, and performing error adjustment on the three-dimensional space points.
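The space matching module of claim 2 can be sketched as a kd-tree query over per-frame GPS positions. This assumes SciPy's `cKDTree`; the patent does not name an implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_pairs(gps, k=3):
    """For each frame, find the k spatially nearest other frames by GPS
    position and emit undirected image matching pairs."""
    tree = cKDTree(gps)
    # Query k+1 neighbours because the nearest neighbour of a point is itself.
    _, idx = tree.query(gps, k=k + 1)
    pairs = set()
    for i, nbrs in enumerate(idx):
        for j in nbrs[1:]:
            pairs.add((min(i, int(j)), max(i, int(j))))
    return sorted(pairs)
```

Restricting feature matching to these spatial neighbours is what keeps the pipeline fast: candidate pairs grow linearly with the number of frames instead of quadratically.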
3. The unmanned aerial vehicle aerial image rapid splicing method is characterized by comprising the following steps of:
Step S1, receiving video data and POS data and preprocessing them;
Step S2, displaying the preprocessed video data and POS data in real time;
Step S3, performing splicing processing on the images displayed in real time, and correcting and fusing the spliced overlap areas;
The step S2 includes:
Step S201, triangulating the feature points used in feature matching by using the feature matching result of step S104 to generate a first 2D triangular mesh;
Step S202, cutting a single frame image into a plurality of image blocks by using the first 2D triangular mesh;
Step S203, triangulating the three-dimensional space points generated in step S105 by using the first 2D triangular mesh to generate a 3D triangular mesh;
Step S204, removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
Step S205, generating a DSM elevation map by using the 3D triangular mesh;
Step S206, generating digital DOM and image four-point information by using the second 2D triangular mesh and the image segmentation result, and sending the digital DOM and image four-point information to a spherical model for real-time display;
The step S3 includes:
Step S301, directly splicing the plurality of DSMs generated in step S205 by using the four-point information to form a complete DSM image;
Step S302, performing smooth gradual change processing on the overlap area of the complete DSM image;
Step S303, directly splicing the DOMs generated in step S206 by using the four-point information to form a complete DOM image;
Step S304, performing smooth gradual change processing on the overlap area of the complete DOM image.
4. The method for rapid stitching of aerial images of an unmanned aerial vehicle according to claim 3, wherein the step of preprocessing in step S1 comprises:
Step S101, sampling video data and POS data of the unmanned aerial vehicle at fixed intervals, and scaling each sampled single-frame image to a set scale before storing it;
Step S102, extracting features of the single-frame image;
Step S103, matching out the k frame images nearest to the current frame image to form image matching pairs;
Step S104, performing feature matching among the images by utilizing the feature points and filtering out mismatches;
Step S105, generating tracks by utilizing the matching relations among the feature points, triangulating the generated tracks to generate new three-dimensional space points, and performing error adjustment on the three-dimensional space points.
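The "triangulation of a track" in step S105 is conventionally the linear (DLT) intersection of viewing rays. A NumPy sketch for a two-view track, with hypothetical projection matrices standing in for the calibrated cameras:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one track: intersect the rays of a
    feature observed at pixel x1 under camera P1 and at x2 under P2."""
    # Each observation contributes two homogeneous linear constraints on X.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # The 3D point is the null vector of A (smallest singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Points recovered this way are then refined jointly with the camera poses in the error-adjustment (bundle adjustment) step, which this sketch omits.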
5. The method for quickly stitching aerial images of an unmanned aerial vehicle according to claim 4, wherein the feature extraction is performed by using an ORB method.
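ORB descriptors are 256-bit binary strings compared by Hamming distance, so the feature matching and mismatch filtering of step S104 can be sketched as brute-force Hamming matching with Lowe's ratio test. A real system would use OpenCV's ORB detector and matcher; the packed `uint8` descriptors here are synthetic stand-ins:

```python
import numpy as np

def hamming_match(desc_a, desc_b, ratio=0.8):
    """Brute-force Hamming matching of binary (ORB-style) descriptors with
    a ratio test that filters ambiguous, likely wrong, matches."""
    # Pairwise Hamming distances between rows of packed uint8 descriptors.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Keep the match only if clearly better than the runner-up.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches
```

Surviving matches would still pass through a geometric filter (e.g. RANSAC on a homography) before triangulation; the ratio test alone only removes locally ambiguous candidates.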
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011316356.8A CN112288637B (en) | 2020-11-19 | 2020-11-19 | Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112288637A CN112288637A (en) | 2021-01-29 |
CN112288637B true CN112288637B (en) | 2024-10-25 |
Family
ID=74399670
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011316356.8A Active CN112288637B (en) | 2020-11-19 | 2020-11-19 | Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112288637B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113012084A (en) * | 2021-03-04 | 2021-06-22 | 中煤(西安)航测遥感研究院有限公司 | Unmanned aerial vehicle image real-time splicing method and device and terminal equipment |
CN114200958A (en) * | 2021-11-05 | 2022-03-18 | 国能电力技术工程有限公司 | Automatic inspection system and method for photovoltaic power generation equipment |
CN114170306B (en) * | 2021-11-17 | 2022-11-04 | 埃洛克航空科技(北京)有限公司 | Image attitude estimation method, device, terminal and storage medium |
CN114359045A (en) * | 2021-12-07 | 2022-04-15 | 广州极飞科技股份有限公司 | Image data processing method and processing device thereof, and unmanned equipment |
CN116894870A (en) * | 2023-08-03 | 2023-10-17 | 成都纵横大鹏无人机科技有限公司 | An image target positioning method, system, electronic device and storage medium |
CN116883251B (en) * | 2023-09-08 | 2023-11-17 | 宁波市阿拉图数字科技有限公司 | Image orientation splicing and three-dimensional modeling method based on unmanned aerial vehicle video |
CN118632072B (en) * | 2024-05-10 | 2025-02-18 | 北京卓视智通科技有限责任公司 | Multi-video cropping and splicing method, system, electronic equipment and storage medium |
CN118736128A (en) * | 2024-06-30 | 2024-10-01 | 中国人民解放军国防科技大学 | A fast 3D target modeling system based on UAV images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106485655A (en) * | 2015-09-01 | 2017-03-08 | 张长隆 | A kind of taken photo by plane map generation system and method based on quadrotor |
CN107220926A (en) * | 2016-03-22 | 2017-09-29 | 中国科学院遥感与数字地球研究所 | The quick joining method of unmanned plane image based on KD trees and global BFS |
CN110310248A (en) * | 2019-08-27 | 2019-10-08 | 成都数之联科技有限公司 | A kind of real-time joining method of unmanned aerial vehicle remote sensing images and system |
CN111693025A (en) * | 2020-06-12 | 2020-09-22 | 深圳大学 | Remote sensing image data generation method, system and equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6757445B1 (en) * | 2000-10-04 | 2004-06-29 | Pixxures, Inc. | Method and apparatus for producing digital orthophotos using sparse stereo configurations and external models |
US7778491B2 (en) * | 2006-04-10 | 2010-08-17 | Microsoft Corporation | Oblique image stitching |
CN109949399B (en) * | 2019-03-15 | 2023-07-14 | 西安因诺航空科技有限公司 | Scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial image |
CN110648398B (en) * | 2019-08-07 | 2020-09-11 | 武汉九州位讯科技有限公司 | Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data |
CN111060075B (en) * | 2019-12-10 | 2021-01-12 | 中国人民解放军军事科学院国防科技创新研究院 | Local area terrain ortho-image rapid construction method and system based on unmanned aerial vehicle |
- 2020-11-19 CN CN202011316356.8A patent/CN112288637B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112288637B (en) | Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method | |
Xiang et al. | Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects | |
KR101165523B1 (en) | Geospatial modeling system and related method using multiple sources of geographic information | |
Barazzetti et al. | True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach | |
CN111383335B (en) | Crowd funding photo and two-dimensional map combined building three-dimensional modeling method | |
CN119904592A (en) | Three-dimensional reconstruction and visualization method of news scenes based on multi-source remote sensing data | |
CN108168521A (en) | One kind realizes landscape three-dimensional visualization method based on unmanned plane | |
US11972507B2 (en) | Orthophoto map generation method based on panoramic map | |
CN112529498B (en) | Warehouse logistics management method and system | |
KR102587445B1 (en) | 3d mapping method with time series information using drone | |
KR100904078B1 (en) | System and method for generating 3D spatial information using image registration of aerial photographs | |
WO2022156652A1 (en) | Vehicle motion state evaluation method and apparatus, device, and medium | |
CN118762302A (en) | A roadbed deformation monitoring method and system | |
Ioli et al. | Deep learning low-cost photogrammetry for 4D short-term glacier dynamics monitoring | |
KR20190004086A (en) | Method for generating three-dimensional object model | |
CN111683221A (en) | Real-time video monitoring method and system of natural resources embedded with vector red line data | |
CN107784666B (en) | Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images | |
Krauß et al. | Generation of coarse 3D models of urban areas from high resolution stereo satellite images | |
Feng et al. | Registration of multitemporal GF-1 remote sensing images with weighting perspective transformation model | |
CN108335321B (en) | Automatic ground surface gravel size information extraction method based on multi-angle photos | |
CN114758087B (en) | Method and device for constructing urban information model | |
Treible et al. | Learning dense stereo matching for digital surface models from satellite imagery | |
CN116612184A (en) | A Camera Pose Determination Method Based on UAV Vision | |
Wu et al. | Building facade reconstruction using crowd-sourced photos and two-dimensional maps | |
Shishido et al. | Accurate overlapping method of ultra-long interval time-lapse images for world heritage site investigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||