
CN120318464A - A high-precision pure visual 3D reconstruction method and system - Google Patents

A high-precision pure visual 3D reconstruction method and system

Info

Publication number
CN120318464A
Authority
CN
China
Prior art keywords
dimensional
feature
dimensional image
reconstruction
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510797162.0A
Other languages
Chinese (zh)
Inventor
刘家祥
刘家瑞
吴博剑
李春桃
杨溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202510797162.0A
Publication of CN120318464A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a high-precision pure-vision three-dimensional reconstruction method and system, belonging to the technical field of three-dimensional reconstruction. The method comprises the following steps: acquiring multi-angle high-resolution two-dimensional images of an object; extracting feature points from the two-dimensional images and matching the feature points; removing, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovering, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional images; preprocessing the two-dimensional images and generating a dense point cloud based on the processed images; and reconstructing a curved surface based on the dense point cloud to obtain a three-dimensional surface. The invention supports incremental addition: after reconstruction from the initially given two-dimensional images is completed, further two-dimensional images can still be added to increase the accuracy or breadth of the initial reconstruction.

Description

High-precision pure vision three-dimensional reconstruction method and system
Technical Field
The invention belongs to the technical field of three-dimensional reconstruction, and particularly relates to a high-precision pure vision three-dimensional reconstruction method and system.
Background
Three-dimensional reconstruction has wide application, including volume measurement, stereoscopic display, 3D printing, and object model building. Vision-based three-dimensional reconstruction mainly obtains real scene information through a vision sensor and extracts the three-dimensional information of an object using vision processing algorithms or projection models; it is a computer technique for recovering three-dimensional information from two-dimensional projections. These techniques can be classified into active vision and passive vision according to the data acquisition mode. Active vision projects a structured light source onto the scene and analyzes the projected pattern to locate and measure the target. It depends little on the environment and can achieve high-precision reconstruction under controlled conditions, but the equipment is generally expensive and places certain demands on hardware and the experimental environment. Passive vision relies on the reflection of external light sources to perform three-dimensional measurement. Compared with active vision, it depends less on environment and equipment, costs less, and has a wider range of application. Passive vision methods can be classified into monocular, binocular, and multi-view vision according to the number of cameras.
Currently, the structure-from-motion and multi-view stereo (SFM-MVS) pipeline is the mainstream solution for three-dimensional reconstruction from multiple views. Compared with the neural radiance field (NeRF) methods that have emerged in recent years, SFM-MVS has the notable advantages of low computational cost and high efficiency, can generate high-density three-dimensional point clouds, and recovers scene information more faithfully. However, because SFM-MVS is limited in its handling of lighting, it is better suited to three-dimensional reconstruction of indoor scenes. The pipeline consists of sparse reconstruction based on a structure-from-motion (SFM) procedure and dense reconstruction based on a multi-view stereo (MVS) procedure. Many established three-dimensional reconstruction packages (e.g., Agisoft PhotoScan/Metashape, 3DF Zephyr, ContextCapture, and Pix4Dmapper) and open-source tools (e.g., COLMAP, OpenMVG, and VisualSFM) adopt the SFM-MVS pipeline; however, the system complexity, cost, and difficulty of data management of SFM-MVS also increase significantly. Moreover, because no adjacency information between the input pictures is supplied, objects with special textures such as repeated patterns are prone to erroneous reconstruction.
Disclosure of Invention
The invention aims to remedy the defects in the prior art and provides the following scheme:
A high-precision pure vision three-dimensional reconstruction method comprises the following steps:
Acquiring a multi-angle high-resolution two-dimensional image of an object;
Extracting feature points of the two-dimensional image, and matching the feature points;
Removing, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovering, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image;
Preprocessing the two-dimensional image, and generating a dense point cloud based on the processed image;
and carrying out curved surface reconstruction based on the dense point cloud to obtain a three-dimensional surface.
Preferably, the method for extracting the feature points comprises the following steps:
detecting image feature points in the two-dimensional image by using a SIFT feature detector;
and encoding the image feature points by using SIFT feature descriptors to obtain encoded feature points.
Preferably, the method for generating the dense point cloud comprises the following steps:
mapping all points in one two-dimensional image into a coordinate system of the other two-dimensional image, and generating an initial depth map;
performing depth estimation on the initial depth map to identify occluded pixels;
and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
Preferably, the curved surface reconstruction method comprises the following steps:
constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors;
and generating the three-dimensional surface from the implicit function by using an implicit surface mesh generation algorithm.
The invention also provides a high-precision pure-vision three-dimensional reconstruction system, which adopts the reconstruction method of any one of the above and comprises a two-dimensional image acquisition module, a feature matching module, a feature point processing module, a point cloud generation module, and a surface reconstruction module;
The two-dimensional image acquisition module is used for acquiring a multi-angle high-resolution two-dimensional image of an object;
the feature matching module is used for extracting feature points of the two-dimensional image and matching the feature points;
The feature point processing module eliminates, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovers, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image;
The point cloud generation module is used for preprocessing the two-dimensional image and generating dense point cloud based on the processed image;
and the surface reconstruction module performs curved surface reconstruction based on the dense point cloud to obtain a three-dimensional surface.
Preferably, in the feature matching module, the feature point extraction process includes:
detecting image feature points in the two-dimensional image by using a SIFT feature detector;
and encoding the image feature points by using SIFT feature descriptors to obtain encoded feature points.
Preferably, in the point cloud generating module, the process of generating the dense point cloud includes:
mapping all points in one two-dimensional image into a coordinate system of the other two-dimensional image, and generating an initial depth map;
performing depth estimation on the initial depth map to identify occluded pixels;
and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
Preferably, the workflow of the surface reconstruction module includes:
constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors;
and generating the three-dimensional surface from the implicit function by using an implicit surface mesh generation algorithm.
Compared with the prior art, the invention has the beneficial effects that:
The invention supports incremental addition: after reconstruction from the initially given two-dimensional images is finished, further two-dimensional images can be added continuously to increase the accuracy or breadth of the initial reconstruction;
Secondly, the invention supports progressive reconstruction: a few two-dimensional images can first be used for an overall or regional reconstruction, and the result can then be refined further or the reconstructed region expanded until reconstruction is complete;
Thirdly, the method can additionally accept a two-dimensional feature image for extracting feature points of the two-dimensional image, which gives better results than the aforementioned software when the object has a single tone or a repeated pattern;
Fourthly, the invention supports user-defined reference pictures, and can still reconstruct accurately when the pattern of the reconstruction target repeats;
and fifthly, after sparse reconstruction is completed, the two-dimensional image matching operation is executed again using the estimated camera poses and sparse point information to obtain an optimal feature point matching result, thereby enhancing the reconstruction effect of the dense reconstruction stage (a sketch of such pose-guided re-matching follows).
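For illustration only, a minimal sketch of how such pose-guided re-matching could look in Python with OpenCV, assuming the fundamental matrix F recovered during sparse reconstruction; the function name and the pixel tolerance are the editor's assumptions, not part of the disclosed method:

```python
# Hypothetical sketch: re-filter candidate matches with the epipolar
# constraint implied by the estimated camera geometry (step five above).
import cv2
import numpy as np

def guided_rematch(pts1, pts2, F, max_epi_dist=1.5):
    """Keep only matches whose points lie near the epipolar lines
    induced by the fundamental matrix F estimated during sparse SFM.

    pts1, pts2: (N, 2) float32 arrays of matched pixel coordinates.
    F: 3x3 fundamental matrix. max_epi_dist: tolerance in pixels (assumed).
    """
    # Epipolar line in image 2 for each point of image 1: l2 = F @ [x1, y1, 1]
    lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    lines2 = lines2.reshape(-1, 3)
    h = np.hstack([pts2, np.ones((len(pts2), 1), dtype=np.float32)])
    # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2)
    dist = np.abs(np.sum(lines2 * h, axis=1)) / np.linalg.norm(lines2[:, :2], axis=1)
    keep = dist < max_epi_dist
    return pts1[keep], pts2[keep]
```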
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments are briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
SFM (structure from motion) is a technique for reconstructing a three-dimensional scene from multiple two-dimensional images. The SFM process generates a sparse three-dimensional point cloud model; because its point density is low, further dense estimation is needed to recover a high-density three-dimensional point cloud. Dense reconstruction computes, pixel by pixel, the three-dimensional point coordinates corresponding to each pixel in the image, given the camera poses, thereby generating a dense three-dimensional point cloud of the scene's object surfaces. This process is typically achieved by MVS (multi-view stereo) methods.
By combining SFM and MVS, the camera poses and sparse point cloud information obtained in the SFM stage can be fully exploited to improve the precision of the dense point cloud generated by the MVS stage, finally yielding a complete, high-precision three-dimensional reconstruction pipeline.
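The patent does not name an implementation, but the same SFM-to-MVS chaining can be sketched with the open-source COLMAP Python bindings (pycolmap); the high-level function names below assume a recent pycolmap release and are not prescribed by the invention:

```python
# Sketch of the combined SFM -> MVS pipeline with pycolmap (assumed API).
import pycolmap

db_path, image_dir, sparse_dir = "colmap.db", "images/", "sparse/"

pycolmap.extract_features(db_path, image_dir)   # SIFT features per image
pycolmap.match_exhaustive(db_path)              # pairwise feature matching
maps = pycolmap.incremental_mapping(db_path, image_dir, sparse_dir)
print(maps[0].summary())                        # camera poses + sparse points
# Dense MVS (patch-match stereo + depth fusion) additionally requires a
# CUDA-enabled COLMAP build and is omitted from this sketch.
```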
Example 1
In this embodiment, as shown in fig. 1, a high-precision pure-vision three-dimensional reconstruction method includes the following steps:
S1, acquiring a multi-angle high-resolution two-dimensional image of an object.
S2, extracting feature points of the two-dimensional image, and performing feature point matching on the obtained feature points by using a K-NN method.
The feature point extraction method comprises detecting image feature points in the two-dimensional image with a SIFT feature detector, and encoding the image feature points with SIFT feature descriptors to obtain encoded feature points.
In the present embodiment, feature point (also referred to as interest point or key point) extraction is the core step of the entire SFM (structure from motion) pipeline; it aims to extract representative feature points from each image and compute their descriptors. A feature detector is an algorithm for detecting feature points in an image, typically identifying salient structures such as corners, blobs, edges, junctions, and lines. This embodiment uses a Scale-Invariant Feature Transform (SIFT) feature detector to extract image features. The SIFT detector reliably identifies key points across different scales, and therefore remains stable under changes in size and viewing angle.
After feature detection, the extracted feature points are further encoded by descriptors describing the pattern of pixels around each point, a process called feature description. A feature descriptor algorithm encodes the detected feature points in a mathematical representation so that the same feature points can be matched accurately across different images. To achieve high-precision feature matching and three-dimensional reconstruction, this embodiment adopts the floating-point SIFT descriptor. An important advantage of the SIFT descriptor is its strong invariance to scale, rotation, and affine transformation. This makes SIFT descriptors particularly suitable for scenes requiring high-precision matching, especially images with complex viewing angles or scale variations, where they provide more stable and accurate feature matching.
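As a non-authoritative illustration, the detect-and-describe step might look as follows using OpenCV's SIFT implementation (the library choice and file name are assumptions, not prescribed by the embodiment):

```python
# Minimal example of SIFT detection and description with OpenCV.
import cv2

img = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
# keypoints: scale/rotation-covariant interest points;
# descriptors: (N, 128) float32 vectors encoding local gradient patterns
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)
```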
The key to feature matching is computing the distance between two descriptors; the difference is usually summarized by a single number used as a similarity measure. Common distance metrics include Manhattan distance, Euclidean distance, and Hamming distance. Since the floating-point SIFT descriptor is used here, Euclidean distance is chosen as the metric.
When searching for feature matching pairs, common methods include brute-force matching (Brute Force Matching, BFM) and fast approximate nearest-neighbour search (Fast Library for Approximate Nearest Neighbors, FLANN). Brute-force matching is the simplest direct method; its basic idea is to find the nearest neighbour or best match by computing the distance between each descriptor and all descriptors in the other set. Brute-force matching suits smaller feature sets and is often used for matching local feature descriptors such as SIFT; this embodiment therefore selects the basic brute-force method. Common strategies for selecting matching pairs are cross-checking, nearest neighbour (NN), and K nearest neighbours (K-NN). Since K-NN is commonly used for matching local feature descriptors, this embodiment selects K-NN for match-pair selection to further improve matching accuracy and robustness. By considering several nearest neighbours, K-NN optimizes the matching process and reduces the likelihood of mismatches.
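A minimal sketch of brute-force K-NN matching under Euclidean distance, assuming two views and OpenCV; the ratio test is a common companion of K-NN selection and its 0.75 threshold is the editor's assumption, not a value given in the description:

```python
# Brute-force K-NN matching with Euclidean (L2) distance for float SIFT.
import cv2

sift = cv2.SIFT_create()
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_L2)              # Euclidean distance metric
knn_matches = bf.knnMatch(desc1, desc2, k=2) # two nearest neighbours per query

# Lowe-style ratio test: keep a match only if its best neighbour is
# clearly better than the second best (threshold assumed).
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]
```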
S3, eliminating, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovering, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image.
In this embodiment, feature point matching allows the geometric transformation between different images to be inferred, and thus the camera pose, including position and orientation, to be estimated. A transformation can be considered geometrically consistent if it successfully maps a sufficient number of feature points between images. However, outliers (false matches) often occur in feature matching and degrade the accuracy of the estimated transformation, so matching pairs that violate the constraints must be eliminated through geometric verification. The geometric verification method used in this embodiment is the random sample consensus algorithm (RANSAC). RANSAC is an iterative algorithm that estimates the parameters of the fundamental matrix or the essential matrix by randomly selecting subsets of the preliminarily matched feature points (e.g., using the normalized eight-point method). Its basic idea is to identify and reject outliers by repeatedly drawing random samples, computing transformation models, and verifying the consistency of these models over all matched points. By effectively eliminating false matches, geometric verification markedly improves the quality of feature matching, and thus the accuracy and robustness of the subsequent three-dimensional reconstruction. The RANSAC method ensures that only matching pairs conforming to the geometric constraints are retained, providing a reliable data basis for camera pose estimation and subsequent reconstruction.
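Continuing from the matching sketch above, geometric verification with RANSAC might be expressed as follows (a sketch; the threshold and confidence values are assumptions):

```python
# RANSAC geometric verification: estimate the fundamental matrix and
# reject matches inconsistent with it.
import cv2
import numpy as np

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

F, inlier_mask = cv2.findFundamentalMat(
    pts1, pts2, cv2.FM_RANSAC,
    ransacReprojThreshold=1.0,   # max epipolar distance in pixels (assumed)
    confidence=0.999)
pts1_in = pts1[inlier_mask.ravel() == 1]   # inlier correspondences only
pts2_in = pts2[inlier_mask.ravel() == 1]
```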
Triangulation computes the coordinates of feature points in three-dimensional space from the camera parameters obtained by decomposing the fundamental and essential matrices during geometric verification, together with the projection positions of the feature points in the images. Triangulation here involves two algorithmic strategies, incremental and global. Incremental SFM processes one new image at a time, continuously updating the estimates of camera pose and scene structure; it typically begins with an initial set of views and then gradually introduces new images, each matched against previously processed images, with the new camera pose and scene structure estimated by triangulation or other methods. Incremental methods use memory and computing resources efficiently on large datasets, but they may suffer from error accumulation and are sensitive to the choice and order of the initial views. For this reason, the present embodiment also incorporates global SFM. Global SFM optimizes camera poses and scene structure simultaneously over the entire dataset. It typically formulates a global optimization problem in which the relationship between camera poses and scene structure is modelled as one large optimization, solved by minimizing the reprojection error. The global method performs bundle adjustment (BA) uniformly after all two-view reconstructions are completed, achieving a holistic view of the dataset and a lower overall error.
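A minimal two-view triangulation sketch, assuming the intrinsics K are calibrated and the relative pose R, t has been recovered from the verified geometry (e.g., via cv2.recoverPose on the essential matrix; all of this is illustrative, not the claimed procedure):

```python
# Triangulating the inlier correspondences into 3D points.
import cv2
import numpy as np

# K: 3x3 intrinsics; R, t: relative pose of the second camera (assumed known).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second camera projection

# pts1_in / pts2_in: (N, 2) inlier correspondences from the RANSAC sketch
pts4d = cv2.triangulatePoints(P1, P2, pts1_in.T, pts2_in.T)  # 4xN homogeneous
pts3d = (pts4d[:3] / pts4d[3]).T                             # Nx3 Euclidean points
```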
S4, preprocessing the two-dimensional image, and generating dense point cloud based on the processed image.
The method for generating the dense point cloud comprises mapping all points in one two-dimensional image into the coordinate system of another two-dimensional image to generate an initial depth map, performing depth estimation on the initial depth map to identify occluded pixels, and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
In this embodiment, an MVS algorithm based on depth map fusion is selected. The method generates a depth map for each image and then fuses all depth maps globally to produce a complete three-dimensional scene. Depth map fusion handles occlusion well, produces reconstructions with smooth surfaces and coherent structure, and performs excellently in terms of precision and robustness.
The method specifically comprises the following steps:
Preprocess the two-dimensional images using the homography matrices provided by SFM: all points in one image are mapped into the coordinate system of another image to generate an initial depth map, so that images from different viewing angles share a unified geometric reference, laying the groundwork for the pixel-by-pixel comparison in the subsequent depth estimation.
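As a sketch, warping one view into the pixel grid of another with a given homography H (here H is assumed to come from the SFM stage, and the file names are placeholders):

```python
# Resample a source view into the reference view's coordinate system.
import cv2

ref_img = cv2.imread("view_ref.jpg")
src_img = cv2.imread("view_src.jpg")
h, w = ref_img.shape[:2]
warped = cv2.warpPerspective(src_img, H, (w, h))  # src view in ref frame
```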
The initial depth map is then refined by depth estimation. A classical and practical depth estimation algorithm is PatchMatch, which consists mainly of four steps: initializing the depth map, initializing the patches, propagation-based optimization of the patches, and post-processing of occluded pixels. To obtain the tangent planes of the reconstructed surface, the initial depth map must first be initialized: sparse points obtained by random initialization within the depth range are projected onto the corresponding frame, and the resulting sparse depth map is triangulated and interpolated to yield a relatively complete depth map. Using this initialized depth map, each patch then undergoes iterative steps of initialization, cost propagation, and random search to find matching pixel blocks. Each pixel passes through three processes per iteration: spatial propagation, view propagation, and plane refinement. All pixels of the left image are processed first, then all pixels of the right image. In even iterations, pixels are traversed from the upper-left corner, left to right and top to bottom; in odd iterations, this order is reversed. This promotes the convergence of depth values and improves the stability of the algorithm. Finally, the occluded pixels undergo post-processing optimization to generate the dense point cloud: each pixel in the depth map is projected into the world coordinate system and then back-projected into the k neighbouring frames. A depth value of the current pixel is considered reliable if it is similar to the depth observed in at least k neighbouring frames. This avoids deviations in the depth map caused by noise or erroneous estimates, thereby producing an accurate dense point cloud.
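The final consistency filter can be sketched as follows; this is the editor's illustration of the check described above, with all names and the world-to-camera [R|t] pose convention assumed rather than specified by the embodiment:

```python
# Depth-consistency filter: keep a pixel's depth only if at least k
# neighbouring views agree with it.
import numpy as np

def consistent(u, v, depth, K, R_ref, t_ref, neighbours, k=2, rel_tol=0.01):
    """neighbours: list of (depth_map, R, t) for nearby frames."""
    # Back-project the reference pixel to a world-space point
    x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    x_world = R_ref.T @ (x_cam - t_ref)

    votes = 0
    for dmap, R, t in neighbours:
        y = K @ (R @ x_world + t)             # project into neighbour view
        if y[2] <= 0:
            continue                           # behind the neighbour camera
        un, vn = int(round(y[0] / y[2])), int(round(y[1] / y[2]))
        if 0 <= vn < dmap.shape[0] and 0 <= un < dmap.shape[1]:
            if abs(dmap[vn, un] - y[2]) < rel_tol * y[2]:
                votes += 1                     # neighbour depth agrees
    return votes >= k
```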
S5, reconstructing the curved surface based on the dense point cloud to obtain the three-dimensional surface.
The curved surface reconstruction method comprises constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors, and generating the three-dimensional surface from the implicit function using an implicit surface mesh generation algorithm.
In this embodiment, Smooth Signed Distance (SSD) surface reconstruction is used to reconstruct a smooth, crack-free three-dimensional surface from a point cloud carrying normal vector information. Specifically, the point cloud consists of many three-dimensional coordinate points, each with a normal vector (the direction pointing outward from the object), so an implicit function representing the surface can be constructed from the points and their normals. To this end, the embodiment defines a signed distance function f(x) whose value is 0 on the surface; the farther from the surface, the more the value tends toward +1 or -1, so the function both encodes each point's distance from the surface and transitions smoothly. To ensure that the constructed surface is smooth while conforming to the actual point cloud data, the embodiment introduces an "energy". This energy can be understood as a measure of error: the smaller the energy, the better the reconstructed surface fits the actual data. By continually adjusting the parameters of the implicit function, the surface is brought closer to the point cloud and its normals while avoiding excessive distortion.
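Written out, the energy described here takes roughly the following form in the original SSD formulation (Calakli and Taubin); the weights and the exact expression are supplied from the SSD literature rather than from the original text:

```latex
E(f) = \lambda_0 \sum_i f(\mathbf{p}_i)^2
     + \lambda_1 \sum_i \left\| \nabla f(\mathbf{p}_i) - \mathbf{n}_i \right\|^2
     + \lambda_2 \int_{\Omega} \left\| H_f(\mathbf{x}) \right\|_F^2 \, d\mathbf{x}
```

The first term drives f to zero at the sample points, the second aligns its gradient with the point normals, and the Hessian term penalizes curvature away from the data, which is what keeps the recovered surface smooth.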
In actual computation, space is divided into many small cells and the values of the surface function are computed at their vertices; these computations depend on the distribution of the point cloud and its normal vectors. An octree is used to make the computation more efficient. Once the complete implicit function is obtained, the Dual Marching Cubes implicit surface meshing algorithm generates a three-dimensional surface composed of triangular meshes from the implicit function.
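Common open-source toolkits do not ship SSD itself, so the sketch below substitutes the closely related screened Poisson reconstruction available in Open3D: it likewise fits an implicit function over an octree and extracts a triangle mesh. This is a stand-in for the claimed SSD + Dual Marching Cubes pipeline, not an implementation of it, and the file names and octree depth are assumptions:

```python
# Implicit surface reconstruction from an oriented dense point cloud,
# using screened Poisson as an assumed stand-in for SSD.
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_points.ply")   # dense MVS output
pcd.estimate_normals()                               # normals, if absent
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                    # octree depth (assumed)
o3d.io.write_triangle_mesh("surface.ply", mesh)
```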
Example two
In the embodiment, the high-precision pure-vision three-dimensional reconstruction system comprises a two-dimensional image acquisition module, a feature matching module, a feature point processing module, a point cloud generating module and a surface reconstruction module.
The two-dimensional image acquisition module is used for acquiring multi-angle high-resolution two-dimensional images of the object.
The feature matching module is used for extracting feature points of the two-dimensional image and matching the feature points.
In the feature matching module, the feature point extraction process comprises detecting image feature points in the two-dimensional image with a SIFT feature detector and encoding the image feature points with SIFT feature descriptors to obtain encoded feature points.
The feature point processing module eliminates, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovers, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image.
The point cloud generation module is used for preprocessing the two-dimensional image and generating dense point cloud based on the processed image.
The process of generating the dense point cloud in the point cloud generation module comprises mapping all points in one two-dimensional image into the coordinate system of another two-dimensional image to generate an initial depth map, performing depth estimation on the initial depth map to identify occluded pixels, and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
And the surface reconstruction module performs curved surface reconstruction based on the dense point cloud to obtain a three-dimensional surface.
The workflow of the surface reconstruction module comprises constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors, and generating a three-dimensional surface from the implicit function using an implicit surface mesh generation algorithm.
The above embodiments merely illustrate preferred embodiments of the present invention, and the scope of the invention is not limited thereto; various modifications and improvements made by those skilled in the art without departing from the spirit of the invention fall within the scope of the invention as defined in the appended claims.

Claims (8)

1. The high-precision pure-vision three-dimensional reconstruction method is characterized by comprising the following steps of:
Acquiring a multi-angle high-resolution two-dimensional image of an object;
Extracting feature points of the two-dimensional image, and matching the feature points;
Removing, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovering, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image;
Preprocessing the two-dimensional image, and generating a dense point cloud based on the processed image;
and carrying out curved surface reconstruction based on the dense point cloud to obtain a three-dimensional surface.
2. The high-precision pure vision three-dimensional reconstruction method according to claim 1, wherein the feature point extraction method comprises the following steps:
detecting image feature points in the two-dimensional image by using a SIFT feature detector;
and encoding the image feature points by using SIFT feature descriptors to obtain encoded feature points.
3. The high-precision pure vision three-dimensional reconstruction method according to claim 1, wherein the method of generating the dense point cloud comprises:
mapping all points in one two-dimensional image into a coordinate system of the other two-dimensional image, and generating an initial depth map;
performing depth estimation on the initial depth map to identify occluded pixels;
and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
4. The high-precision pure vision three-dimensional reconstruction method according to claim 1, wherein the curved surface reconstruction method comprises the following steps:
constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors;
and generating the three-dimensional surface from the implicit function by using an implicit surface mesh generation algorithm.
5. A high-precision pure-vision three-dimensional reconstruction system, adopting the reconstruction method of any one of claims 1-4, characterized by comprising a two-dimensional image acquisition module, a feature matching module, a feature point processing module, a point cloud generation module, and a surface reconstruction module;
The two-dimensional image acquisition module is used for acquiring a multi-angle high-resolution two-dimensional image of an object;
the feature matching module is used for extracting feature points of the two-dimensional image and matching the feature points;
The feature point processing module eliminates, through geometric verification, feature point matching pairs that do not satisfy the constraints, and recovers, by triangulation, the spatial positions of the retained feature points from their projection positions in the two-dimensional image;
The point cloud generation module is used for preprocessing the two-dimensional image and generating dense point cloud based on the processed image;
and the surface reconstruction module performs curved surface reconstruction based on the dense point cloud to obtain a three-dimensional surface.
6. The high-precision pure vision three-dimensional reconstruction system according to claim 5, wherein in the feature matching module, the feature point extraction process comprises:
detecting image feature points in the two-dimensional image by using a SIFT feature detector;
and encoding the image feature points by using SIFT feature descriptors to obtain encoded feature points.
7. The high-precision pure vision three-dimensional reconstruction system according to claim 5, wherein in the point cloud generation module, the process of generating the dense point cloud comprises:
mapping all points in one two-dimensional image into a coordinate system of the other two-dimensional image, and generating an initial depth map;
performing depth estimation on the initial depth map to identify occluded pixels;
and performing post-processing optimization on the occluded pixels to generate the dense point cloud.
8. The high-precision pure vision three-dimensional reconstruction system of claim 5, wherein the workflow of the surface reconstruction module comprises:
constructing an implicit function from the points in the dense point cloud and their corresponding normal vectors;
and generating the three-dimensional surface from the implicit function by using an implicit surface mesh generation algorithm.
CN202510797162.0A 2025-06-16 2025-06-16 A high-precision pure visual 3D reconstruction method and system Pending CN120318464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510797162.0A CN120318464A (en) 2025-06-16 2025-06-16 A high-precision pure visual 3D reconstruction method and system

Publications (1)

Publication Number Publication Date
CN120318464A (en) 2025-07-15

Family

ID=96331589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510797162.0A Pending CN120318464A (en) 2025-06-16 2025-06-16 A high-precision pure visual 3D reconstruction method and system

Country Status (1)

Country Link
CN (1) CN120318464A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 3D Reconstruction Method of Large Component Based on Image Sequence
CN115619974A (en) * 2022-10-28 2023-01-17 郑川江 Large scene three-dimensional reconstruction method, reconstruction device, equipment and storage medium based on improved PatchMatch network
CN117315138A (en) * 2023-09-07 2023-12-29 浪潮软件科技有限公司 Three-dimensional reconstruction method and system based on multi-eye vision
CN118781012A (en) * 2024-09-10 2024-10-15 南京晨新医疗科技有限公司 3D ultra-high-definition fluorescence medical endoscope imaging method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cao Zhuang: "Research on 3D Reconstruction Algorithms with Back-Projection Constraints and Their Applications", China Masters' Theses Full-text Database, 15 March 2025 (2025-03-15), pages 2-23 *
Yang Yang: "Research on Deep-Learning-Based Multi-View Image 3D Reconstruction Methods", China Masters' Theses Full-text Database, 15 May 2025 (2025-05-15), pages 8-28 *
Wang Xin et al.: "Intelligent Processing and Analysis of Remote Sensing Images", Nanjing: Hohai University Press, 30 September 2022, pages 31-35 *

Similar Documents

Publication Publication Date Title
EP3695384B1 (en) Point cloud meshing method, apparatus, device and computer storage media
Kamencay et al. Improved Depth Map Estimation from Stereo Images Based on Hybrid Method.
CN111063021A (en) A method and device for establishing a three-dimensional reconstruction model of a space moving target
CN113178009A (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
Dick et al. Automatic 3D Modelling of Architecture.
CN104318552B (en) The Model registration method matched based on convex closure perspective view
Fua et al. Reconstructing surfaces from unstructured 3d points
Yuan et al. 3D reconstruction of background and objects moving on ground plane viewed from a moving camera
CN114494589A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system
Ling et al. A dense 3D reconstruction approach from uncalibrated video sequences
McKinnon et al. A semi-local method for iterative depth-map refinement
CN111739158A (en) A 3D Scene Image Restoration Method Based on Erasure Code
Yuan et al. DVP-MVS++: Synergize Depth-Normal-Edge and Harmonized Visibility Prior for Multi-View Stereo
CN120318464A (en) A high-precision pure visual 3D reconstruction method and system
Zhang et al. A robust multi‐view system for high‐fidelity human body shape reconstruction
Wang et al. Quasi-dense matching algorithm for close-range image combined with feature line constraint
Hlubik et al. Advanced point cloud estimation based on multiple view geometry
Xiao et al. Image completion using belief propagation based on planar priorities.
CN118967974B (en) Automatic reconstruction method, product, medium and equipment for human body three-dimensional model based on explicit space
Peng et al. 3D Reconstruction Cost Function Algorithm Based on Stereo Matching in the Background of Digital Museums
Yan et al. A Hierarchical image matching method for stereo satellite imagery
Fua et al. From points to surfaces
Chen et al. Comparison and utilization of point cloud tools for 3D reconstruction of small assets in built environment using a monocular camera
Liu et al. Multi-View 3D Reconstruction Algorithm Based on SAM Plane Prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination