WO2021081783A1 - Point cloud fusion method, apparatus and detection system
Point cloud fusion method, apparatus and detection system
- Publication number
- WO2021081783A1 (PCT/CN2019/114221)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- target point
- target
- area
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS > G06—COMPUTING OR CALCULATING; COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS > G06 > G06T > G06T2207/00—Indexing scheme for image analysis or image enhancement > G06T2207/10—Image acquisition modality > G06T2207/10028—Range image; Depth image; 3D point clouds
- G—PHYSICS > G06 > G06T > G06T2207/00 > G06T2207/20—Special algorithmic details > G06T2207/20212—Image combination > G06T2207/20221—Image fusion; Image merging
Description
- This application relates to the technical field of data processing, and in particular to a point cloud fusion method, device and detection system.
- 3D reconstruction technology has matured steadily. Stereo vision, laser scanning, or depth-sensor scanning can be used to obtain three-dimensional information about each physical object in a target scene and to construct a three-dimensional point cloud of the scene.
- Because the raw 3D point cloud generally contains a huge amount of data and a lot of noise, later rendering of the point cloud is slow, which degrades the user experience. The point cloud can therefore be streamlined.
- One way is to merge similar points in the point cloud to obtain a streamlined point cloud.
- However, existing point cloud fusion methods do not retain the details of the target scene well, so the point cloud fusion method needs to be improved.
- To this end, this application provides a point cloud fusion method, device, and detection system.
- According to one aspect, a point cloud fusion method includes: acquiring a first target point; determining a second target point similar to the first target point; determining a target area of the second target point according to the similar point distribution of the second target point; and, when the target area and the first target point satisfy a preset positional relationship, merging the first target point and the second target point.
- According to another aspect, a point cloud fusion device includes a processor, a memory, and a computer program stored on the memory. The processor executes the computer program to implement the following steps: acquiring a first target point; determining a second target point similar to the first target point; determining a target area of the second target point according to the similar point distribution of the second target point; and, when the target area and the first target point satisfy a preset positional relationship, merging the first target point and the second target point.
- According to a further aspect, a detection system includes a sensor and a point cloud fusion device. The sensor is used to obtain a first target point and a second target point, and the point cloud fusion device includes a processor, a memory, and a computer program stored on the memory; the processor executes the computer program to implement the point cloud fusion method described in any one of the embodiments of the present application.
- Fig. 1 is a schematic diagram of determining the fusion points of a reference point according to a fixed search range in the prior art.
- Fig. 2 is a schematic diagram of a scene in which key points are merged in the prior art.
- Fig. 3 is a flowchart of a point cloud fusion method provided by an embodiment of the present invention.
- Fig. 4 is a schematic diagram of determining a target area of a second target point according to an embodiment of the present invention.
- Fig. 5 is a schematic diagram of determining a designated area of a first target point according to an embodiment of the present invention.
- Fig. 6 is a schematic diagram of determining whether the second target point can be merged according to the positional relationship between the target area of the second target point and the first target point, provided by an embodiment of the present invention.
- Fig. 7 is another schematic diagram of determining whether the second target point can be fused according to the positional relationship between the target area of the second target point and the first target point, provided by an embodiment of the present invention.
- Fig. 8 is a schematic structural diagram of a point cloud fusion device provided by an embodiment of the present invention.
- Fig. 9 is a schematic diagram of a detection system provided by an embodiment of the present invention.
- 3D reconstruction technology has matured steadily.
- Three-dimensional information about each physical object in a target scene can be obtained through stereo vision, laser scanning, or depth-sensor scanning, and a three-dimensional point cloud of the scene can be constructed. Because the raw 3D point cloud generally contains a huge amount of data and a lot of noise, later rendering of the point cloud is slow, which degrades the user experience. The point cloud can therefore be streamlined; one way is to merge similar points in the point cloud to obtain a streamlined point cloud.
- Existing point cloud fusion methods generally perform fusion within a fixed search range, as shown in Figure 1: a reference point 11 is determined first, and each point 13 in the search range 12 is then checked against a preset fusion criterion, that is, it is judged whether the two points are similar points. If they are, the point 13 in the search range is merged with the reference point 11.
- The search range is generally set conservatively small, so some points that could otherwise be merged are not.
- In Figure 2, points A and B are similar points; if A is the reference point, B will be merged. However, as Figure 2 shows, B lies at a corner: if B is fused, the scene becomes smooth at the corner and loses its original right-angle characteristic. The same holds for points C and D. If C is within the search range of D and D is used as the reference point, C will be fused by D, and the corner at the lower part of the stair will also become rounded. If the merged points are then used to build a mesh model, this smoothing becomes even more pronounced, so the final 3D point cloud lacks key details and cannot faithfully reflect the real 3D scene.
- To this end, this application proposes a point cloud fusion method that fuses point clouds effectively while preserving the details of the three-dimensional scene.
- S306: Determine the target area of the second target point according to the similar point distribution of the second target point.
- The point cloud fusion method of this application can be used to fuse and streamline an original point cloud after the original three-dimensional point cloud data are acquired, or to fuse and streamline a point cloud constructed from multiple depth maps of a three-dimensional scene.
- The first target point may be selected from the large number of points used to construct the target point cloud, and a second target point similar to the first target point may then be determined.
- The first target point and the second target point may be any points that can be used to construct a point cloud; for example, they may be points in an original three-dimensional point cloud, or points on a depth map.
- In some embodiments, the first target point and the second target point may be points in three-dimensional space, for example points in the original, relatively complex three-dimensional point cloud; in this case, point cloud fusion is performed directly on the acquired original point cloud. Laser scanning can directly produce an original point cloud, so the first target point and the second target point may be obtained by laser scanning.
- In some embodiments, the first target point and the second target point may instead be points on a two-dimensional plane that carry depth information, and these two-dimensional points are fused to obtain a streamlined point cloud.
- The points carrying depth information on the two-dimensional plane can be pixels on a depth map. A point cloud can be constructed from multiple depth maps of the acquired three-dimensional scene, and the resulting point cloud can then be fused. A depth map can be obtained directly from a depth sensor, for example a Kinect sensor.
- The points carrying depth information on the two-dimensional plane may also be two-dimensional points formed by projecting a three-dimensional point cloud onto a plane.
- That is, the three-dimensional points in a point cloud can be fused directly in three-dimensional space, or the three-dimensional point cloud can first be projected onto a two-dimensional reference plane and the points on that plane merged. The two-dimensional reference plane may be a plane spanned by any two of the X, Y, and Z directions of the three-dimensional space.
- After the second target points are determined, the target area of each second target point can be determined according to the distribution of similar points around it.
- The target area can be a planar area or a three-dimensional spatial area: if the first target point and the second target point are points in three-dimensional space, the target area is a three-dimensional spatial area; if they are points on a two-dimensional plane carrying depth information, the target area is a planar area.
- The target area of the second target point is used to determine which points can be merged with the second target point, and it may include similar points of the second target point.
- When determining the target area of the second target point, the second target point may be taken as the center, the radius may be determined from at least one similar point of the second target point, and the target area is then given by this center and radius.
- For example, the target area may be the area whose center is the second target point and whose radius is the distance from the second target point to its closest similar point.
- However, the target area of the present application is not limited to this construction; it may also be obtained in other ways. For example, the second target point may be taken as the center, and the median of the distances between the second target point and its similar points may be used as the radius.
- In some embodiments, the shortest distance between the second target point and the boundary points of its similar point distribution area may be used as the radius, so that the points in the target area are all similar points of the second target point. Whether the first target point can be merged with the second target point is then determined by the positional relationship between the target area and the first target point, which prevents key points reflecting the details of the three-dimensional scene from being merged away.
- In embodiments where the second target point is a point in three-dimensional space, the target area may be a spherical area centered on the second target point. In embodiments where the second target point is a point carrying depth information on a two-dimensional plane, the target area may be a circular area centered on the second target point.
- Take Figure 4 as an example, in which the second target point P is a point carrying depth information on a two-dimensional plane. The solid-line area in the figure is the similar point distribution area of the second target point, that is, the area formed by its multiple similar points; specifically, it may be the area formed by the similar points of the second target point that lie within a circle centered on the second target point with a designated radius. The second target point is taken as the center, the point closest to it on the boundary of the similar point distribution area is found, and the minimum distance to these boundary points is used as the radius, yielding a circular area: the target area of the second target point.
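- The construction above amounts to a nearest-boundary-point distance test. The following minimal Python sketch illustrates it, assuming the boundary points of the similar point distribution area have already been extracted; the function names and the array-based point representation are illustrative, not taken from the patent, and the same code works for 2D pixels and 3D points alike.

```python
import numpy as np

def target_area_radius(p, boundary_points):
    """Shortest distance from second target point p to the boundary points
    of its similar point distribution area; this is the target area radius."""
    d = np.linalg.norm(np.asarray(boundary_points, float) - np.asarray(p, float), axis=1)
    return float(d.min())

def in_target_area(q, p, boundary_points):
    """True if q (e.g. the first target point, or its corresponding point)
    lies inside the circular (2D) or spherical (3D) target area of p."""
    return np.linalg.norm(np.asarray(q, float) - np.asarray(p, float)) \
        <= target_area_radius(p, boundary_points)
```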
- "Similar" in this application means that certain characteristics of two points are close, and a similar point of a given point is a point whose characteristics are close to those of that point. These characteristics can be parameters such as depth value, gray value, and normal vector. Similar points can be neighboring points; for example, points within a small region of a three-dimensional object have similar characteristics. Similar points can also be points in overlapping areas across multiple depth maps.
- Similarity can be determined by preset evaluation parameters, which are parameters used to characterize how similar two points are. The evaluation parameter may be one or more of depth value, gray value, normal vector, reflectivity, reprojection error between two points, and angle between two points.
- In embodiments where the first target point or the second target point is a pixel on a depth map, the evaluation parameters can be depth value, normal vector, gray value, reprojection error between two points, and so on. In embodiments where the first target point or the second target point is a three-dimensional point cloud point obtained by laser scanning, the evaluation parameters can be depth value, angle between two points, reflectivity, and so on. The angle between two points may be the angle between the lines connecting each of the two points to a reference point, where the reference point may be the position of the sensor, for example the position of the lidar.
- The evaluation parameters can be set flexibly according to the actual application scenario, and one or more of them can be selected. A threshold can be preset for each evaluation parameter, such as a depth value threshold: when the difference in depth value between two points is less than the set threshold, the two points are considered similar.
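- As a concrete illustration of thresholded evaluation parameters, here is a hedged Python sketch that tests similarity on two parameters, depth value and normal vector; the dict-based point representation and both threshold values are assumptions chosen for the example, not values given by the application.

```python
import numpy as np

DEPTH_THRESH = 0.01         # illustrative: max depth difference
NORMAL_ANGLE_THRESH = 15.0  # illustrative: max normal angle, in degrees

def are_similar(p, q):
    """Two points are similar when every selected evaluation parameter
    differs by less than its preset threshold; here, depth and normal."""
    if abs(p["depth"] - q["depth"]) >= DEPTH_THRESH:
        return False
    cos_angle = float(np.clip(np.dot(p["normal"], q["normal"]), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) < NORMAL_ANGLE_THRESH
```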
- The second target point may be a similar point of the first target point within a designated area.
- The designated area is used to determine the potential fusion points of the first target point; that is, when looking for points that the first target point can fuse, the search is carried out mainly within the designated area.
- The designated area may be the area formed by all similar points of the first target point, so that as many similar points of the first target point as possible are found during the search.
- The designated area is determined from the similar points of the first target point, so that second target points similar to the first target point, that is, its potential fusion points, can be identified within it. The size of the designated area can be adapted so that all potential fusion points of the first target point are considered as far as possible. Compared with setting a fixed search range for every point, this is more flexible, and the potential fusion points of the first target point are determined more comprehensively.
- However, an area that completely contains all similar points of the first target point may be quite large, and many points in it cannot be merged by the first target point; if the designated area is set too large, the search takes longer and efficiency drops. Therefore, in some embodiments, the designated area may be the area centered on the first target point with a radius of a designated length; limiting the search radius keeps the designated area from becoming too large.
- When the first target point and the second target point are points in three-dimensional space, the designated area can be a spherical three-dimensional area centered on the first target point with a radius of a designated length, or a spherical three-dimensional area covering all similar points of the first target point.
- When the first target point and the second target point are points on a two-dimensional plane carrying depth information, the designated area can be a circular area centered on the first target point with a radius of the designated length, or a circular area covering all similar points of the first target point.
- Take the first target point A as a pixel on a depth map as an example. Starting from A, the radius is increased step by step: when the radius reaches a certain value, the circular area covers the surrounding 8 pixels, and it is judged whether all 8 are similar points of A; if so, the radius is increased further, say until the area covers the surrounding 15 pixels, and it is judged whether these 15 pixels are similar to A. This continues until no more similar points of A are added, or until the radius reaches a preset length. The resulting circular area is then taken as the designated area.
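- The growing-radius search can be sketched as follows; this is a simplified illustration, assuming pixel coordinates as (row, col) tuples and a similarity predicate such as the one sketched earlier, with image-boundary handling omitted for brevity.

```python
def designated_area_radius(a, similar_to_a, max_radius):
    """Grow the circular designated area around pixel a until a newly
    covered pixel is not a similar point of a, or until the preset
    max_radius is reached; returns the final radius."""
    radius = 0
    while radius < max_radius:
        radius += 1
        # Pixels newly covered when the radius grows by one step.
        new_ring = [(a[0] + dr, a[1] + dc)
                    for dr in range(-radius, radius + 1)
                    for dc in range(-radius, radius + 1)
                    if (radius - 1) ** 2 < dr * dr + dc * dc <= radius ** 2]
        if not all(similar_to_a(p) for p in new_ring):
            radius -= 1  # keep the last radius whose new pixels were all similar
            break
    return radius
```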
- The preset positional relationship may be that the target area of the second target point covers the first target point, that is, the first target point also lies in the target area of the second target point; in this case the first target point and the second target point can be merged. If the first target point is not in the target area of the second target point, the two cannot be merged.
- Meeting the preset positional relationship may also mean that the target area of the second target point covers the corresponding point of the first target point.
- For example, when the first target point and the second target point are points on depth maps, the second target point may be a similar point of the first target point on the same depth map, or a similar point on an adjacent depth map.
- If the second target point is a similar point on the same depth map as the first target point, the preset positional relationship may be that the target area of the second target point covers the first target point. If the second target point is a similar point of the first target point on a depth map adjacent to the depth map where the first target point is located (that is, the first and second target points represent the same three-dimensional object on different depth maps), then satisfying the preset positional relationship may mean that the target area of the second target point covers the corresponding point of the first target point, where the corresponding point is the projection of the first target point onto the depth map where the second target point is located.
- An adjacent depth map of the depth map where the first target point is located may be a depth map that is spatially adjacent to it, or a depth map that is temporally adjacent to it.
- The neighboring depth map can correspond to a photo taken by the camera device at a neighboring position, or to a photo taken at a neighboring moment; thus, when determining adjacent depth maps, the camera position corresponding to each depth map can be computed. For example, when the camera positions corresponding to depth map A and depth map B are sufficiently close, depth map A and depth map B are adjacent depth maps. Adjacent depth maps can also be determined by shooting time; for example, two depth maps whose shooting times differ by less than a certain threshold can be considered adjacent.
- The present application is not limited to determining neighboring depth maps in the above manner; other methods can also be used, as long as the depth maps sharing an overlapping area with a given depth map can be obtained.
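- As one possible reading of the two criteria above, the sketch below treats depth maps as adjacent when their camera positions are close or their shooting times are close; the dict fields and both threshold values are illustrative assumptions, not values fixed by the application.

```python
import numpy as np

def are_adjacent(map_a, map_b, pos_thresh=0.5, time_thresh=0.1):
    """Adjacency test for two depth maps using camera position and/or
    shooting time; thresholds are in scene units and seconds here."""
    close_in_space = np.linalg.norm(
        np.asarray(map_a["cam_pos"]) - np.asarray(map_b["cam_pos"])) < pos_thresh
    close_in_time = abs(map_a["time"] - map_b["time"]) < time_thresh
    return close_in_space or close_in_time
```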
- In some embodiments, the following procedure can be used to determine whether the two points can be merged. First, find all second target points similar to the first target point within the designated area of the first target point, and determine a target area for each of them: the target area can be the circular area centered on the second target point whose radius is the shortest distance between the second target point and the boundary points of its similar point distribution area. Then judge whether the target area of the second target point covers the first target point; if it does, the first target point and the second target point are fused, and if not, no fusion is performed.
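- Putting the pieces together, a sketch of this per-point decision might look like the following, reusing in_target_area from the earlier sketch; candidates and boundary_points_of are hypothetical names for the second target points found in the designated area and for a helper returning the boundary of a point's similar point distribution area.

```python
def try_fuse(first, candidates, boundary_points_of):
    """Return the second target points whose target area covers the
    first target point, i.e. the points the first target point may fuse."""
    return [q for q in candidates
            if in_target_area(first, q, boundary_points_of(q))]
```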
- In Figure 6, A is the first target point, and area 1 is the similar point distribution area of A. Points B and C are both second target points similar to A: area 2 is the similar point distribution area of B, and area 3 is the target area of B, determined from the distribution of B's similar points; area 4 is the similar point distribution area of C, and area 5 is the target area of C, determined from the distribution of C's similar points. Whether B and C can be merged depends on whether their target areas cover the first target point A. As Figure 6 shows, the target area of B does not cover A, so B cannot be merged by A, while the target area of C does cover A, so C can be merged by the first target point A.
- Figure 7 shows a stepped scene composed of a series of point cloud points. Under the existing technique, if B is determined to be similar to A, B will be fused by A; but B happens to lie at a right angle, and after fusion the right-angle characteristic of the three-dimensional scene can no longer be reflected, so the constructed scene loses detail.
- With the method of this application, a target area is determined for point B: the circular area centered at B whose radius is the shortest distance from B to the boundary points of B's similar point distribution area. Point A is not in the target area of B, so B will not be fused by A, and the right-angle characteristic is retained.
- When the second target point is on a depth map adjacent to the one where the first target point is located, the corresponding point of the first target point on the adjacent depth map can be determined first, and whether the two points can be merged is then judged by whether that corresponding point lies in the target area of the second target point. The corresponding point is the projection of the first target point onto the adjacent depth map; for example, once the coordinates of the first target point are known, the corresponding point can be determined by projecting the first target point onto the adjacent depth map.
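- A standard way to realize this projection, assuming a pinhole camera model with known intrinsics and relative pose (an assumption for illustration; the patent does not fix the camera model), is:

```python
import numpy as np

def corresponding_point(uv, depth, K_a, K_b, T_ab):
    """Project pixel uv = (u, v) with depth value `depth` from depth map A
    into adjacent depth map B, giving the corresponding point. K_a and K_b
    are 3x3 intrinsic matrices; T_ab is the 4x4 transform from camera A's
    frame to camera B's frame."""
    u, v = uv
    xyz_a = depth * (np.linalg.inv(K_a) @ np.array([u, v, 1.0]))  # back-project
    xyz_b = (T_ab @ np.append(xyz_a, 1.0))[:3]                    # change of frame
    uvw = K_b @ xyz_b                                             # project into B
    return uvw[:2] / uvw[2]
```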
- When fusing, the first target point can be selected from the multiple points one by one in a certain order. For example, for points on a two-dimensional plane, the first target point can be selected from left to right and top to bottom; of course, it can also be selected from right to left or bottom to top.
- For points in three-dimensional space, the points can be projected onto a two-dimensional plane, and the order can then be determined by the rows and columns after projection. Other methods of ordering the selection of the first target point can also be adopted.
- Selecting first target points one by one in order avoids missing points, which would leave mergeable points unmerged.
- This application does not restrict the specific order. Naturally, once a point has been merged by a previously selected first target point, it effectively no longer exists and will not itself be selected as a first target point.
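- A raster-order selection of first target points could be sketched as below; the (row, col) representation and the is_merged predicate are illustrative assumptions.

```python
def select_first_target_points(points, is_merged):
    """Yield first target points left to right, top to bottom, skipping
    points already fused by a previously selected first target point."""
    for p in sorted(points):  # (row, col) tuples sort in raster order
        if not is_merged(p):
            yield p
```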
- For each first target point, the second target points similar to it are determined, and a target area is determined according to the distribution of similar points around each second target point. If the first target point and the target area satisfy the preset positional relationship, the first target point fuses the second target point.
- Merging the first target point and the second target point means combining them into a single point. The parameters of the fused point can be the average of the parameters of the first target point and the second target point. By fusing multiple similar points into one point, the number of data points in the point cloud can be greatly reduced.
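- Averaging the parameters can be a one-liner; the flat parameter-vector layout (position plus any attributes such as gray value) is an assumption for illustration.

```python
import numpy as np

def fuse(first, merged_points):
    """Combine the first target point and the second target points it
    fuses into one point whose parameters are the element-wise average."""
    return np.asarray([first, *merged_points], dtype=float).mean(axis=0)
```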
- In existing point cloud fusion methods, only a fixed search range is determined for each point, and similar points of the reference point within that fixed range are merged, which causes some points reflecting the details of the three-dimensional scene to be merged away.
- In the point cloud fusion method provided by this application, after the first target point is acquired, the second target points similar to it are determined, and a target area is determined for each second target point according to the distribution of similar points around it. Because the distribution around each second target point is taken into account, many points in flat regions are merged, while in regions whose shape is complex and changeable fewer points are merged, preserving the details of the scene as much as possible.
- The present application also provides a point cloud fusion device.
- As shown in Fig. 8, the point cloud fusion device 80 includes a processor 82, a memory 84, and a computer program stored on the memory. The processor 82 executes the computer program to implement the following steps: acquiring a first target point; determining a second target point similar to the first target point; determining a target area of the second target point according to the similar point distribution of the second target point; and, when the target area and the first target point satisfy a preset positional relationship, merging the first target point and the second target point.
- In some embodiments, the radius of the target area is determined based on at least one similar point of the second target point.
- In some embodiments, the radius of the target area is the shortest distance between the second target point and the boundary points of the similar point distribution area of the second target point, where the similar point distribution area is the area formed by multiple similar points of the second target point.
- In some embodiments, the first target point and the second target point include: a point located on a two-dimensional plane that carries depth information, or a point located in three-dimensional space.
- In some embodiments, the points located on a two-dimensional plane and carrying depth information include: pixel points on a depth map, or points obtained by projecting three-dimensional point cloud points obtained by laser scanning onto a two-dimensional plane.
- In some embodiments, the points located in three-dimensional space include: three-dimensional point cloud points obtained by laser scanning.
- In some embodiments, when the first target point and the second target point are points located on a two-dimensional plane carrying depth information, the target area is a circular area; when they are points located in three-dimensional space, the target area is a spherical area.
- In some embodiments, the second target point is a similar point of the first target point in a designated area.
- In some embodiments, when the first target point and the second target point are points located on a two-dimensional plane carrying depth information, the designated area is a circular area; when they are points located in three-dimensional space, the designated area is a spherical area.
- In some embodiments, the designated area includes: an area obtained by taking the first target point as the center and a designated length as the radius, or an area covering all similar points of the first target point.
- In some embodiments, the preset positional relationship includes: the target area of the second target point covers the first target point; or the target area of the second target point covers the corresponding point of the first target point.
- In some embodiments, when the first target point is a pixel point on a depth map, the second target point is: a pixel point in the same depth map as the first target point, or a pixel point in a depth map adjacent to the depth map where the first target point is located. The adjacent depth map is a depth map spatially adjacent to the depth map where the first target point is located and/or a depth map temporally adjacent to it.
- In some embodiments, when the second target point is a pixel point located in the same depth map as the first target point, the preset positional relationship includes: the target area of the second target point covers the first target point.
- In some embodiments, when the second target point is a point on a depth map adjacent to the depth map where the first target point is located, the processor is further configured to determine the corresponding point of the first target point on the adjacent depth map, and the preset positional relationship includes: the target area of the second target point covers the corresponding point of the first target point.
- In some embodiments, similar points are determined based on preset evaluation parameters, and the evaluation parameters are used to characterize the similarity between two points.
- In some embodiments, the evaluation parameter includes one or more of depth value, gray value, normal vector, reflectivity, reprojection error between two points, and angle between two points.
- The present application also provides a detection system, shown in Fig. 9. The detection system 90 includes a sensor 92 and a point cloud fusion device 94. The sensor 92 is used to obtain the first target point and the second target point. The point cloud fusion device 94 includes a processor 942, a memory 944, and a computer program stored on the memory; the processor executes the computer program to implement the point cloud fusion method described in any one of the embodiments of this application.
- In some embodiments, the sensor is a lidar. The lidar scans a three-dimensional scene to obtain an original point cloud, and the first target point and the second target point are points in the original point cloud.
- In some embodiments, the sensor is a camera device. After acquiring multiple photos of the three-dimensional scene, the camera device converts them into multiple depth maps, and a three-dimensional point cloud is constructed from the pixels of the resulting depth maps; the first target point and the second target point are pixels on these depth maps.
- For relevant details of the device and system embodiments, refer to the description of the method embodiments.
- The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement this without creative effort.
Abstract
Point cloud fusion method, apparatus and detection system. The method comprises: acquiring a first target point; determining a second target point similar to the first target point; determining a target area of the second target point according to a similar point distribution of the second target point; and, when the target area and the first target point satisfy a predetermined positional relationship, fusing the first target point and the second target point. During point cloud fusion, determining a target area for a second target point according to the distribution of similar points surrounding the second target point, and deciding whether a first target point can be fused with the second target point according to the positional relationship between the first target point and the target area, fully takes the distribution around the second target point into account and allows the details of a three-dimensional scene to be maintained.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2019/114221 WO2021081783A1 (fr) | 2019-10-30 | 2019-10-30 | Procédé de fusion de nuage de points, appareil et système de détection |
| CN201980038694.9A CN112334952A (zh) | 2019-10-30 | 2019-10-30 | 一种点云融合方法、装置及探测系统 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2019/114221 WO2021081783A1 (fr) | 2019-10-30 | 2019-10-30 | Procédé de fusion de nuage de points, appareil et système de détection |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021081783A1 true WO2021081783A1 (fr) | 2021-05-06 |
Family
ID=74319818
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/114221 Ceased WO2021081783A1 (fr) | 2019-10-30 | 2019-10-30 | Procédé de fusion de nuage de points, appareil et système de détection |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112334952A (fr) |
| WO (1) | WO2021081783A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107123164A (zh) * | 2017-03-14 | 2017-09-01 | 华南理工大学 | 保持锐利特征的三维重建方法及系统 |
| CN109147038A (zh) * | 2018-08-21 | 2019-01-04 | 北京工业大学 | 基于三维点云处理的管道三维建模方法 |
| CN110058237A (zh) * | 2019-05-22 | 2019-07-26 | 中南大学 | 面向高分辨率SAR影像的InSAR点云融合及三维形变监测方法 |
| EP3553745A1 (fr) * | 2018-04-09 | 2019-10-16 | BlackBerry Limited | Procédés et dispositifs de codage entropique binaire de nuages de points |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104299260B (zh) * | 2014-09-10 | 2017-05-17 | 西南交通大学 | 一种基于sift和lbp的点云配准的接触网三维重建方法 |
| CN110322492B (zh) * | 2019-07-03 | 2022-06-07 | 西北工业大学 | 一种基于全局优化的空间三维点云配准方法 |
- 2019-10-30: WO application PCT/CN2019/114221 filed, publication WO2021081783A1, not active (Ceased)
- 2019-10-30: CN application CN201980038694.9A filed, publication CN112334952A, Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN112334952A (zh) | 2021-02-05 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19951113; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19951113; Country of ref document: EP; Kind code of ref document: A1 |