
CN113808196B - Plane fusion positioning method, device, electronic equipment and storage medium - Google Patents

Plane fusion positioning method, device, electronic equipment and storage medium

Info

Publication number
CN113808196B
CN113808196B
Authority
CN
China
Prior art keywords
point
plane
sparse
image frame
constraint condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111059313.0A
Other languages
Chinese (zh)
Other versions
CN113808196A
Inventor
王帅
陈丹鹏
慕翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202111059313.0A priority Critical patent/CN113808196B/en
Publication of CN113808196A publication Critical patent/CN113808196A/en
Application granted granted Critical
Publication of CN113808196B publication Critical patent/CN113808196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract


The present application provides a plane fusion positioning method and apparatus, an electronic device, and a storage medium. The plane fusion positioning method includes: constructing a sparse plane using a sparse point cloud obtained based on the currently input image frame, and obtaining first association information between the sparse plane and a target three-dimensional grid generated by a front end based on the image frame; obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier; and obtaining an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. The embodiments of the present application help improve positioning accuracy in a SLAM system.

Description

Plane fusion positioning method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular to a plane fusion positioning method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer vision technology, research on SLAM (Simultaneous Localization and Mapping) has made remarkable progress, and the technology is widely applied in AR (Augmented Reality) and VR (Virtual Reality) products, unmanned aerial vehicles, robots, and autonomous driving. The construction and optimization of structured planes are key links in SLAM technology. In structured plane optimization, the precision and stability of the generated planes are limited by the selected feature points and plane constraint conditions, so the positioning precision is low.
Disclosure of Invention
The embodiment of the application provides a plane fusion positioning method, a plane fusion positioning device, electronic equipment and a storage medium.
The first aspect of the embodiment of the application provides a plane fusion positioning method, which comprises the following steps:
Constructing a sparse plane by using a sparse point cloud obtained based on a currently input image frame, and acquiring first association information of a target three-dimensional grid generated by a front end based on the image frame and the sparse plane;
obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier;
and obtaining an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
With reference to the first aspect, in one possible implementation manner, obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier includes:
acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame, and a historical three-dimensional grid generated by the front end based on the previous frame, and acquiring a second pose of the carrier at the previous frame;
and constructing a point-plane optimization model by adopting the first association information, the first pose, the second association information and the second pose to obtain a first point-plane constraint condition.
With reference to the first aspect, in one possible implementation manner, obtaining the optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame, includes:
acquiring at least one second point-plane constraint condition, where the at least one second point-plane constraint condition includes the first point-plane constraint condition; the point-plane constraint conditions other than the first point-plane constraint condition in the at least one second point-plane constraint condition are obtained based on target association information, and the target association information is association information obtained based on first history frames, i.e., a preset number of frames preceding the image frame;
and optimizing the first pose, taking the pose estimate that satisfies the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as the optimized value.
With reference to the first aspect, in a possible implementation manner, the method further includes:
marginalizing the point-plane constraint conditions obtained based on second history frames preceding the first history frames.
With reference to the first aspect, in a possible implementation manner, before constructing the sparse plane using the sparse point cloud obtained based on the currently input image frame, the method further includes:
uniformly sampling the image frame to obtain at least one two-dimensional feature point;
determining the points of the at least one two-dimensional feature point in a depth map corresponding to the image frame;
and mapping the points of the at least one two-dimensional feature point in the depth map corresponding to the image frame to three-dimensional space to obtain the sparse point cloud.
With reference to the first aspect, in a possible implementation manner, after obtaining the at least one two-dimensional feature point, the method further includes:
constructing a two-dimensional grid by adopting the at least one two-dimensional feature point;
constructing the sparse plane by adopting the sparse point cloud obtained based on the currently input image frame includes:
based on the correspondence between the two-dimensional feature points in the two-dimensional grid and the three-dimensional points in the sparse point cloud, constructing a three-dimensional grid corresponding to the two-dimensional grid by adopting the three-dimensional points;
and clustering the three-dimensional grids based on the first pose to obtain a sparse plane.
With reference to the first aspect, in one possible implementation manner, uniformly sampling the image frame to obtain at least one two-dimensional feature point includes:
removing edge pixel points of the image frame;
calculating a sampling distance according to the size of the image frame and a preset sampling quantity;
and sampling the pixel points that are not removed in the image frame according to the sampling distance to obtain the at least one two-dimensional feature point.
With reference to the first aspect, in a possible implementation manner, acquiring first association information of a target three-dimensional grid generated by a front end based on an image frame and a sparse plane includes:
acquiring a difference value between the direction of the target three-dimensional grid and the direction of the sparse plane;
And under the condition that the difference value is smaller than or equal to a preset value and at least one three-dimensional point forming the target three-dimensional grid is on the sparse plane, associating the target three-dimensional grid with the sparse plane to obtain first association information.
The second aspect of the embodiment of the application provides a plane fusion positioning device, which comprises an acquisition unit and a processing unit;
the acquisition unit is configured to construct a sparse plane by adopting a sparse point cloud obtained based on the currently input image frame, and to acquire first association information of a target three-dimensional grid generated by the front end based on the image frame and the sparse plane;
the processing unit is configured to obtain a first point-plane constraint condition according to the first association information and the current first pose of the carrier;
the processing unit is further configured to obtain an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
A third aspect of the embodiments of the present application provides an electronic device comprising an input device and an output device, further comprising a processor adapted to implement one or more instructions, and a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of the method as described in the first aspect above.
A fourth aspect of the embodiments of the present application provides a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of the method of the first aspect described above.
A fifth aspect of an embodiment of the application provides a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to perform the steps of the method as described in the first aspect. The computer program product may be a software installation package.
It can be seen that in the embodiments of the application, a sparse plane is constructed using a sparse point cloud obtained based on the currently input image frame; first association information between the sparse plane and a target three-dimensional grid generated by the front end based on the image frame is obtained; a first point-plane constraint condition is obtained according to the first association information and the current first pose of the carrier; and an optimized value of the first pose is obtained according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. Because the currently input image frame is an RGBD image acquired by an RGBD image acquisition device, and an RGBD image generally includes observed two-dimensional data (such as data provided by a depth sensor) that is relatively accurate, the sparse point cloud obtained from this two-dimensional data is more accurate than three-dimensional data obtained by estimation. The constructed sparse plane therefore has higher precision, which improves the precision of the point-plane association (i.e., the first association information). In addition, on top of the reprojection constraint and depth constraint given by the front end of the SLAM system, the point-plane constraint obtained from the point-plane association and the first pose (i.e., the first point-plane constraint condition) is added, and the first pose is continuously optimized so that the pose of the carrier satisfies all three constraint conditions. This reduces the deviation between the estimated and observed values of the pose and improves the positioning accuracy.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an application environment according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a planar fusion positioning method according to an embodiment of the present application;
FIG. 3 is a schematic view of a point-plane association according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of obtaining a first point-plane constraint condition according to an embodiment of the present application;
fig. 5 is a schematic flow chart of obtaining an optimized value of a first pose according to an embodiment of the present application;
FIG. 6 is a schematic diagram of feature point sampling according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of another planar fusion positioning method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a planar fusion positioning device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
The terms "comprising" and "having" and any variations thereof, as used in the description, claims and drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the described embodiments of the application may be combined with other embodiments.
The embodiment of the application provides a plane fusion positioning method, which can be implemented based on the application environment shown in fig. 1. As shown in fig. 1, the application environment includes an electronic device 101, and an image acquisition device 102 and a terminal device 103 that are communicatively connected to the electronic device 101. The electronic device 101 supports operation of the front end and the back end of the SLAM system. The electronic device 101 according to the embodiments of the present application may include various devices with program execution and communication capabilities; for example, the electronic device 101 may be an independent physical server, a server cluster or distributed system, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and basic cloud computing services such as big data and artificial intelligence platforms, or the electronic device 101 may be one of various robots, vehicle-mounted devices, wearable devices, and so on.
The image acquisition device 102 may be an RGBD image acquisition device and, in some scenarios, may specifically be an RGBD image acquisition device provided on the electronic device 101. The image acquisition device 102 is configured to send the acquired RGBD image data to the electronic device 101, so that the electronic device 101 may construct a sparse plane based on the image data, associate three-dimensional points in a three-dimensional grid (3Dmesh) constructed at the front end with the sparse plane, and construct a point-plane optimization model based on the point-plane association information of adjacent frames and the pose of the carrier, thereby obtaining a point-plane constraint condition. The pose of the carrier is then continuously optimized so that it satisfies the point-plane constraint condition and the reprojection constraint condition and depth constraint condition given by the front end of the SLAM system, improving the positioning accuracy of the SLAM system.
The terminal device 103 may be configured to visualize the image data acquired by the image acquisition device 102, the two-dimensional grid (2Dmesh), 3Dmesh and map constructed by the electronic device 101, and the positioning results of the SLAM system. The terminal device 103 may be a device with a display function, such as a computer or AR (Augmented Reality) or VR (Virtual Reality) glasses, or may be a separate display.
The planar fusion positioning method provided by the embodiment of the application is explained in detail below with reference to the related drawings.
Referring to fig. 2, fig. 2 is a flowchart of a plane fusion positioning method according to an embodiment of the present application. The method is applied to an electronic device and, as shown in fig. 2, includes steps 201 to 203:
step 201, constructing a sparse plane by using a sparse point cloud obtained based on a currently input image frame, and acquiring first association information of a target 3 Dresh generated by a front end based on the image frame and the sparse plane.
In the embodiment of the present application, the currently input image frame refers to the latest RGBD image acquired by the image acquisition device. For this image frame, the electronic device uniformly samples it to obtain at least one two-dimensional (2D) feature point, as shown in the b graph in fig. 3. It should be understood that an RGBD image generally has a corresponding depth map, so the points of the at least one 2D feature point in the corresponding depth map can be determined and then mapped to three-dimensional (3D) space, obtaining the sparse point cloud used to construct the sparse plane. In this embodiment, since the points in the depth map are data observed by the depth sensor, they are relatively accurate, and a sparse plane constructed from a sparse point cloud obtained from these points has higher precision and stability than a plane constructed using a dense point cloud.
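A minimal sketch of this back-projection step is given below (Python with NumPy; the pinhole intrinsics fx, fy, cx, cy, the function name, and the validity check are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def backproject_features(uv, depth_map, fx, fy, cx, cy):
    """Map sampled 2D feature points (u, v) to a sparse 3D point cloud
    using the depth observed by the sensor (pinhole camera model assumed)."""
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    z = depth_map[v, u]                     # observed depth at each feature point
    u, v, z = u[z > 0], v[z > 0], z[z > 0]  # drop pixels without a depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # N x 3 sparse point cloud
```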
Illustratively, after obtaining the at least one 2D feature point, the method further includes constructing a 2Dmesh using the at least one 2D feature point. For example, the at least one 2D feature point may be triangulated to construct the 2Dmesh; alternatively, the 2Dmesh may be constructed using the sampling distance determined during sampling as the right-angle sides: three 2D feature points give two known right-angle sides, and a hypotenuse is added to form one 2Dmesh triangle.
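For the triangulation variant, a sketch using SciPy's Delaunay triangulation is shown below (the patent does not prescribe a particular triangulation routine; the function name is hypothetical):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_2dmesh(feature_pts):
    """Triangulate the uniformly sampled 2D feature points into a 2Dmesh;
    each row of the result holds the vertex indices of one triangle."""
    return Delaunay(np.asarray(feature_pts, dtype=float)).simplices
```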
Illustratively, constructing a sparse plane using a sparse point cloud derived based on a currently input image frame includes:
based on the correspondence between the 2D feature points in the 2Dmesh and the 3D points in the sparse point cloud, constructing the 3Dmesh corresponding to the 2Dmesh by adopting the 3D points;
and clustering the 3Dmesh based on the first pose to obtain the sparse plane.
In the embodiment of the present application, for each 2D feature point in the 2Dmesh there is a corresponding 3D point in the sparse point cloud (for example, the points shown in the b graph and the f graph in fig. 3), and the correspondence (or mapping relation) may be represented by the same identifier: if 2D feature points A, B and C form a 2Dmesh, then the 3D points corresponding to A, B and C may be used to form the 3Dmesh. It should be appreciated that the SLAM system may estimate the distance between any two of the constructed 3Dmesh based on the pose of the carrier, and the electronic device may cluster the 3Dmesh whose distance is less than or equal to a certain threshold, resulting in the finally constructed sparse plane, which may be shown as the d graph in fig. 3 in 3D space and the f graph in fig. 3 in 2D space. In this embodiment, the sparse plane is constructed based on observed 2D feature points with higher accuracy, which improves the stability and accuracy of the plane.
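One way to realize this clustering step is sketched below under stated assumptions: triangles are given in world coordinates (i.e., already transformed via the first pose), the merge criterion is normal agreement plus point-to-plane distance, and the thresholds are illustrative placeholders rather than values from the patent:

```python
import numpy as np

def tri_normal(tri):
    """Unit normal of a 3Dmesh triangle (3x3 array of world-frame vertices)."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return n / np.linalg.norm(n)

def cluster_3dmesh(tris, angle_deg=10.0, dist=0.05):
    """Greedily merge 3Dmesh triangles whose normals agree and whose vertices
    lie close to a running plane (n, d) with n^T p = d; each resulting
    cluster is one sparse plane."""
    planes = []                                    # (normal n, distance d, points)
    cos_t = np.cos(np.deg2rad(angle_deg))
    for tri in tris:
        n = tri_normal(tri)
        for i, (pn, pd, pts) in enumerate(planes):
            if abs(n @ pn) >= cos_t and np.abs(tri @ pn - pd).max() <= dist:
                pts = np.vstack([pts, tri])
                c = pts.mean(axis=0)               # refit plane: centroid plus
                pn = np.linalg.svd(pts - c)[2][-1] # smallest singular vector
                planes[i] = (pn, pn @ c, pts)
                break
        else:                                      # no cluster accepted the triangle
            planes.append((n, n @ tri.mean(axis=0), tri.copy()))
    return [(pn, pd) for pn, pd, _ in planes]      # sparse planes as (n, d)
```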
The target 3Dmesh is the 3Dmesh constructed by the front end of the SLAM system based on selected 3D points; it may be shown as the c graph in fig. 3 in 3D space and the e graph in fig. 3 in 2D space. The front end of the SLAM system may perform feature extraction on the image frame to obtain front-end 2D feature points, as shown in the a graph in fig. 3, and then map these 2D feature points to 3D points. The front end of the SLAM system may be a visual odometer (Visual Odometry).
Illustratively, acquiring first association information of a target 3Dmesh generated by a front end based on an image frame and a sparse plane includes:
obtaining a difference value between the direction of the target 3Dmesh and the direction of the sparse plane;
and, when the difference value is less than or equal to a preset value and at least one 3D point forming the target 3Dmesh is on the sparse plane, associating the target 3Dmesh with the sparse plane to obtain the first association information.
In the embodiment of the application, associating the target 3Dmesh with the sparse plane requires two conditions. The first is that their directions are similar: when the difference between the direction of the target 3Dmesh and the direction of the sparse plane is less than or equal to a preset value (which may be set from an empirical value), the two directions are determined to be similar. The second is that at least one 3D point of the target 3Dmesh is on the sparse plane, i.e., at least one 3D point of the target 3Dmesh satisfies the plane equation of the sparse plane. When both conditions are met, the target 3Dmesh and the sparse plane are regarded as belonging to the same plane, and the 3D points in the target 3Dmesh can be associated with the sparse plane to obtain the first association information, which can be understood as the plane obtained by expanding the sparse plane. In this embodiment, the 2D feature points selected by the front end of the SLAM system are fast to extract, and using their corresponding 3D points to expand the sparse plane improves the speed of plane association and expansion.
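A sketch of this two-condition association test follows, assuming unit normals and hypothetical tolerances dir_eps and pt_eps (the patent only says the thresholds are preset or empirical):

```python
import numpy as np

def associate(tri_pts, tri_n, plane_n, plane_d, dir_eps=0.1, pt_eps=0.02):
    """Associate a target 3Dmesh triangle with a sparse plane when (1) the
    direction difference is below a preset value and (2) at least one of its
    3D points satisfies the plane equation n^T p = d (within a tolerance)."""
    dir_diff = 1.0 - abs(tri_n @ plane_n)                 # 0 when normals are parallel
    on_plane = np.abs(tri_pts @ plane_n - plane_d).min() <= pt_eps
    return dir_diff <= dir_eps and on_plane
```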
Step 202, obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier.
In the embodiment of the present application, as shown in fig. 4, obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier includes steps 401-402:
Step 401, acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame, and a historical 3Dmesh generated by the front end based on the previous frame, and acquiring the second pose of the carrier at the previous frame;
and 402, constructing a point-plane optimization model by adopting the first association information, the first pose, the second association information and the second pose to obtain the first point-plane constraint condition.
The historical sparse point cloud is the sparse point cloud obtained based on the at least one 2D feature point in the previous frame; the historical sparse plane is the sparse plane constructed based on the historical sparse point cloud; the historical 3Dmesh is the 3Dmesh constructed by the front end based on the previous frame; and the second association information is the association information between the historical sparse plane and the historical 3Dmesh, obtained in the same manner as the first association information, i.e., a sparse plane is constructed from the sparse point cloud and the 3Dmesh constructed at the front end is associated with that sparse plane. The first pose is the pose given by the front end at the current moment, the second pose is the pose given by the front end at the moment of the previous frame, and the front end of the SLAM system can provide the pose at each moment. Since a point on a plane may be observed by multiple adjacent frames — for example, a point in the first association information may also be observed in the previous frame, such as the point P in fig. 3, whose observed position in the current image frame is Py and in the previous frame is Px — the electronic device constructs, based on this relation, an initial point-plane optimization model using the first association information, the first pose, the second association information and the second pose. Using the plane constraint $n^T P^W = d$ for a world-frame point $P^W = R_{c_2}^W P_h^{c_2} + t_{c_2}^W$ on the sparse plane, the initial model can be written as:

$$P_h^{c_1} = (R_{c_1}^W)^T \left( E - \frac{t_{c_1}^W \, n^T}{d} \right) \left( R_{c_2}^W P_h^{c_2} + t_{c_2}^W \right)$$

wherein $P_h^{c_1}$ and $P_h^{c_2}$ represent the 3D coordinates of the h-th point in the coordinate systems of the previous frame $c_1$ and the current image frame $c_2$ respectively, which may be the observations of the same point in the first and second association information; $R_{c_1}^W$ and $R_{c_2}^W$ represent the rotations of $c_1$ and $c_2$ in the world coordinate system W respectively; $t_{c_1}^W$ and $t_{c_2}^W$ represent the positions of $c_1$ and $c_2$ in the world coordinate system W respectively, which can be obtained from the second pose and the first pose; n and d are the parameters of the sparse plane constructed based on the current image frame, n representing the direction of the plane and d the distance of the plane; E represents the identity matrix and T the transpose.

It should be understood that points in adjacent frames generally satisfy a homography transform, so a homography matrix H may be extracted to simplify the initial model. Letting

$$H = (R_{c_1}^W)^T \left( E - \frac{t_{c_1}^W \, n^T}{d} \right),$$

the simplified point-plane optimization model is $P_h^{c_1} = H \left( R_{c_2}^W P_h^{c_2} + t_{c_2}^W \right)$, and the first point-plane constraint condition can be expressed as:

$$\mathrm{Error} = \left\| P_h^{c_1} - H \left( R_{c_2}^W P_h^{c_2} + t_{c_2}^W \right) \right\|$$

where Error represents the deviation between the positions of the h-th point observed in the previous frame $c_1$ and the current image frame $c_2$ under the plane constraint. The electronic device may continuously optimize the initial pose given by the front end of the SLAM system (i.e., the first pose) to make this Error smaller (closer to 0), so that the estimated pose of the carrier approaches the observations. In this embodiment, adding the homography constraint (i.e., the homography matrix H) to the constructed point-plane constraint condition helps improve the constraint precision and reduces the dependence on 3D point precision during localization or tracking.
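Under the reconstruction above, the residual could be evaluated as in the following sketch (the variable names and the use of a Euclidean norm are assumptions, not the patent's notation):

```python
import numpy as np

def point_plane_error(p_c1, p_c2, R1, t1, R2, t2, n, d):
    """Residual between the observation of point h in the previous frame c1
    and its observation in the current frame c2 transferred through the
    plane-induced homography H = R1^T (E - t1 n^T / d)."""
    H = R1.T @ (np.eye(3) - np.outer(t1, n) / d)
    pred_c1 = H @ (R2 @ p_c2 + t2)            # c2 observation transferred into c1
    return np.linalg.norm(p_c1 - pred_c1)     # Error driven toward 0 by the optimizer
```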
Step 203, acquiring an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
In the embodiment of the application, the electronic device obtains the reprojection constraint condition and the depth constraint condition based on the current image frame and the previous frame. The front end of the SLAM system can estimate the pose of the carrier through a series of observations and motions, and these observations and motions provide constraints that the pose should satisfy simultaneously. Among them, the front end of the SLAM system provides the depth constraint condition and the reprojection constraint condition: the depth constraint condition keeps the depth error between observations of the same point in adjacent frames small, and the reprojection constraint condition keeps the reprojection error between observations of the same point in adjacent frames small.
Illustratively, as shown in fig. 5, obtaining the optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame, includes steps 501-502:
Step 501, obtaining at least one second point-plane constraint condition, where the at least one second point-plane constraint condition includes the first point-plane constraint condition; the point-plane constraint conditions other than the first point-plane constraint condition in the at least one second point-plane constraint condition are obtained based on target association information, and the target association information is association information obtained based on first history frames, i.e., a preset number of frames preceding the image frame;
step 502, optimizing the first pose, taking the pose estimate that satisfies the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as the optimized value.
The at least one second point-plane constraint condition is obtained based on the current image frame and a preset number of first history frames nearest to it. For example, if the current image frame is the 11th frame, the at least one second point-plane constraint condition is obtained based on the 1st to 11th frames, the first history frames are the preceding 1st to 10th frames, and the target association information is the association information obtained based on those first history frames. For instance, one second point-plane constraint condition may be constructed based on the point-plane association information of the 1st frame and the pose at the 1st frame together with the point-plane association information of the 2nd frame and the pose at the 2nd frame, and another may be constructed based on the point-plane association information of the 2nd frame and the pose at the 2nd frame together with the point-plane association information of the 3rd frame and the pose at the 3rd frame.
In a SLAM system, the preset number of frames preceding the current image frame are generally considered to be relatively new frames, and the point-plane constraint conditions constructed based on them still influence the current positioning. Therefore, when optimizing the first pose, the pose estimate should satisfy the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as far as possible; among the at least one second point-plane constraint condition, the first point-plane constraint condition mainly needs to be satisfied, and finally an optimized value is obtained that minimizes the errors in the first point-plane constraint condition, the reprojection constraint condition and the depth constraint condition.
Illustratively, the method further includes marginalizing the point-plane constraint conditions obtained based on second history frames preceding the first history frames. The second history frames are earlier than the first history frames; for example, if the current image frame is the 11th frame and the first history frames are the 3rd to 10th frames, then the frames before the 3rd frame are all second history frames. Because the second history frames are older, the point-plane constraint conditions obtained based on them have little influence on the current positioning or tracking, so they can be marginalized to keep the point-plane constraint conditions timely.
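A sketch of this joint optimization follows, assuming the pose is parameterized as a single vector x0 and that point_plane_res, reproj_res and depth_res are hypothetical callables returning the residual vectors of the respective constraint conditions (SciPy's least_squares stands in for whatever solver the SLAM back end actually uses):

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_first_pose(x0, point_plane_res, reproj_res, depth_res):
    """Refine the first pose until it jointly satisfies the (at least one)
    second point-plane constraint, the reprojection constraint, and the
    depth constraint; x0 is the initial pose from the SLAM front end."""
    def residuals(x):
        return np.concatenate([point_plane_res(x),   # point-plane errors
                               reproj_res(x),        # reprojection errors
                               depth_res(x)])        # depth errors
    return least_squares(residuals, x0).x  # pose estimate minimizing all errors
```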
Illustratively, uniformly sampling the image frame to obtain at least one 2D feature point includes:
removing edge pixel points of the image frame;
calculating a sampling distance according to the size of the image frame and a preset sampling quantity;
and sampling the pixel points that are not removed in the image frame according to the sampling distance to obtain the at least one 2D feature point.
In the embodiment of the present application, as shown in fig. 6, assume that the current image frame is M×N pixels and S pixels at the edge of the image frame are removed (S being an integer greater than or equal to 1), so that the remaining pixels form an (M−S)×(N−S) region, and the preset sampling number is Q (Q being an integer greater than or equal to 3). The sampling distance is then $\sqrt{\frac{(M-S)\times(N-S)}{Q}}$, and uniformly sampling the pixel points that are not removed in the image frame at this sampling distance yields the at least one 2D feature point shown in fig. 6.
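A sketch of this sampling rule under the formula above (the clamping to a minimum step of 1 is an added safeguard, not from the patent):

```python
import numpy as np

def uniform_sample(M, N, S, Q):
    """Sample feature locations on an M x N frame: remove an S-pixel border,
    then place points at the sampling distance sqrt((M - S) * (N - S) / Q)
    so that roughly Q points cover the remaining region."""
    step = max(1, int(np.sqrt((M - S) * (N - S) / Q)))
    return [(u, v) for u in range(S, M - S, step)
                   for v in range(S, N - S, step)]   # 2D feature points
```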
It can be seen that in the embodiment of the application, a sparse plane is constructed using a sparse point cloud obtained based on the currently input image frame; first association information between the sparse plane and a target 3Dmesh generated by the front end based on the image frame is obtained; a first point-plane constraint condition is obtained according to the first association information and the current first pose of the carrier; and an optimized value of the first pose is obtained according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. Because the currently input image frame is an RGBD image acquired by an RGBD image acquisition device, and an RGBD image generally includes observed 2D data (such as data provided by a depth sensor) that is relatively accurate, the sparse point cloud obtained based on this 2D data is more accurate than 3D data obtained by estimation. The constructed sparse plane therefore has higher precision, which improves the precision of the point-plane association (i.e., the first association information). In addition, on top of the reprojection constraint and depth constraint given by the front end of the SLAM system, the point-plane constraint obtained based on the point-plane association and the first pose (i.e., the first point-plane constraint condition) is added, and the first pose is continuously optimized so that the pose of the carrier satisfies all three constraint conditions. This reduces the deviation between the estimated and observed values of the pose and improves the positioning accuracy.
Referring to fig. 7, fig. 7 is a flowchart of another planar fusion positioning method according to an embodiment of the present application, as shown in fig. 7, including steps 701-705:
701, constructing a sparse plane by using a sparse point cloud obtained based on a currently input image frame;
702, acquiring first association information of a target 3Dmesh generated by a front end based on the image frame and the sparse plane;
703, acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame of the image frame, and a historical 3Dmesh generated by the front end based on the previous frame, and acquiring the second pose of the carrier at the previous frame;
704, constructing a point-plane optimization model by adopting the first association information, the first pose, the second association information and the second pose to obtain a first point-plane constraint condition;
and 705, obtaining an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
The specific implementation of steps 701-705 is described in the embodiment shown in fig. 2, and will not be repeated here.
It can be seen that in the embodiment of the application, a sparse plane is constructed using a sparse point cloud obtained based on the currently input image frame; first association information between the sparse plane and a target 3Dmesh generated by the front end based on the image frame is obtained; second association information obtained based on the previous frame of the image frame, together with the second pose of the carrier at the previous frame, is acquired; a point-plane optimization model is constructed using the first association information, the first pose, the second association information and the second pose to obtain a first point-plane constraint condition; and an optimized value of the first pose is obtained according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. Because the currently input image frame is an RGBD image acquired by an RGBD image acquisition device, and an RGBD image generally includes observed 2D data (such as data provided by a depth sensor) that is relatively accurate, the sparse point cloud obtained based on this 2D data is more accurate than 3D data obtained by estimation. The constructed sparse plane therefore has higher precision, the point-plane association between the currently input image frame and the adjacent (previous) frame is more accurate, and the constructed first point-plane constraint condition has higher precision. In addition, on top of the reprojection constraint and depth constraint given by the front end of the SLAM system, the first point-plane constraint condition obtained based on the point-plane association and the first pose is added, and the first pose is continuously optimized so that the pose of the carrier satisfies all three constraint conditions. This reduces the deviation between the estimated and observed values of the pose and further improves the positioning accuracy.
Based on the description of the method embodiment shown in fig. 2 or fig. 7, the embodiment of the present application further provides a planar fusion positioning device, please refer to fig. 8, fig. 8 is a schematic structural diagram of the planar fusion positioning device provided in the embodiment of the present application, and as shown in fig. 8, the device includes an obtaining unit 801 and a processing unit 802;
an obtaining unit 801, configured to construct a sparse plane by using a sparse point cloud obtained based on a currently input image frame, and obtain first association information of a target 3Dmesh generated by a front end based on the image frame and the sparse plane;
a processing unit 802, configured to obtain a first point-plane constraint condition according to the first association information and the current first pose of the carrier;
the processing unit 802 is further configured to obtain an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
In one possible implementation, in terms of obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier, the processing unit 802 is specifically configured to:
acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame, and a historical 3Dmesh generated by the front end based on the previous frame, and acquiring the second pose of the carrier at the previous frame;
and constructing a point-plane optimization model by adopting the first association information, the first pose, the second association information and the second pose to obtain a first point-plane constraint condition.
In one possible implementation, in terms of obtaining the optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame, the processing unit 802 is specifically configured to:
acquiring at least one second point-plane constraint condition, where the at least one second point-plane constraint condition includes the first point-plane constraint condition; the point-plane constraint conditions other than the first point-plane constraint condition in the at least one second point-plane constraint condition are obtained based on target association information, and the target association information is association information obtained based on first history frames, i.e., a preset number of frames preceding the image frame;
and optimizing the first pose, taking the pose estimate that satisfies the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as the optimized value.
In a possible implementation, the processing unit 802 is further configured to marginalize the point-plane constraint conditions obtained based on second history frames preceding the first history frames.
In one possible implementation, the processing unit 802 is further configured to:
uniformly sampling the image frame to obtain at least one 2D feature point;
determining the points of the at least one 2D feature point in the depth map corresponding to the image frame;
and mapping the points of the at least one 2D feature point in the depth map corresponding to the image frame to 3D space to obtain the sparse point cloud.
In a possible implementation, the processing unit 802 is further configured to construct 2Dmesh using the at least one 2D feature point;
in terms of constructing a sparse plane using a sparse point cloud derived based on a currently input image frame, the processing unit 802 is specifically configured to:
based on the correspondence between the 2D feature points in the 2Dmesh and the 3D points in the sparse point cloud, constructing the 3Dmesh corresponding to the 2Dmesh by adopting the 3D points;
and clustering the 3Dmesh based on the first pose to obtain the sparse plane.
In one possible implementation, the processing unit 802 is specifically configured to, in uniformly sampling the image frame to obtain at least one 2D feature point:
removing edge pixel points of the image frame;
calculating a sampling distance according to the size of the image frame and a preset sampling quantity;
and sampling the pixel points that are not removed in the image frame according to the sampling distance to obtain the at least one 2D feature point.
In one possible implementation manner, in acquiring the first association information of the target 3Dmesh generated by the front end based on the image frame and the sparse plane, the processing unit 802 is specifically configured to:
obtaining a difference value between the direction of the target 3Dmesh and the direction of the sparse plane;
and, when the difference value is less than or equal to a preset value and at least one 3D point forming the target 3Dmesh is on the sparse plane, associating the target 3Dmesh with the sparse plane to obtain the first association information.
It can be seen that in the plane fusion positioning device shown in fig. 8, a sparse plane is constructed using a sparse point cloud obtained based on the currently input image frame; first association information between the sparse plane and a target 3Dmesh generated by the front end based on the image frame is obtained; a first point-plane constraint condition is obtained according to the first association information and the current first pose of the carrier; and an optimized value of the first pose is obtained according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. Because the currently input image frame is an RGBD image acquired by an RGBD image acquisition device, and an RGBD image generally includes observed 2D data (such as data provided by a depth sensor) that is relatively accurate, the sparse point cloud obtained based on this 2D data is more accurate than 3D data obtained by estimation. The constructed sparse plane therefore has higher precision, which improves the precision of the point-plane association (i.e., the first association information). In addition, on top of the reprojection constraint and depth constraint given by the front end of the SLAM system, the point-plane constraint obtained based on the point-plane association and the first pose (i.e., the first point-plane constraint condition) is added, and the first pose is continuously optimized so that the pose of the carrier satisfies all three constraint conditions. This reduces the deviation between the estimated and observed values of the pose and improves the positioning accuracy.
According to an embodiment of the present application, each unit in the planar fusion positioning device shown in fig. 8 may be separately or completely combined into one or several other units, or some unit(s) thereof may be further split into a plurality of units with smaller functions, which may achieve the same operation without affecting the implementation of the technical effects of the embodiment of the present application. The above units are divided based on logic functions, and in practical applications, the functions of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the planar fusion positioning device may also include other units, and in practical applications, these functions may also be implemented with assistance from other units, and may be implemented by cooperation of multiple units.
According to another embodiment of the present application, the plane fusion positioning device shown in fig. 8 may be constructed by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 2 or fig. 7 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the plane fusion positioning method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the computer-readable recording medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the application also provides electronic equipment. Referring to fig. 9, the electronic device includes at least a processor 901, an input device 902, an output device 903, and a computer storage medium 904. Wherein the processor 901, input devices 902, output devices 903, and computer storage media 904 within the electronic device may be connected by a bus or other means.
The computer storage medium 904 may be stored in a memory of an electronic device, the computer storage medium 904 for storing a computer program comprising program instructions, the processor 901 for executing the program instructions stored by the computer storage medium 904. The processor 901 (or CPU (Central Processing Unit, central processing unit)) is a computing core as well as a control core of the electronic device, which is adapted to implement one or more instructions, in particular to load and execute one or more instructions to implement a corresponding method flow or a corresponding function.
In one embodiment, the processor 901 of the electronic device provided in the embodiments of the present application may be configured to perform a series of planar fused positioning processes:
constructing a sparse plane by using a sparse point cloud obtained based on a currently input image frame, and acquiring first association information of a target 3Dmesh generated by a front end based on the image frame and the sparse plane;
obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier;
and obtaining an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame.
In yet another embodiment, the processor 901 performs obtaining a first point-plane constraint condition based on the first association information and the current first pose of the carrier, including:
acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame, and a historical 3Dmesh generated by the front end based on the previous frame, and acquiring the second pose of the carrier at the previous frame;
and constructing a point-plane optimization model by adopting the first association information, the first pose, the second association information and the second pose to obtain a first point-plane constraint condition.
In yet another embodiment, the processor 901 performs obtaining an optimized value of the first pose according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and the previous frame of the image frame, including:
acquiring at least one second point-plane constraint condition, where the at least one second point-plane constraint condition includes the first point-plane constraint condition; the point-plane constraint conditions other than the first point-plane constraint condition in the at least one second point-plane constraint condition are obtained based on target association information, and the target association information is association information obtained based on first history frames, i.e., a preset number of frames preceding the image frame;
and optimizing the first pose, taking the pose estimate that satisfies the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as the optimized value.
In yet another embodiment, the processor 901 is further configured to marginalize the point-plane constraint conditions obtained based on second history frames preceding the first history frames.
In yet another embodiment, the processor 901 is further configured to, prior to constructing the sparse plane using the sparse point cloud derived based on the currently input image frame:
uniformly sampling the image frame to obtain at least one 2D feature point;
determining the points of the at least one 2D feature point in the depth map corresponding to the image frame;
and mapping the points of the at least one 2D feature point in the depth map corresponding to the image frame to 3D space to obtain the sparse point cloud.
In yet another embodiment, after obtaining the at least one 2D feature point, the processor 901 is further configured to:
constructing a 2Dmesh by adopting the at least one 2D feature point;
The processor 901 performs construction of a sparse plane using a sparse point cloud derived based on a currently input image frame, including:
based on the correspondence between the 2D feature points in the 2Dmesh and the 3D points in the sparse point cloud, constructing the 3Dmesh corresponding to the 2Dmesh by adopting the 3D points;
and clustering the 3Dmesh based on the first pose to obtain the sparse plane.
In yet another embodiment, the processor 901 performs uniform sampling of the image frame to obtain at least one 2D feature point, including:
removing edge pixel points of the image frame;
calculating a sampling distance according to the size of the image frame and a preset sampling quantity;
and sampling the pixel points that are not removed in the image frame according to the sampling distance to obtain the at least one 2D feature point.
In yet another embodiment, the processor 901 performs obtaining first association information of the target 3Dmesh generated by the front end based on the image frame and the sparse plane, including:
obtaining a difference value between the direction of the target 3Dmesh and the direction of the sparse plane;
and, when the difference value is less than or equal to a preset value and at least one 3D point forming the target 3Dmesh is on the sparse plane, associating the target 3Dmesh with the sparse plane to obtain the first association information.
It can be seen that in the electronic device shown in fig. 9, a sparse plane is constructed using a sparse point cloud obtained based on the currently input image frame; first association information between the sparse plane and a target 3Dmesh generated by the front end based on the image frame is obtained; a first point-plane constraint condition is obtained according to the first association information and the current first pose of the carrier; and an optimized value of the first pose is obtained according to the first point-plane constraint condition, and the reprojection constraint condition and depth constraint condition obtained based on the image frame and its previous frame. Because the currently input image frame is an RGBD image acquired by an RGBD image acquisition device, and an RGBD image generally includes observed 2D data (such as data provided by a depth sensor) that is relatively accurate, the sparse point cloud obtained based on this 2D data is more accurate than 3D data obtained by estimation. The constructed sparse plane therefore has higher precision, which improves the precision of the point-plane association (i.e., the first association information). In addition, on top of the reprojection constraint and depth constraint given by the front end of the SLAM system, the point-plane constraint obtained based on the point-plane association and the first pose (i.e., the first point-plane constraint condition) is added, and the first pose is continuously optimized so that the pose of the carrier satisfies all three constraint conditions. This reduces the deviation between the estimated and observed values of the pose and improves the positioning accuracy.
By way of example, the electronic device may include, but is not limited to, a processor 901, an input device 902, an output device 903, and a computer storage medium 904; the input device 902 may be a keyboard, a touch screen, or the like, and the output device 903 may be a speaker, a display, a radio frequency transmitter, or the like. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and is not limiting; the electronic device may include more or fewer components than shown, certain components may be combined, or different components may be used.
It should be noted that, since the steps in the above-mentioned planar fusion positioning method shown in fig. 2 or fig. 7 are implemented when the processor 901 of the electronic device executes the computer program, the embodiments of the planar fusion positioning method shown in fig. 2 or fig. 7 are applicable to the electronic device, and the same or similar beneficial effects can be achieved.
The embodiment of the application also provides a computer storage medium (Memory), which is a memory device in the electronic device and is used for storing programs and data. It will be appreciated that the computer storage medium here may include both a storage medium built into the terminal and an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores the operating system of the terminal. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor 901. The computer storage medium may be a high-speed RAM, or a non-volatile memory such as at least one magnetic disk memory; it may also be at least one computer storage medium located remotely from the aforementioned processor 901. In one embodiment, the one or more instructions stored in the computer storage medium may be loaded and executed by the processor 901 to implement the corresponding steps of the planar fusion positioning method described above.
The computer program of the computer storage medium may illustratively include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
It should be noted that, since the steps in the above-mentioned planar fusion positioning method are implemented when the computer program of the computer storage medium is executed by the processor, all embodiments of the above-mentioned planar fusion positioning method are applicable to the computer storage medium, and the same or similar beneficial effects can be achieved.
The embodiment of the application also provides a computer program product, wherein the computer program product comprises a computer program, and the computer program is operable to cause a computer to execute the steps in the planar fusion positioning method. The computer program product may be a software installation package.
The embodiments of the present application have been described in detail above; specific examples have been used herein to explain the principles and implementations of the present application, and the above description of the embodiments is intended only to help in understanding the method and core ideas of the present application. Meanwhile, those skilled in the art may, in accordance with the ideas of the present application, make changes to the specific implementations and the scope of application. In view of the foregoing, the content of this description should not be construed as limiting the present application.

Claims (9)

1. A planar fusion positioning method, the method comprising:
constructing a sparse plane by using a sparse point cloud obtained based on a currently input image frame, and acquiring first association information of a target three-dimensional grid generated by a front end based on the image frame and the sparse plane;
obtaining a first point-plane constraint condition according to the first association information and a current first pose of a carrier;
acquiring an optimized value of the first pose according to the first point-plane constraint condition, and a reprojection constraint condition and a depth constraint condition which are obtained based on the image frame and a previous frame of the image frame;
wherein, before the constructing a sparse plane using a sparse point cloud obtained based on the currently input image frame, the method further comprises:
uniformly sampling the image frame to obtain at least one two-dimensional feature point;
determining points of the at least one two-dimensional feature point in a depth map corresponding to the image frame;
Mapping points of the at least one two-dimensional feature point in the depth map corresponding to the image frame to a three-dimensional space to obtain the sparse point cloud;
The acquiring the first association information of the target three-dimensional grid generated by the front end based on the image frame and the sparse plane comprises the following steps:
acquiring a difference value between the direction of the target three-dimensional grid and the direction of the sparse plane;
And under the condition that the difference value is smaller than or equal to a preset value and at least one three-dimensional point forming the target three-dimensional grid is on the sparse plane, associating the target three-dimensional grid with the sparse plane to obtain the first association information.
2. The method of claim 1, wherein the obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier comprises:
acquiring second association information between a historical sparse plane, constructed based on a historical sparse point cloud obtained from the previous frame, and a historical three-dimensional grid generated by the front end based on the previous frame, together with a second pose of the carrier at the previous frame;
and constructing a point-plane optimization model using the first association information, the first pose, the second association information and the second pose, to obtain the first point-plane constraint condition.
3. The method according to claim 1 or 2, wherein the acquiring the optimized value of the first pose according to the first point-plane constraint condition and the reprojection constraint condition and the depth constraint condition obtained based on the image frame and the previous frame of the image frame comprises:
acquiring at least one second point-plane constraint condition, wherein the at least one second point-plane constraint condition comprises the first point-plane constraint condition, and the point-plane constraint conditions in the at least one second point-plane constraint condition other than the first point-plane constraint condition are obtained based on target association information, the target association information being association information obtained based on a preset number of first historical frames of the image frame;
and optimizing the first pose, and taking a pose estimate satisfying the at least one second point-plane constraint condition, the reprojection constraint condition and the depth constraint condition as the optimized value.
4. A method according to claim 3, characterized in that the method further comprises:
and marginalizing a point-plane constraint condition obtained based on a second historical frame preceding the first historical frames.
5. The method of claim 1, wherein after obtaining the at least one two-dimensional feature point, the method further comprises:
constructing a two-dimensional grid using the at least one two-dimensional feature point;
wherein the constructing a sparse plane using a sparse point cloud obtained based on the currently input image frame comprises:
constructing, based on the correspondence between the two-dimensional feature points in the two-dimensional grid and the three-dimensional points in the sparse point cloud, a three-dimensional grid corresponding to the two-dimensional grid using those three-dimensional points;
and clustering the three-dimensional grid based on the first pose to obtain the sparse plane.
6. The method of claim 5, wherein the uniformly sampling the image frame to obtain at least one two-dimensional feature point comprises:
Removing edge pixel points of the image frame;
calculating a sampling distance according to the size of the image frame and a preset sampling number;
and sampling the pixel points of the image frame that are not removed, according to the sampling distance, to obtain the at least one two-dimensional feature point.
7. A planar fusion positioning device, comprising an acquisition unit and a processing unit, wherein:
The acquisition unit is used for constructing a sparse plane using a sparse point cloud obtained based on a currently input image frame, and acquiring first association information of a target three-dimensional grid generated by a front end based on the image frame and the sparse plane;
The processing unit is used for obtaining a first point-plane constraint condition according to the first association information and the current first pose of the carrier;
The processing unit is further configured to obtain an optimized value of the first pose according to the first point-plane constraint condition, and a reprojection constraint condition and a depth constraint condition obtained based on the image frame and a previous frame of the image frame;
The processing unit is also used for uniformly sampling the image frame to obtain at least one two-dimensional feature point;
determining points of the at least one two-dimensional feature point in a depth map corresponding to the image frame;
Mapping points of the at least one two-dimensional feature point in the depth map corresponding to the image frame to a three-dimensional space to obtain the sparse point cloud;
in terms of acquiring first association information of the target three-dimensional grid generated by the front end based on the image frame and the sparse plane, the processing unit is specifically configured to:
acquiring a difference value between the direction of the target three-dimensional grid and the direction of the sparse plane;
And under the condition that the difference value is smaller than or equal to a preset value and at least one three-dimensional point forming the target three-dimensional grid is on the sparse plane, associating the target three-dimensional grid with the sparse plane to obtain the first association information.
8. An electronic device comprising an input device and an output device, further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by the processor to perform the method of any one of claims 1-6.
9. A computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the method of any one of claims 1-6.
CN202111059313.0A 2021-09-09 2021-09-09 Plane fusion positioning method, device, electronic equipment and storage medium Active CN113808196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111059313.0A CN113808196B (en) 2021-09-09 2021-09-09 Plane fusion positioning method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111059313.0A CN113808196B (en) 2021-09-09 2021-09-09 Plane fusion positioning method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113808196A (en) 2021-12-17
CN113808196B (en) 2025-02-21

Family

ID=78940638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111059313.0A Active CN113808196B (en) 2021-09-09 2021-09-09 Plane fusion positioning method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113808196B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116148883B (en) * 2023-04-11 2023-08-08 锐驰智慧科技(安吉)有限公司 SLAM method, device, terminal equipment and medium based on sparse depth image
CN116932794A (en) * 2023-06-27 2023-10-24 盯盯拍(深圳)技术股份有限公司 Three-dimensional information acquisition and storage method, device, system and medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256718A (en) * 2021-05-27 2021-08-13 浙江商汤科技开发有限公司 Positioning method and device, equipment and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005015A1 (en) * 2016-07-01 2018-01-04 Vangogh Imaging, Inc. Sparse simultaneous localization and matching with unified tracking
CN107830854A (en) * 2017-11-06 2018-03-23 深圳精智机器有限公司 Visual positioning method based on ORB sparse point cloud and two-dimensional code
CN109903330B (en) * 2018-09-30 2021-06-01 华为技术有限公司 Method and device for processing data
CN110389348B (en) * 2019-07-30 2020-06-23 四川大学 Positioning and Navigation Method and Device Based on LiDAR and Binocular Camera
CN110533720B (en) * 2019-08-20 2023-05-02 西安电子科技大学 Semantic SLAM system and method based on joint constraints
CN113424232B (en) * 2019-12-27 2024-03-15 深圳市大疆创新科技有限公司 Three-dimensional point cloud map construction method, system and equipment
CN111664843A (en) * 2020-05-22 2020-09-15 杭州电子科技大学 SLAM-based intelligent storage checking method
CN111795686B (en) * 2020-06-08 2024-02-02 南京大学 A method for mobile robot positioning and mapping
CN111862218B (en) * 2020-07-29 2021-07-27 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN112197773B (en) * 2020-09-30 2022-11-11 江苏集萃未来城市应用技术研究所有限公司 Visual and laser positioning mapping method based on plane information
CN112396656B (en) * 2020-11-24 2023-04-07 福州大学 Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN112927251B (en) * 2021-03-26 2022-10-14 中国科学院自动化研究所 Morphology-based scene dense depth map acquisition method, system and device
CN113190120B (en) * 2021-05-11 2022-06-24 浙江商汤科技开发有限公司 Pose acquisition method and device, electronic equipment and storage medium
CN113361365B (en) * 2021-05-27 2023-06-23 浙江商汤科技开发有限公司 Positioning method, positioning device, positioning equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256718A (en) * 2021-05-27 2021-08-13 浙江商汤科技开发有限公司 Positioning method and device, equipment and storage medium

Also Published As

Publication number Publication date
CN113808196A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
EP3505866B1 (en) Method and apparatus for creating map and positioning moving entity
JP6745328B2 (en) Method and apparatus for recovering point cloud data
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
WO2020119684A1 (en) 3d navigation semantic map update method, apparatus and device
CN110263209B (en) Method and apparatus for generating information
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN111459269B (en) Augmented reality display method, system and computer readable storage medium
CN113361365B (en) Positioning method, positioning device, positioning equipment and storage medium
KR20230006628A (en) method and device for processing image, electronic equipment, storage medium and computer program
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
CN113808196B (en) Plane fusion positioning method, device, electronic equipment and storage medium
CN117132649A (en) Artificial intelligence integrated Beidou satellite navigation ship video positioning method and device
CN115349140A (en) Efficient positioning based on multiple feature types
CN113592015A (en) Method and device for positioning and training feature matching network
CN115409949A (en) Model training method, perspective image generation method, device, equipment and medium
CN114998433A (en) Pose calculation method and device, storage medium and electronic equipment
JPWO2023112083A5 (en)
CN111784579B (en) Mapping method and device
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN115329111B (en) Image feature library construction method and system based on point cloud and image matching
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN120014579A (en) SLAM method based on radar odometer and visual feature point depth filter
CN113470181B (en) Plane construction method, device, electronic equipment and storage medium
CN111210297B (en) Method and device for dividing boarding points
CN115222815A (en) Obstacle distance detection method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant