
CN112288817A - Image-based three-dimensional reconstruction processing method and device - Google Patents

Image-based three-dimensional reconstruction processing method and device

Info

Publication number
CN112288817A
CN112288817A
Authority
CN
China
Prior art keywords
image
matching
images
subsequent
sequence
Prior art date
Legal status
Granted
Application number
CN202011293208.9A
Other languages
Chinese (zh)
Other versions
CN112288817B (en)
Inventor
宁海宽
李姬俊男
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011293208.9A
Publication of CN112288817A
Application granted
Publication of CN112288817B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image-based three-dimensional reconstruction method and apparatus, a computer-readable medium, and an electronic device. The method includes: capturing an image sequence and obtaining positioning information; selecting, in the image sequence, multiple consecutive frames of subsequent images that follow a first image, and performing feature-point matching between the first image and each subsequent image to screen out the subsequent images that match the first image, generating first image matching pairs and establishing a matching relationship between the first image and the subsequent images; using the positioning information to screen, in the image sequence, a second image set whose positions match that of the first image, and performing feature-point matching between the first image and each image in the second image set to screen out the second images that match the first image, generating second image matching pairs and establishing a matching relationship between the first image and the second images; constructing a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image; and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.


Description

Image-based three-dimensional reconstruction processing method and device

Technical Field

The present disclosure relates to the technical field of image processing, and in particular to an image-based three-dimensional reconstruction method, an image-based three-dimensional reconstruction apparatus, a computer-readable medium, and an electronic device.

Background

In the field of computer vision, three-dimensional reconstruction is a major research focus. Commonly used three-dimensional reconstruction algorithms in the prior art include SFM (Structure from Motion), the Dynamic Fusion algorithm, the Bundle Fusion algorithm, and so on. The related art mostly screens matches based on image features, and therefore relies heavily on the robustness of the image feature descriptors. For locally similar textures, however, image features cannot reliably handle scene reconstruction. For example, when the same signboard or logo appears in different places, false matches may occur; that is, images that do not belong to the same place are associated with each other, which severely degrades the accuracy of the three-dimensional reconstruction and may even cause mapping to fail.

It should be noted that the information disclosed in the Background section above is only intended to enhance the understanding of the background of the present disclosure, and therefore may contain information that does not constitute prior art known to a person of ordinary skill in the art.

Summary of the Invention

The present disclosure provides an image-based three-dimensional reconstruction method, an image-based three-dimensional reconstruction apparatus, a computer-readable medium, and an electronic device, which can effectively avoid image mismatches and improve the accuracy of the reconstructed map.

Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part by practice of the present disclosure.

According to a first aspect of the present disclosure, an image-based three-dimensional reconstruction method is provided, including:

capturing an image sequence, and obtaining positioning information for each image;

selecting, in the image sequence, a first image and multiple consecutive frames of subsequent images that follow the first image, performing feature-point matching between the first image and each of the subsequent images to screen out the subsequent images that match the first image, generating first image matching pairs and establishing a matching relationship between the first image and the subsequent images; and

using the positioning information to screen, in the image sequence, a second image set whose positions match that of the first image, performing feature-point matching between the first image and each image in the second image set to screen out the second images that match the first image, generating second image matching pairs and establishing a matching relationship between the first image and the second images;

constructing a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image; and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.

According to a second aspect of the present disclosure, an image-based three-dimensional reconstruction apparatus is provided, including:

a data acquisition module, configured to capture an image sequence and obtain positioning information for each image;

a first image matching module, configured to select, in the image sequence, a first image and multiple consecutive frames of subsequent images that follow the first image, perform feature-point matching between the first image and each of the subsequent images to screen out the subsequent images that match the first image, generate first image matching pairs, and establish a matching relationship between the first image and the subsequent images; and

a second image matching module, configured to use the positioning information to screen, in the image sequence, a second image set whose positions match that of the first image, perform feature-point matching between the first image and each image in the second image set to screen out the second images that match the first image, generate second image matching pairs, and establish a matching relationship between the first image and the second images;

a reconstruction module, configured to construct a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image, and to perform three-dimensional reconstruction of the image sequence based on the global matching relationship.

According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the above image-based three-dimensional reconstruction method.

According to a fourth aspect of the present disclosure, an electronic device is provided, including:

one or more processors; and

a storage device, configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above image-based three-dimensional reconstruction method.

In the image-based three-dimensional reconstruction method provided by an embodiment of the present disclosure, an image sequence is first captured, and the positioning information of each image is recorded at capture time; the multiple consecutive frames that follow each image are used to screen matching images and establish a matching relationship; after this matching relationship is established, the positioning information corresponding to each image is used to screen again for position-matched images, and the feature matching relationships between the position-matched images are computed, so that loop closure detection can be achieved from the positioning information. Non-sequential matching is thus performed only on images near the same spatial position, which greatly reduces false loop-closure matches and improves mapping accuracy and robustness.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.

Brief Description of the Drawings

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and other drawings can be derived from them by a person of ordinary skill in the art without creative effort.

FIG. 1 schematically shows a flowchart of an image-based three-dimensional reconstruction method in an exemplary embodiment of the present disclosure;

FIG. 2 schematically shows a flowchart of an image acquisition method in an exemplary embodiment of the present disclosure;

FIG. 3 schematically shows a flowchart of an image matching method in an exemplary embodiment of the present disclosure;

FIG. 4 schematically shows the composition of an image-based three-dimensional reconstruction apparatus in an exemplary embodiment of the present disclosure;

FIG. 5 schematically shows the structure of an electronic device in an exemplary embodiment of the present disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments, however, can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated descriptions of them are omitted. Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

In computer vision, SFM technology is commonly used to recover the spatial structure of a three-dimensional environment. A traditional SFM pipeline generally proceeds through feature extraction and matching, computation of initial matching pairs and a point cloud, and bundle adjustment, then repeatedly adds new image frame data and performs bundle adjustment again according to a certain strategy, so as to achieve three-dimensional reconstruction. For serialized image input, two methods are common in the feature matching stage: sequential matching, and global brute-force matching. In a large-scale environment, sequential matching cannot form loop closures because the accumulated error grows too large. To ensure that the final map remains consistent with the actual scene in a large-scale environment, brute-force search is often used to compute feature matches over all images, adding loop-closure image pairs and reducing the accumulated error; however, this approach is computationally expensive and time-consuming. Moreover, existing methods still screen matches based on image features and therefore rely on the robustness of the image feature descriptors; for locally similar textures, image features cannot reliably handle scene reconstruction. For example, when the same signboard or logo appears in different places, false matches may occur, that is, images that do not belong to the same place are associated with each other, which severely degrades the accuracy of the three-dimensional reconstruction and may even cause mapping to fail.

In view of the above shortcomings and deficiencies of the prior art, this exemplary embodiment provides an image-based three-dimensional reconstruction method. Referring to FIG. 1, the method may include the following steps:

S11: capture an image sequence, and obtain positioning information for each image;

S12: select, in the image sequence, a first image and multiple consecutive frames of subsequent images that follow the first image, and perform feature-point matching between the first image and each of the subsequent images to screen out the subsequent images that match the first image, generating first image matching pairs and establishing a matching relationship between the first image and the subsequent images; and

S13: use the positioning information to screen, in the image sequence, a second image set whose positions match that of the first image, and perform feature-point matching between the first image and each image in the second image set to screen out the second images that match the first image, generating second image matching pairs and establishing a matching relationship between the first image and the second images;

S14: construct a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image, and perform three-dimensional reconstruction of the image sequence based on the global matching relationship.

In the three-dimensional reconstruction method provided by this exemplary embodiment, sequential matching is performed on the one hand: the multiple consecutive frames that follow each image are used to screen matching images and establish a matching relationship. On the other hand, after this matching relationship is established, the positioning information corresponding to each image is used to screen again for position-matched images, and the feature matching relationships between the position-matched images are computed, so that loop closure detection can be achieved from the positioning information. This avoids loop closure detection based purely on image features in the traditional sense: non-sequential matching, i.e. loop closure detection, is performed only on images near the same spatial position, which greatly reduces false loop-closure matches and improves mapping accuracy and robustness. Especially for scenes where image features are unreliable (including but not limited to scenes with many repeated textures, illumination changes, or weak textures), the accuracy of the reconstructed map can be greatly improved.
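As a rough illustration of step S14, merging the two kinds of matching pairs into one global matching relationship can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function name and the list-of-index-pairs representation are assumptions.

```python
# Hypothetical sketch of step S14: merge the sequential matching pairs
# (from step S12) and the position-based matching pairs (from step S13)
# into one global matching relation used for reconstruction.
def build_global_matches(seq_pairs, pos_pairs):
    """Each input is a list of (image_i, image_j) index pairs; the result
    maps each image index to the set of image indices it matches."""
    relation = {}
    for a, b in list(seq_pairs) + list(pos_pairs):
        relation.setdefault(a, set()).add(b)
        relation.setdefault(b, set()).add(a)
    return relation
```

The reconstruction stage would then consume `relation` as its view of which image pairs share verified feature correspondences.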

Hereinafter, each step of the image-based three-dimensional reconstruction method in this exemplary embodiment will be described in more detail with reference to the accompanying drawings and embodiments.

In step S11, an image sequence is captured, and the positioning information of each image is obtained.

In this exemplary embodiment, the above method may be applied to a terminal device with a shooting function, such as a smart terminal device equipped with a rear camera, e.g. a mobile phone or a tablet computer; for example, it may be applied to collecting images and indoor positioning information of an indoor environment. Specifically, referring to FIG. 2, step S11 may include:

step S111: in response to an image capture instruction, activate a monocular camera to capture RGB images; and

step S112: invoke an ultra-wideband driver to obtain the position information at the time each RGB image is captured, and configure that position information as the positioning information of the RGB image.

For example, the user may trigger an image capture instruction in the interactive interface of an application on the terminal device. After the application receives the user's image capture instruction, it can invoke and activate the monocular camera through an instruction port, so that the monocular camera captures RGB images of the current indoor environment or scene according to certain rules, yielding the corresponding image sequence. The shooting rules may include, for example, the moving speed of the terminal device, the image capture frequency, the shooting angle, and specific shooting parameters.

In addition, while the monocular camera is invoked for RGB image capture, an Ultra Wide Band (UWB) driver can be invoked synchronously according to the image capture instruction, so that UWB positioning information is collected at the same time as the images. For example, a UWB chip may be installed in the terminal device; the user may pre-configure the UWB driver to collect positioning information at the same frequency as image capture, or the collection frequency may be configured according to the requirements of the UWB driver itself. Specifically, each piece of collected positioning information may be marked with its collection time.

In step S12, a first image and multiple consecutive frames of subsequent images that follow the first image are selected in the image sequence, and feature-point matching is performed between the first image and each of the subsequent images to screen out the subsequent images that match the first image, generating first image matching pairs and establishing a matching relationship between the first image and the subsequent images.

In this exemplary embodiment, after the collection of images and positioning information is completed, the image sequence may be processed offline. Specifically, each frame in the image sequence may be selected in order as the first image; at the same time, the k consecutive frames following the first image are selected as its subsequent images, where k is a positive integer, for example 5, 7, 8, 9, 10, or 11; the present disclosure does not specifically limit the value of k. Feature matching can then be performed between the first image and the corresponding set of subsequent images, thereby sequentially computing the matching relationships between the images of the sequence.
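The sliding-window selection described above can be sketched as follows. This is a minimal sketch under the assumption that images are addressed by their index in the sequence; the function name is illustrative, and k is the window size chosen by the implementer.

```python
def sequential_pairs(num_images, k):
    """For each frame i taken as the first image, pair it with the k
    consecutive frames that follow it in the sequence (step S12)."""
    pairs = []
    for i in range(num_images):
        # Subsequent images are frames i+1 .. i+k, clipped at sequence end.
        for j in range(i + 1, min(i + 1 + k, num_images)):
            pairs.append((i, j))
    return pairs
```

Each returned `(i, j)` pair is then a candidate for the feature-point matching described below.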

Specifically, performing feature-point matching between the first image and the subsequent images to screen out the subsequent images that match the first image and to establish the matching relationship between them, as described in step S12 and shown in FIG. 3, may include:

step S121: perform feature extraction on the first image and the subsequent images to obtain two-dimensional feature points and the feature descriptor corresponding to each two-dimensional feature point;

step S122: use the feature descriptors to compute the distances between feature points of the first image and a subsequent image, and build feature-point matching pairs between the two images from the feature points whose distance is less than a preset threshold;

step S123: construct a corresponding fundamental matrix from the first image and the subsequent image, and use a random sample consensus algorithm based on the fundamental matrix to screen the feature-point matching pairs; and

step S124: if the number of matched feature points after screening is greater than a preset threshold, determine that the first image matches the subsequent image and establish a matching relationship between the first image and the subsequent image.

Specifically, for the image sequence, sequential matching can start from the first frame. For example, the first frame of the image sequence is initially taken as the first image, and the six consecutive frames that follow it are selected as the subsequent images. For the selected first image and the multiple frames of subsequent images, the matching relationship between the first image and each frame of subsequent images can be computed separately.

Specifically, feature points can be extracted from the first image and each subsequent image to compute the corresponding two-dimensional feature points and their feature descriptors. The feature descriptors are used to compute the distance between feature points across the two images; if the distance between two feature points is less than a preset threshold, the two feature points are determined to constitute a feature-point matching pair. In this way, multiple pairs of two-dimensional feature-point matches can be obtained between every two images.

After obtaining all feature-point matching pairs between the two images, the RANSAC (Random Sample Consensus) algorithm is used to screen the feature-point matching pairs between the two images and delete mismatched pairs. Specifically, a fundamental matrix can first be constructed from the feature-point information of the two images, and RANSAC screening is then applied to the matching point pairs using the fundamental matrix. If, after screening, the number of matched feature-point pairs between the two images is still greater than a preset threshold, the two images are determined to match and constitute an image matching pair; the matching relationship of the two images is saved in a database, and the screened feature-point matching pairs are recorded. The RANSAC screening of feature-point matches can be completed by conventional methods, so its specific process is not repeated here.
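The threshold-based descriptor matching of step S122 can be sketched as follows. This is a minimal stdlib-only sketch under assumed names; real descriptors (e.g. SIFT vectors) would be high-dimensional, and the RANSAC screening of step S123 (estimating a fundamental matrix and discarding outlier pairs) is deliberately omitted here; in practice it would run on the pairs this function returns.

```python
import math

def match_descriptors(desc_a, desc_b, dist_thresh):
    """desc_a, desc_b: lists of equal-length descriptor vectors for two
    images; returns (i, j) index pairs where descriptor i of image A has
    its nearest neighbour j in image B at an L2 distance below
    dist_thresh (step S122)."""
    pairs = []
    for i, da in enumerate(desc_a):
        # Distance from descriptor i of image A to every descriptor of B.
        dists = [math.dist(da, db) for db in desc_b]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] < dist_thresh:
            pairs.append((i, j))
    return pairs
```

In a real pipeline this brute-force loop would typically be replaced by an approximate nearest-neighbour index for speed; the thresholding logic stays the same.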

By traversing the image sequence in the above manner, the first round of matching for each image in the sequence is completed, yielding the image matching results and matching relationships corresponding to each image.

Alternatively, in some exemplary embodiments of the present disclosure, feature extraction may be performed on all images in the sequence as soon as image capture is completed, computing the two-dimensional feature points and corresponding feature descriptors of each image and storing them in a preset database. For example, the SIFT (Scale-Invariant Feature Transform) algorithm can be used for feature extraction.

In step S13, the positioning information is used to screen, from the image sequence, a second image set whose positions match that of the first image; feature-point matching is performed between the first image and each image in the second image set to screen out the second images that match the first image, generating second image matching pairs and establishing matching relationships between the first image and the second images.

In this exemplary embodiment, while the image sequence is being matched using feature information, a second round of matching may also be performed using positional relationships. For example, the positioning information can be used to match the images in the sequence against one another. Specifically, for the first image described above, the positioning information can be used to screen out the corresponding second image set either while the first round of feature-based matching is in progress or after it has been completed.

In this exemplary embodiment, specifically, using the positioning information to screen the second image set whose positions match that of the first image may include:

Step S131: respectively computing the distance between the positioning information of the first image and the positioning information of each other image in the image sequence;

Step S132: if the distance is less than a preset distance threshold, adding the corresponding image to the second image set.

Specifically, images captured at the same position have a matching relationship over the whole sampling period, even if they were captured at different moments. For the selected first image, its positioning information P can be read from the database, together with the positioning information Q of each other image in the sequence, and the Euclidean distance between P and each Q is computed respectively.

If the distance between two frames is less than the preset distance threshold, the two frames can be considered to have been captured at the same location and may form an image matching pair, so that image is added to the second image set corresponding to the first image. If the distance between two frames is large, no corresponding matching relationship is considered to exist. By repeating the above steps and traversing the image sequence, the second image set corresponding to each image can be obtained.
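Steps S131 and S132 can be sketched in a few lines. The coordinates, image identifiers, and the threshold value below are illustrative assumptions; in the patent's setting they would come from the UWB positioning information stored in the database.

```python
import math

def build_second_image_set(first_pos, others, dist_threshold=2.0):
    """Screen candidate loop-closure images by position (steps S131/S132).

    first_pos: (x, y) positioning info of the first image.
    others: list of (image_id, (x, y)) for every other image in the sequence.
    """
    second_set = []
    for image_id, pos in others:
        dist = math.dist(first_pos, pos)   # Euclidean distance (S131)
        if dist < dist_threshold:          # threshold test (S132)
            second_set.append(image_id)
    return second_set

first_pos = (0.0, 0.0)
others = [("img_02", (0.5, 0.5)),   # captured near the first image
          ("img_17", (1.5, 0.0)),   # also within the threshold
          ("img_30", (8.0, 3.0))]   # far away: no candidate match
second_set = build_second_image_set(first_pos, others)
```

The images that survive this positional screen then go through the feature-based matching of steps S121 to S124 to confirm or reject each candidate pair.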

In this exemplary embodiment, after the second image set corresponding to each image is obtained, the methods of steps S121 to S124 above can be used to perform feature matching again between the first image and each image in the second image set using the feature information, and to compute the corresponding matching relationships; that is, the feature-point matching pairs between the two images are computed first, and the matching pairs are then screened to obtain the image-pair matching result.

If a matching relationship is established through this second round of matching, a loop-closure match is considered to have occurred. At this point, both the first and the second rounds of matching are complete.

Alternatively, in other exemplary embodiments of the present disclosure, the second round of matching may also be performed using the preceding images of the first image described above. Specifically, after the first image matching pairs are generated and the matching relationships between the first image and its subsequent images are established, the method further includes:

Step S21: selecting the frames preceding the first image, and using the positioning information to screen out the preceding images whose distance from the first image is less than a preset distance threshold so as to construct a preceding-image set; performing feature-point matching between the first image and each preceding image in the set to screen out the preceding images that match the first image, generating third image matching pairs and establishing matching relationships between the first image and the preceding images;

Step S22: traversing the image sequence to obtain the matching relationships between each image and its preceding and/or subsequent images so as to construct a global matching relationship, and performing three-dimensional reconstruction on the image sequence based on the global matching relationship.

Specifically, for the first image selected from the image sequence, the frames of the sequence preceding it can be selected, and the positioning information is then used to compute the Euclidean distance between the first image and each corresponding preceding frame. The first frame of the image sequence, which has no preceding frames, is excluded here.

Specifically, for each frame and all of its corresponding preceding images, the positioning information can first be used to compute the Euclidean distances between the image positions, and a first round of screening is performed by comparing these distances against the preset distance threshold, yielding third image matching pairs with possible matching relationships. The feature information of each image is then used to perform a second round of screening on these initially matched third image matching pairs: as in steps S121 to S124 above, the feature information is used to perform feature matching again between the images in each third image matching pair and to compute the corresponding matching relationships; that is, the feature-point matching pairs between the two images are computed first, and these pairs are then screened to obtain the image-pair matching result. If the matching succeeds and a matching relationship is established, a loop-closure match is considered to have occurred. At this point, both rounds of matching are complete.

For each frame in the image sequence, performing the first round of matching against the consecutive frames that follow it, and the second round of matching against the frames that precede it, effectively reduces the resource consumption of the image matching process while maximally preserving the accuracy of the matching relationships and ensuring the validity of loop-closure matching.

In step S14, a global matching relationship is constructed based on the image matching relationships of the first and second image matching pairs corresponding to the first image, and three-dimensional reconstruction is performed on the image sequence based on the global matching relationship.

In this exemplary embodiment, the methods of steps S12 and S13 above can be used to traverse the image sequence to obtain the matching relationships corresponding to each frame. Based on these matching relationships, a global matching relationship for the image sequence can be constructed, and three-dimensional reconstruction can be performed using the global matching relationship together with the image sequence. For example, the SfM (Structure from Motion) algorithm can be used for three-dimensional reconstruction. Generally speaking, when the SfM algorithm is used for three-dimensional reconstruction, its input can be a two-dimensional image sequence, and the camera parameters can be inferred from the global matching relationship described above. For example, the SfM pipeline may proceed as follows: focal length information is first extracted from each image (needed later to initialize bundle adjustment, BA); feature extraction algorithms such as SIFT are then used to extract image features, and a k-d tree model is used to compute the Euclidean distances between the feature points of image pairs for feature matching, so as to find the image pairs whose number of matched feature points meets the requirement. For each image matching pair, the epipolar geometry is computed, the fundamental matrix F is estimated, and the matching pair is refined through RANSAC optimization. If a feature point can be passed along a chain of such matching pairs and is detected throughout, it forms a track. The structure-from-motion stage then begins; the key first step is to select a good image pair to initialize the whole BA process. A first BA is performed on the two initially selected images, and new images are then added iteratively, each followed by a new BA, until no suitable images remain to be added and BA ends. This yields the estimated camera parameters and the scene geometry, i.e. a sparse 3D point cloud. The bundle adjustment between two images uses the sparse bundle adjustment (SBA) package; sparse bundle adjustment is a nonlinear least-squares algorithm for optimizing the objective function.
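The track-forming step of the pipeline above, in which a feature point passed along a chain of verified matching pairs becomes one track, can be sketched with a small union-find structure. The image names, feature indices, and data layout below are illustrative assumptions, not the patent's own data format.

```python
def build_tracks(pairwise_matches):
    """Chain pairwise feature matches into tracks (union-find sketch).

    pairwise_matches: list of ((img_i, feat_i), (img_j, feat_j)) pairs,
    i.e. RANSAC-verified correspondences between image pairs. Observations
    linked through any chain of matches end up in the same track.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for obs_a, obs_b in pairwise_matches:
        union(obs_a, obs_b)

    # Group every observation by the root of its set.
    tracks = {}
    for obs in parent:
        tracks.setdefault(find(obs), set()).add(obs)
    return sorted(sorted(t) for t in tracks.values())

matches = [
    (("img0", 3), ("img1", 7)),    # the same scene point observed in
    (("img1", 7), ("img2", 1)),    # three consecutive images
    (("img0", 9), ("img2", 4)),    # a second, shorter track
]
tracks = build_tracks(matches)
```

Each resulting track corresponds to one candidate 3D point; tracks seen in enough views are the points triangulated and refined by bundle adjustment in the later SfM stages.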

Of course, in other exemplary embodiments of the present disclosure, other algorithms may also be used for three-dimensional reconstruction after the global matching relationship has been constructed, for example deep-learning-based depth estimation and structure reconstruction algorithms.

Based on the above, in other exemplary embodiments of the present disclosure, when the image sequence is acquired, the above method may further include:

Step S31: analyzing the matched feature points between the images to obtain pose information and the three-dimensional coordinates of the feature points;

Step S32: performing pose correction on the camera based on the pose information corresponding to the matched images and the three-dimensional coordinates of the feature points.

Specifically, after the global matching relationship is constructed, or during the acquisition of the image sequence, feature-point matching can be performed between any two adjacent images in the sequence and the matched feature points can be analyzed, so as to compute the position and pose information of the camera and to solve for the coordinates of the two-dimensional image feature points in three-dimensional space. Based on this information, correction information for the camera pose can be generated, so that the camera pose is corrected in real time before subsequent images are captured, ensuring that the pose information of the subsequently captured images remains consistent.

Based on the above, in other exemplary embodiments of the present disclosure, after the global matching relationship is constructed, the above method may further include: verifying the global matching relationship according to the acquisition time of each image, so as to delete erroneous matching relationships.

Specifically, when the monocular camera is used for RGB image acquisition, the acquisition time of each frame can also be recorded. After the global matching relationship is obtained, each image matching pair can be checked again according to the image acquisition times; if the acquisition times are close, whether the positioning information is also close can be judged again. The image matching pairs in the global matching result can thus be verified against the acquisition times and position information of the images.
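One way this time-based verification could look is sketched below. The rule implemented, dropping a pair whose acquisition times are close but whose positions are far apart, is one plausible reading of the check described above, and the identifiers, timestamps, and thresholds are illustrative assumptions.

```python
import math

def verify_matches(matches, meta, time_eps=5.0, dist_eps=2.0):
    """Re-check image matching pairs against acquisition time and position.

    matches: list of (img_a, img_b) pairs from the global matching result.
    meta: {img_id: (timestamp_seconds, (x, y))} recorded at capture time.
    A pair captured within time_eps seconds is kept only if its positions
    are also within dist_eps of each other.
    """
    verified = []
    for a, b in matches:
        t_a, p_a = meta[a]
        t_b, p_b = meta[b]
        if abs(t_a - t_b) < time_eps and math.dist(p_a, p_b) >= dist_eps:
            continue                 # close in time but far apart: drop
        verified.append((a, b))
    return verified

meta = {
    "i1": (0.0, (0.0, 0.0)),
    "i2": (1.0, (0.5, 0.5)),   # consistent with i1 in time and position
    "i3": (2.0, (9.0, 9.0)),   # close in time to i1, but far away
}
verified = verify_matches([("i1", "i2"), ("i1", "i3")], meta)
```

The pair ("i1", "i3") is removed as an erroneous matching relationship, while the consistent pair survives into the final global matching relationship.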

The image-based three-dimensional reconstruction method provided by the embodiments of the present disclosure can be applied to offline three-dimensional reconstruction processes, and to indoor and outdoor positioning and navigation solutions such as AR navigation. For scenes where image features are unreliable, such as scenes with many repeated textures, illumination changes, or weak textures, it can greatly improve map accuracy; at the same time, because this method incorporates the positioning information of the images into the second round of matching, image mismatches are avoided and the success rate of mapping can be greatly increased. Moreover, since the loop-closure images are determined during the data acquisition stage, global brute-force matching during image matching retrieval is avoided, greatly saving computation time. Unlike traditional methods, this approach makes full use of other sensors to obtain approximate global position information and directly determine the positions of loop-closure frames, yielding a more accurate global image connectivity graph and providing good input for the subsequent steps of three-dimensional reconstruction. The scheme is also easy to operate, can serve as a supplement to existing three-dimensional reconstruction schemes, and can be combined with other loop-closure screening methods, giving it considerable advantages over other approaches.

It should be noted that the above drawings are merely schematic illustrations of the processes included in the method according to the exemplary embodiments of the present invention, and are not intended to be limiting. It is easy to understand that the processes shown in the drawings do not indicate or limit their chronological order. In addition, it is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.

Further, referring to FIG. 4, this exemplary embodiment also provides an image-based three-dimensional reconstruction apparatus 40, including a data acquisition module 401, a first image matching module 402, a second image matching module 403 and a reconstruction module 404. Specifically:

The data acquisition module 401 may be configured to acquire an image sequence and obtain the positioning information of each image.

The first image matching module 402 may be configured to select, from the image sequence, a first image and a plurality of consecutive subsequent frames following it, perform feature-point matching between the first image and each of the subsequent images to screen out the subsequent images that match the first image, generate first image matching pairs, and establish matching relationships between the first image and the subsequent images.

The second image matching module 403 may be configured to use the positioning information to screen, from the image sequence, a second image set whose positions match that of the first image, perform feature-point matching between the first image and each image in the second image set to screen out the second images that match the first image, generate second image matching pairs, and establish matching relationships between the first image and the second images.

The reconstruction module 404 may be configured to construct a global matching relationship based on the image matching relationships of the first and second image matching pairs corresponding to the first image, and to perform three-dimensional reconstruction on the image sequence based on the global matching relationship.

In an example of the present disclosure, the data acquisition module 401 may include an image acquisition unit and a positioning information acquisition unit (not shown in the figure). Specifically:

The image acquisition unit may be configured to activate a monocular camera to perform RGB image acquisition in response to an image acquisition instruction.

The positioning information acquisition unit may be configured to invoke an ultra-wideband (UWB) driver to obtain the position information at the time the RGB image is captured, and to configure that position information as the positioning information of the RGB image.

In an example of the present disclosure, the first image matching module 402 may also be configured to: perform feature extraction on the first image and the subsequent images to obtain two-dimensional feature points and the feature descriptor corresponding to each two-dimensional feature point; use the feature descriptors to compute the distances between the feature points of the first image and the subsequent images, and establish feature-point matching pairs between the first image and the subsequent images from the feature points whose distance is less than a preset threshold; construct the corresponding fundamental matrix based on the first image and the subsequent images, and screen the feature-point matching pairs with the RANSAC algorithm based on the fundamental matrix; and, if the number of screened feature-point matches is greater than a preset threshold, judge that the first image matches the subsequent image and establish the matching relationship between the first image and the subsequent image.
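The descriptor-distance step this module performs can be sketched as a nearest-neighbor search with a distance threshold. The 2-D descriptor vectors and the threshold below are illustrative (real SIFT descriptors are 128-dimensional), and the function name is an assumption, not the patent's terminology.

```python
import math

def match_descriptors(desc_a, desc_b, dist_threshold=0.5):
    """Build feature-point matching pairs from descriptor distances.

    desc_a, desc_b: lists of descriptor vectors for two images. For each
    feature in image A, the nearest descriptor in image B (by Euclidean
    distance) is accepted when the distance is below the preset threshold.
    """
    pairs = []
    for i, da in enumerate(desc_a):
        dists = [math.dist(da, db) for db in desc_b]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] < dist_threshold:
            pairs.append((i, j))     # (feature in A, feature in B)
    return pairs

desc_a = [(0.0, 0.0), (1.0, 1.0)]
desc_b = [(0.1, 0.0), (5.0, 5.0)]
pairs = match_descriptors(desc_a, desc_b)
```

The pairs produced here are the putative matches that the module then screens with the fundamental-matrix-based RANSAC step before deciding whether the two images form an image matching pair.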

In an example of the present disclosure, the second image matching module 403 may also be configured to respectively compute the distances between the positioning information of the first image and the positioning information of the other images in the image sequence and, if a distance is less than the preset distance threshold, to add the corresponding image to the second image set.

In an example of the present disclosure, the apparatus 40 may further include a third matching module (not shown in the figure).

The third matching module may be configured, after the first image matching pairs are generated and the matching relationships between the first image and the subsequent images are established, to: select the frames preceding the first image and use the positioning information to screen out the preceding images whose distance from the first image is less than a preset distance threshold so as to construct a preceding-image set; perform feature-point matching between the first image and each preceding image in the set to screen out the preceding images that match the first image, generate third image matching pairs, and establish matching relationships between the first image and the preceding images; and traverse the image sequence to obtain the matching relationships between each image and its preceding and/or subsequent images so as to construct a global matching relationship, and perform three-dimensional reconstruction on the image sequence based on the global matching relationship.

In an example of the present disclosure, the apparatus 40 further includes a first verification module (not shown in the figure).

The first verification module may be configured, during image acquisition or after the global matching relationship is constructed, to analyze the matched feature points between the images to obtain pose information and the three-dimensional coordinates of the feature points, and to perform pose correction on the camera based on the pose information corresponding to the matched images and the three-dimensional coordinates of the feature points.

In an example of the present disclosure, the apparatus 40 may further include a second verification module (not shown in the figure).

The second verification module may be configured, after the global matching relationship is constructed, to verify the global matching relationship according to the acquisition time of each image, so as to delete erroneous matching relationships.

The specific details of each module of the above image-based three-dimensional reconstruction apparatus have already been described in detail in the corresponding image-based three-dimensional reconstruction method, and are therefore not repeated here.

It should be noted that although several modules or units of the apparatus for action execution are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.

FIG. 5 shows a schematic diagram of an electronic device suitable for implementing embodiments of the present invention.

It should be noted that the electronic device 500 shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 5, the electronic device 500 includes a central processing unit (CPU) 501, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for system operation. The CPU 501, the ROM 502 and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage section 508 as needed.

In particular, according to embodiments of the present invention, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present invention include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the various functions defined in the system of the present application are executed.

Specifically, the above electronic device may be an intelligent mobile terminal device such as a mobile phone, a tablet computer or a notebook computer; alternatively, it may be an intelligent terminal device such as a desktop computer.

It should be noted that the computer-readable medium shown in the embodiments of the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. In the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the above.

The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units described in the embodiments of the present invention may be implemented in software or in hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation on the units themselves.

It should be noted that, as another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the following embodiments. For example, the electronic device may implement the steps shown in FIG. 1.

Furthermore, the above figures are merely schematic illustrations of the processes included in the methods according to exemplary embodiments of the present invention, and are not intended to be limiting. It is readily understood that the processes shown in these figures do not indicate or limit their chronological order. It is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.

Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary techniques in the technical field that are not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

It is to be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image-based three-dimensional reconstruction method, comprising:
acquiring an image sequence and acquiring positioning information of each image;
selecting, in the image sequence, a first image and a plurality of consecutive subsequent images following the first image, performing feature point matching between the first image and each subsequent image to screen out subsequent images matching the first image, generating first image matching pairs and establishing a matching relationship between the first image and the subsequent images; and
screening, from the image sequence and using the positioning information, a second image set whose positions match the first image, performing feature point matching between the first image and each image in the second image set to screen out second images matching the first image, generating second image matching pairs and establishing a matching relationship between the first image and the second images;
constructing a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image; and performing three-dimensional reconstruction on the image sequence based on the global matching relationship.
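As an illustration only (not part of the claims), the two-pass construction of the global matching relationship in claim 1 can be sketched as follows. The function name, the sliding-window width, and the distance threshold are all invented for the example; a real system would pair images via feature matches rather than bare indices:

```python
from itertools import combinations

def build_global_matches(num_images, window=3, positions=None, dist_thresh=2.0):
    """Sketch: combine sequential-window pairs (the "first image matching
    pairs") with position-based pairs (the "second image matching pairs")
    into one global matching relation, modeled here as an undirected edge
    set over image indices. Names and thresholds are illustrative only."""
    edges = set()
    # First pass: each image is paired with the next `window` consecutive frames.
    for i in range(num_images):
        for j in range(i + 1, min(i + 1 + window, num_images)):
            edges.add((i, j))
    # Second pass: pair images whose positioning info lies within dist_thresh,
    # which recovers loop-closure pairs outside the sequential window.
    if positions is not None:
        for i, j in combinations(range(num_images), 2):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if (dx * dx + dy * dy) ** 0.5 < dist_thresh:
                edges.add((i, j))
    return edges
```

The point of the second pass is that two images taken at nearly the same position but far apart in capture order still become a matching pair, which the sequential window alone would miss.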
2. The image-based three-dimensional reconstruction method according to claim 1, wherein the acquiring the image sequence and obtaining the positioning information of each image comprises:
in response to an image acquisition instruction, activating a monocular camera to acquire RGB images; and
calling an ultra-wideband driver to acquire position information when each RGB image is acquired, and configuring the position information as the positioning information of the RGB image.
3. The image-based three-dimensional reconstruction method according to claim 1, wherein the performing feature point matching on the first image and the subsequent images to screen the subsequent images matched with the first image and establish a matching relationship between the first image and the subsequent images comprises:
performing feature extraction on the first image and the subsequent images to obtain two-dimensional feature points and feature descriptors corresponding to the two-dimensional feature points;
calculating feature point distances between the first image and the subsequent image using the feature descriptors, and establishing feature point matching pairs between the first image and the subsequent image from the feature points whose distances are smaller than a preset threshold;
constructing a corresponding fundamental matrix based on the first image and the subsequent image, and screening the feature point matching pairs with a random sample consensus (RANSAC) algorithm based on the fundamental matrix; and
if the number of screened feature point matches is greater than a preset threshold, determining that the first image matches the subsequent image and establishing a matching relationship between the first image and the subsequent image.
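For illustration only, the descriptor-distance matching and match-count test of claim 3 can be sketched as below. This toy version uses brute-force Euclidean nearest neighbours on small descriptor vectors and omits the fundamental matrix/RANSAC screening step; all names and thresholds are invented, and a production system would use a library such as OpenCV:

```python
def match_images(desc_a, desc_b, dist_thresh=0.5, min_matches=20):
    """Sketch of the per-pair matching test: build feature point matching
    pairs from descriptor distances, then declare the two images matched
    if enough pairs survive. The geometric verification (fundamental
    matrix + RANSAC) described in the claim is omitted here."""
    pairs = []
    for i, da in enumerate(desc_a):
        # Nearest neighbour in desc_b by Euclidean descriptor distance.
        best_j, best_d = None, float("inf")
        for j, db in enumerate(desc_b):
            d = sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        # Keep the pair only if the descriptor distance is below threshold.
        if best_d < dist_thresh:
            pairs.append((i, best_j))
    matched = len(pairs) > min_matches
    return matched, pairs
```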
4. The method of claim 1, wherein the screening, using the positioning information, of a second image set in the image sequence whose positions match the first image comprises:
respectively calculating distances between the positioning information of the first image and the positioning information of other images in the image sequence;
and if the distance is smaller than a preset distance threshold value, adding the corresponding image to the second image set.
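The position-based screening of claim 4 can be sketched as follows (illustration only; the function name is invented and positions are assumed to be 2-D coordinates, e.g. from the ultra-wideband module of claim 2):

```python
def select_second_image_set(first_pos, all_positions, first_idx, dist_thresh):
    """Sketch of claim 4: collect the indices of images whose positioning
    info lies within dist_thresh of the first image's position."""
    second_set = []
    for idx, pos in enumerate(all_positions):
        if idx == first_idx:
            continue  # skip the first image itself
        d = ((pos[0] - first_pos[0]) ** 2 + (pos[1] - first_pos[1]) ** 2) ** 0.5
        if d < dist_thresh:
            second_set.append(idx)
    return second_set
```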
5. The method according to claim 1 or 3, wherein after generating the first image matching pair and establishing the matching relationship between the first image and the subsequent image, the method further comprises:
selecting the frames preceding the first image, and screening, using the positioning information, preceding images whose distance from the first image is smaller than a preset distance threshold to construct a preceding image set; performing feature point matching between the first image and each preceding image in the preceding image set to screen out preceding images matching the first image, generating third image matching pairs and establishing a matching relationship between the first image and the preceding images; and
traversing the image sequence to obtain the matching relationships of the preceding and/or subsequent images corresponding to each image so as to construct the global matching relationship; and performing three-dimensional reconstruction on the image sequence based on the global matching relationship.
6. The method of claim 1, wherein after the acquiring of the image sequence, the method further comprises:
analyzing the matched feature points between images to obtain pose information and three-dimensional coordinates of the feature points; and
performing pose correction on the camera based on the pose information corresponding to the matched images and the three-dimensional coordinates of the feature points.
7. The method of claim 1, wherein after the constructing the global matching relationship, the method further comprises:
verifying the global matching relationship according to the acquisition time of each image so as to delete erroneous matching relationships.
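Claim 7 does not spell out the verification criterion; one plausible reading, sketched here purely as an assumption, is that a matched pair is kept only if the acquisition-time gap between its two images stays within a bound:

```python
def prune_matches_by_time(edges, timestamps, max_gap):
    """Assumed interpretation of claim 7's verification step: drop any
    matched pair whose acquisition-time gap exceeds max_gap (seconds).
    The actual criterion in the patent may differ."""
    kept = set()
    for i, j in edges:
        if abs(timestamps[i] - timestamps[j]) <= max_gap:
            kept.add((i, j))
    return kept
```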
8. An apparatus for image-based three-dimensional reconstruction, comprising:
the data acquisition module is used for acquiring an image sequence and acquiring positioning information of each image;
the first image matching module is configured to select, in the image sequence, a first image and a plurality of consecutive subsequent images following the first image, perform feature point matching between the first image and each subsequent image to screen out subsequent images matching the first image, generate first image matching pairs and establish a matching relationship between the first image and the subsequent images;
the second image matching module is configured to screen, from the image sequence and using the positioning information, a second image set whose positions match the first image, perform feature point matching between the first image and each image in the second image set to screen out second images matching the first image, generate second image matching pairs and establish a matching relationship between the first image and the second images; and
the reconstruction module is configured to construct a global matching relationship based on the image matching relationships of the first image matching pairs and the second image matching pairs corresponding to the first image, and to perform three-dimensional reconstruction on the image sequence based on the global matching relationship.
9. A computer-readable medium, on which a computer program is stored which, when executed by a processor, implements the image-based three-dimensional reconstruction method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of image-based three-dimensional reconstruction according to any one of claims 1 to 7.
CN202011293208.9A 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image Active CN112288817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011293208.9A CN112288817B (en) 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image


Publications (2)

Publication Number Publication Date
CN112288817A true CN112288817A (en) 2021-01-29
CN112288817B CN112288817B (en) 2024-05-07

Family

ID=74399693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011293208.9A Active CN112288817B (en) 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image

Country Status (1)

Country Link
CN (1) CN112288817B (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650178A (en) * 2009-09-09 2010-02-17 中国人民解放军国防科学技术大学 Method for image matching guided by control feature point and optimal partial homography in three-dimensional reconstruction of sequence images
US20130147789A1 (en) * 2011-12-08 2013-06-13 Electronics & Telecommunications Research Institute Real-time three-dimensional real environment reconstruction apparatus and method
CN105825518A (en) * 2016-03-31 2016-08-03 西安电子科技大学 Sequence image rapid three-dimensional reconstruction method based on mobile platform shooting
US20180315232A1 (en) * 2017-05-01 2018-11-01 Lockheed Martin Corporation Real-time incremental 3d reconstruction of sensor data
CN110335319A (en) * 2019-06-26 2019-10-15 华中科技大学 Camera positioning and the map reconstruction method and system of a kind of semantics-driven
CN110361005A (en) * 2019-06-26 2019-10-22 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, readable storage medium and electronic device
US20200033880A1 (en) * 2018-07-30 2020-01-30 Toyota Research Institute, Inc. System and method for 3d scene reconstruction of agent operation sequences using low-level/high-level reasoning and parametric models
CN111063021A (en) * 2019-11-21 2020-04-24 西北工业大学 A method and device for establishing a three-dimensional reconstruction model of a space moving target
CN111209978A (en) * 2020-04-20 2020-05-29 浙江欣奕华智能科技有限公司 Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN111402413A (en) * 2020-06-04 2020-07-10 浙江欣奕华智能科技有限公司 Three-dimensional visual positioning method and device, computing equipment and storage medium
CN111433818A (en) * 2018-12-04 2020-07-17 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 3D Reconstruction Method of Large Component Based on Image Sequence


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115333A (en) * 2023-02-27 2023-11-24 荣耀终端有限公司 A three-dimensional reconstruction method combining IMU data
CN117115333B (en) * 2023-02-27 2024-09-06 荣耀终端有限公司 A 3D reconstruction method combining IMU data

Also Published As

Publication number Publication date
CN112288817B (en) 2024-05-07

Similar Documents

Publication Publication Date Title
CN110866953B (en) Map construction method and device, and positioning method and device
CN110221690B (en) Gesture interaction method and device based on AR scene, storage medium and communication terminal
CN112927362B (en) Map reconstruction method and device, computer readable medium and electronic device
CN109887003B (en) Method and equipment for carrying out three-dimensional tracking initialization
WO2020259248A1 (en) Depth information-based pose determination method and device, medium, and electronic apparatus
WO2019170164A1 (en) Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium
CN110322500A (en) Immediately optimization method and device, medium and the electronic equipment of positioning and map structuring
CN101398937B (en) Three-dimensional reconstruction method based on fringe photograph collection of same scene
CN110427917A (en) Method and apparatus for detecting key point
WO2021136386A1 (en) Data processing method, terminal, and server
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN113674400A (en) Spectrum three-dimensional reconstruction method and system based on repositioning technology and storage medium
CN107123142A (en) Position and orientation estimation method and device
CN112085031A (en) Object detection method and system
WO2023015938A1 (en) Three-dimensional point detection method and apparatus, electronic device, and storage medium
CN114419189B (en) Map construction method and device, electronic device, and storage medium
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
TW202244680A (en) Pose acquisition method, electronic equipment and storage medium
CN110866497A (en) Robot positioning and image building method and device based on dotted line feature fusion
WO2023169281A1 (en) Image registration method and apparatus, storage medium, and electronic device
CN114638846A (en) Pickup pose information determination method, pickup pose information determination device, pickup pose information determination equipment and computer readable medium
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN110296686A (en) Localization method, device and the equipment of view-based access control model
CN112598732B (en) Target equipment positioning method, map construction method and device, medium and equipment
CN117456280A (en) Rock mass structural plane identification method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant