
CN111754566A - Robot scene positioning method and construction operation method - Google Patents

Robot scene positioning method and construction operation method

Info

Publication number
CN111754566A
CN111754566A
Authority
CN
China
Prior art keywords
robot
information
pose
positioning
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010396690.2A
Other languages
Chinese (zh)
Other versions
CN111754566B (en)
Inventor
姜欣
孙常恒
叶泽锋
刘云辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute Of Technology shenzhen Shenzhen Institute Of Science And Technology Innovation Harbin Institute Of Technology
Original Assignee
Harbin Institute Of Technology shenzhen Shenzhen Institute Of Science And Technology Innovation Harbin Institute Of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute Of Technology Shenzhen Shenzhen Institute Of Science And Technology Innovation Harbin Institute Of Technology
Priority to CN202010396690.2A
Publication of CN111754566A
Application granted
Publication of CN111754566B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot scene positioning method and a construction operation method in the field of robot control. The scene positioning method generates an as-built occupancy grid map of the building by extracting structural information from BIM data, obtains scan data of the current scene from a lidar mounted on the robot, and locates the robot in the current scene with an adaptive Monte Carlo algorithm using the scan data and the as-built grid map, yielding the robot's current pose. With the BIM model as prior information, building structure and dimension data are extracted and converted into an as-built occupancy grid map usable for robot localization. Because the as-built map contains redundant information, it can be used throughout the entire construction cycle, simplifying the localization workflow and improving the robot's localization efficiency in changing scenes.

Figure 202010396690

Description

Robot scene positioning method and construction operation method

Technical Field

The present invention relates to the field of robot control, and in particular to a robot scene positioning method and a construction operation method.

Background

Simultaneous localization and mapping (SLAM) detects the surrounding environment with sensors on the robot and builds a map of the environment while estimating the robot's pose (position and orientation). Taking visual SLAM, which is currently widely used for indoor robot localization, as an example: after arriving at a construction site, an operator must manually teleoperate the robot to build a visual feature map of the scene. When the site is large and structurally complex, this consumes considerable working time, manpower, and material resources, reduces the robot's autonomy, and defeats the purpose of using robots to automate decoration work. In addition, because of the nature of construction and decoration work, the environment and the robot interact on a large scale: as construction proceeds, the robot's actual environment keeps changing, for example as walls are gradually erected until the site finally matches the as-built drawings. Facing such changing scenes, the traditional approach requires constant re-mapping to maintain the robot's localization accuracy, which lowers localization efficiency. A robot scene localization method that improves localization efficiency in changing scenes is therefore needed.

Summary of the Invention

The present invention aims to solve at least one of the technical problems in the related art. To this end, it proposes a robot scene positioning method that improves the robot's localization efficiency in changing scenes: a localization method that requires no prior mapping and can be used throughout the entire construction cycle.

In a first aspect, an embodiment of the present invention provides a robot scene positioning method, comprising:

extracting structural information from BIM data to generate an as-built occupancy grid map of the building;

obtaining scan data of the current scene from a lidar mounted on the robot;

locating the robot in the current scene with an adaptive Monte Carlo algorithm, using the scan data and the as-built grid map, to obtain the robot's current pose.

Further, locating the robot in the current scene with the adaptive Monte Carlo algorithm using the scan data and the as-built grid map comprises:

obtaining the engineering structure information contained in the scan data;

updating the likelihood field of each particle in the adaptive Monte Carlo algorithm in real time according to the engineering structure information;

filtering the particles using the likelihood field as a weight;

locating the robot in the current scene according to the filtering result and the as-built grid map to obtain the robot's current pose.

Embodiments of the present invention have at least the following beneficial effects: the localization workflow is simplified, and the robot's localization efficiency in changing scenes is improved.

In a second aspect, an embodiment of the present invention provides a construction operation method, comprising:

obtaining the robot's current pose using the robot scene positioning method of any implementation of the first aspect;

performing inter-frame pose tracking using the current pose to obtain an estimated pose to be optimized;

jointly optimizing the estimated pose to obtain a jointly optimized pose;

performing the construction operation according to the jointly optimized pose and the three-dimensional position information obtained through feature extraction.

Further, performing inter-frame pose tracking with the current pose as the initial pose to obtain the estimated pose to be optimized comprises:

obtaining the current camera pose from the current robot pose and the transformation matrix between the robot and the camera;

using the camera pose as the initial pose to obtain the inter-frame camera pose difference, and obtaining the estimated pose to be optimized from the camera reprojection error and the inter-frame camera pose difference.
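The chaining of poses in this step can be sketched as a product of homogeneous transforms: the camera pose in the map frame is the robot pose composed with the fixed robot-to-camera extrinsic. This is a generic sketch, not the patent's implementation; the extrinsic below (a camera mounted 0.3 m above the robot base, same orientation) and the numeric robot pose are invented for illustration.

```python
import numpy as np

def pose2d_to_T(x, y, theta):
    """4x4 homogeneous transform of a planar robot pose in the map frame."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = x, y
    return T

# Hypothetical robot->camera extrinsic: camera 0.3 m above the base,
# same orientation. In practice this comes from calibration.
T_robot_camera = np.eye(4)
T_robot_camera[2, 3] = 0.3

# Current robot pose, as produced by the lidar/AMCL localization step.
T_map_robot = pose2d_to_T(2.0, 1.0, np.pi / 2)

# Chain the transforms: T_map_camera = T_map_robot @ T_robot_camera.
T_map_camera = T_map_robot @ T_robot_camera
print(np.round(T_map_camera[:3, 3], 3))  # camera position in the map frame
```

The same composition applied to two consecutive camera poses gives the inter-frame camera pose difference used as the tracking initial value.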

Further, the joint optimization is performed using nonlinear bundle adjustment.

Further, the camera reprojection error is obtained through feature matching, comprising:

projecting into the environment with a laser projector to obtain several projected light spots;

capturing the projected light spots with a camera and performing ORB feature matching;

obtaining the camera reprojection error from the feature matching result.
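A minimal sketch of the reprojection error referred to above: a landmark matched via ORB features is projected through a pinhole model into the current frame and compared with the pixel where the match was observed. The intrinsic matrix, camera pose, landmark, and observation below are all invented values, not parameters from the specification.

```python
import numpy as np

def project(K, T_cw, point_w):
    """Project a 3D world point into pixel coordinates with a pinhole camera.

    K: 3x3 intrinsic matrix; T_cw: 4x4 world-to-camera transform.
    """
    p_c = (T_cw @ np.append(point_w, 1.0))[:3]   # point in the camera frame
    u, v, w = K @ p_c
    return np.array([u / w, v / w])

# Hypothetical intrinsics and an identity camera pose.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_cw = np.eye(4)

point_w = np.array([0.2, -0.1, 2.0])   # triangulated landmark (metres)
observed = np.array([370.0, 218.0])    # pixel of the matched ORB feature

residual = observed - project(K, T_cw, point_w)
error = np.linalg.norm(residual)       # reprojection error in pixels
print(error)
```

Bundle adjustment then minimizes the sum of such residuals over all matched features and frames.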

Further, the method also comprises performing operation recognition with a neural network and determining the three-dimensional position of the operation target from the recognition result; the robot performs the construction operation according to the jointly optimized pose and the three-dimensional position information.

Embodiments of the present invention have at least the following beneficial effects: the current pose obtained from the BIM model information is used for front-end inter-frame pose tracking fused with BIM information and back-end joint optimization fused with BIM information, improving the accuracy of traditional visual SLAM.

In a third aspect, an embodiment of the present invention provides a construction operation system, comprising:

a current-pose acquisition unit, configured to obtain the robot's current pose using the robot scene positioning method of any implementation of the first aspect;

an inter-frame pose tracking unit, configured to perform inter-frame pose tracking using the current pose to obtain an estimated pose to be optimized;

a joint optimization unit, configured to jointly optimize the estimated pose to obtain a jointly optimized pose;

a construction operation unit, configured to perform the construction operation according to the jointly optimized pose and the operation position information obtained through feature extraction.

In a fourth aspect, an embodiment of the present invention provides an electronic device, comprising:

at least one processor; and a memory communicatively connected to the at least one processor;

wherein the processor, by invoking a computer program stored in the memory, is configured to execute the method of any implementation of the first aspect, or the method of any implementation of the second aspect.

In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions that cause a computer to execute the method of any implementation of the first aspect, or the method of any implementation of the second aspect.

The beneficial effects of the embodiments of the present invention are as follows:

In the embodiments of the present invention, an as-built occupancy grid map of the building is generated by extracting structural information from BIM data; scan data of the current scene is obtained from a lidar mounted on the robot; and the robot is located in the current scene with an adaptive Monte Carlo algorithm using the scan data and the as-built grid map, yielding the robot's current pose. There is no need to teleoperate the robot at the start of the project to build a visual feature map of the scene, which avoids the large expenditure of working time, manpower, and material resources on large, structurally complex sites and the resulting loss of robot autonomy. With the BIM model as prior information, building structure and dimension data are extracted and converted into an as-built occupancy grid map usable for robot localization. Because the as-built map contains redundant information, it can be used throughout the entire construction cycle, simplifying the localization workflow and improving the robot's localization efficiency in changing scenes.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.

Brief Description of the Drawings

The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles. Obviously, the drawings described below show only some embodiments of the present disclosure, and those of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:

Fig. 1 is a flowchart of a specific embodiment of the robot scene positioning method according to an embodiment of the present invention;

Fig. 2 shows the BIM model and the as-built occupancy grid map of a specific embodiment of the robot scene positioning method;

Fig. 3 is a laser-scanning diagram of a specific embodiment of the robot scene positioning method;

Fig. 4 is a flowchart of a specific embodiment of the construction operation method;

Fig. 5 illustrates inter-frame pose tracking in a specific embodiment of the construction operation method;

Figs. 6-9 illustrate the visual enhancement effect in a specific embodiment of the construction operation method;

Fig. 10 illustrates the reprojection error in the related art;

Fig. 11 illustrates the BIM-informed inter-frame constrained reprojection error in a specific embodiment of the construction operation method;

Fig. 12 shows two kinds of wall-surface defects;

Fig. 13 shows the detection results of the YOLO convolutional neural network in a specific embodiment of the construction operation method;

Fig. 14 is a structural block diagram of a specific embodiment of the construction operation system.

Detailed Description

In order to describe the embodiments of the present invention and the technical solutions in the related art more clearly, specific embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings, and other implementations, from them without creative effort.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in this specification are for the purpose of describing specific embodiments only and are not intended to limit the present invention.

The flowcharts shown in the drawings are only exemplary; they need not include all contents and operations/steps, nor be executed in the order described. For example, some operations/steps may be decomposed, and others may be wholly or partially combined, so the actual execution order may change according to the actual situation.

Embodiment:

The working environment of a construction robot differs from that of a typical indoor mobile robot in three main respects: 1. prior information about the working environment is available in the form of BIM (Building Information Modeling) data; 2. the working environment mostly consists of weakly textured, unfinished indoor walls; 3. the environment changes structurally as construction proceeds. The map-then-localize workflow of traditional visual SLAM is inefficient and cannot work in weakly textured environments; moreover, when the environment's structure changes, the feature map built by traditional visual SLAM becomes unusable and must be rebuilt. This is irreconcilable with the construction robot's working scenario.

This embodiment performs repeatable, high-precision localization by obtaining the building-environment prior information (BIM) in advance and combining binocular vision with a 2D lidar.

An embodiment of the present invention provides a robot scene positioning method. Fig. 1 is a flowchart of the method. As shown in Fig. 1, the method comprises the following steps:

S11: Extract structural information from the BIM data to generate an as-built occupancy grid map of the building.

BIM (Building Information Modeling) integrates the information generated over a building's design, construction, and operation, through the end of its life cycle, into a single 3D model information database: a virtual 3D model of the construction project backed, through digital technology, by a complete engineering information base consistent with the actual building. The database contains not only the geometric information, professional attributes, and state information of building components, but also the state information of non-component objects (such as spaces and motion behaviors). Because BIM defines an information storage scheme for the construction field and is rich in building information, this embodiment extracts the information in the BIM that is usable for construction-robot localization and treats it as prior information about the building environment.

In one implementation, the BIM is stored in the standard exchange format IFC (Industry Foundation Classes). The IFC information description is divided into four levels: the resource layer (IFC Resource Layer), the core layer (IFC Core Layer), the shared layer (IFC Interoperability Layer), and the domain layer (IFC Domain Layer).

The resource layer is the base layer of the whole architecture; entities in it can be referenced from any IFC layer. It mainly defines general engineering-project information that is independent of any specific building: scattered basic information with no overall structure. Its core content includes attribute resources, representation resources, and structure resources, which are mainly used in the definitions of upper-layer entities to express their attributes.

The core layer defines the overall framework of the information model, for example product, process, and control information. Its main role is to organize the scattered basic information of the lower layer into the basic structure of the IFC model, which is then used to describe real-world objects and abstract processes; it links the layers above and below it. This layer abstracts and defines concepts applicable to the whole construction industry; for example, the IFCProduct entity can describe a project's construction site, building spaces, building components, and so on.

The shared layer mainly serves the domain layer, enabling information exchange between domains while refining the system's constituent elements. Specific building components such as slabs (IFCSlab), columns (IFCColumn), beams (IFCBeam), and walls (IfcWall) are defined at this level: if a building has 100 walls, the neutral file will contain 100 IfcWall instances.
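To illustrate the neutral-file representation, the sketch below counts wall instances in a hypothetical IFC STEP fragment with a regular expression. The entity lines and identifiers are invented for the example; real IFC files would normally be parsed with a dedicated library rather than a regex.

```python
import re

# A hypothetical fragment of an IFC STEP ("neutral") file: each wall in the
# building appears as one IFCWALL (or IFCWALLSTANDARDCASE) instance line.
ifc_snippet = """
#101=IFCWALL('2O2Fr$t4X7Zf8NOew3FL9r',#41,'Wall-001',$,$,#102,#103,$);
#110=IFCWALLSTANDARDCASE('1hOSvn6df7F8_7GcBWlR72',#41,'Wall-002',$,$,#111,#112,$);
#120=IFCSLAB('0DWgwt6o1FOx7466fPk9Rl',#41,'Slab-001',$,$,#121,#122,$,.FLOOR.);
#130=IFCWALL('3rNg_dl_v5Heh6OlAjcJ0a',#41,'Wall-003',$,$,#131,#132,$);
"""

def count_walls(step_text: str) -> int:
    """Count wall entity instances in IFC STEP text."""
    # Match lines such as "#101=IFCWALL(" or "#110=IFCWALLSTANDARDCASE(".
    return len(re.findall(r"#\d+\s*=\s*IFCWALL(?:STANDARDCASE)?\(", step_text))

print(count_walls(ifc_snippet))  # three wall instances in this fragment
```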

The domain layer is the top layer of the IFC architecture and mainly defines entity types for the individual professional domains. Each of these entities carries concepts specific to its domain, such as boilers and pipes in the HVAC domain.

Fig. 2 shows the BIM model and the as-built occupancy grid map in one implementation. As shown, the structure and dimension information of the project is extracted according to the inheritance relationships between the IFC entities in the BIM model, and the as-built occupancy grid map of the building is then generated at a preset resolution.
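The conversion from extracted structure information to an as-built occupancy grid can be sketched as rasterizing wall footprints, assumed here to have already been pulled from the IFC entities as 2D line segments in metres, into a grid at the preset resolution. The segments, map extents, and 0.05 m resolution are illustrative, not values from the specification.

```python
import math

def walls_to_grid(walls, resolution=0.05, width_m=10.0, height_m=10.0):
    """Rasterize 2D wall segments (metres) into an occupancy grid.

    walls: list of ((x0, y0), (x1, y1)) segments extracted from the BIM model.
    resolution: grid cell size in metres per cell.
    Returns a 2D list where 100 = occupied (wall), 0 = free.
    """
    cols = round(width_m / resolution)
    rows = round(height_m / resolution)
    grid = [[0] * cols for _ in range(rows)]
    for (x0, y0), (x1, y1) in walls:
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, round(length / resolution) * 2)  # oversample along the wall
        for i in range(steps + 1):
            t = i / steps
            cx = round((x0 + t * (x1 - x0)) / resolution)
            cy = round((y0 + t * (y1 - y0)) / resolution)
            if 0 <= cx < cols and 0 <= cy < rows:
                grid[cy][cx] = 100
    return grid

# Hypothetical as-built plan: two perpendicular walls meeting at (1 m, 1 m).
grid = walls_to_grid([((1.0, 1.0), (9.0, 1.0)), ((1.0, 1.0), (1.0, 9.0))])
print(sum(cell == 100 for row in grid for cell in row) > 0)  # True: cells marked
```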

S12: Obtain scan data of the current scene from the lidar mounted on the robot.

In one implementation, a 2D lidar is mounted on the robot and scans the construction scene in real time to obtain on-site scan data. The scan data includes engineering structure information, such as load-bearing walls and partition walls.

S13: Locate the robot in the current scene with the adaptive Monte Carlo algorithm, using the scan data and the as-built grid map, to obtain the robot's current pose.

In one implementation, step S13 proceeds as follows:

S131: Obtain the engineering structure information contained in the scan data. From the way the environment changes during construction, initially only the load-bearing walls exist; as construction progresses, partition walls are erected; once all partition walls are in place, the site matches the as-built drawings.

S132: Update the likelihood field of each particle in the adaptive Monte Carlo algorithm in real time according to the obstacle information.

S133: Filter the particles using the likelihood field as a weight.

S134: Locate the robot in the current scene according to the filtering result and the as-built grid map to obtain the robot's current pose.

For example, updates are performed in the scan layer scan_layer, while potential_map_server loads the as-built occupancy grid map into the potential layer; scan_layer receives all the engineering structure information, such as partition walls and other obstacles, contained in the lidar scan at the current position. Particles are filtered by computing each particle's likelihood field (the probability described above) as its weight: the smaller a particle's weight, the less the observations predicted from its pose agree with the actual map, and the more readily that particle is filtered out.

Fig. 3 is a laser-scanning diagram of one implementation. For any particle, if the map contains a wall that cannot be observed (the dashed line in the figure), the laser passes through that wall. If the beam's end can reach another wall, the weight can be computed from that wall's likelihood field (left side of the figure); if the end reaches no wall, the reading is discarded as an invalid value (right side). An object is considered a wall when it appears both in the scan layer scan_layer and in the potential layer loaded by potential_map_server.
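The beam handling above can be sketched as a simple rule on the distance from each beam endpoint to the nearest wall present in the as-built map (a precomputed distance transform of the grid would supply this distance): nearby endpoints receive a Gaussian likelihood-field weight, while endpoints that hit nothing, because the ray passed through a wall that exists in the drawings but is not built yet, are discarded as invalid. The sigma, cutoff, and endpoint distances below are invented for illustration.

```python
import math

def beam_weight(endpoint_dist, sigma=0.2, max_dist=1.0):
    """Likelihood-field weight of one laser beam endpoint.

    endpoint_dist: distance (m) from the beam endpoint to the nearest wall
    present in the as-built grid map.
    Returns None for an invalid beam whose endpoint lies on no mapped wall,
    e.g. a ray that passed through a not-yet-built wall and hit nothing.
    """
    if endpoint_dist > max_dist:
        return None                      # discard as an invalid value
    return math.exp(-endpoint_dist ** 2 / (2 * sigma ** 2))

# Invented endpoint distances: 0.0 = exactly on a mapped wall,
# 0.15 = near a wall, 5.0 = the ray hit nothing (unbuilt wall).
weights = [beam_weight(d) for d in (0.0, 0.15, 5.0)]
print(weights)  # the last beam is None and would be skipped
```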

In one implementation, localization is achieved with the particle filter of the adaptive Monte Carlo algorithm. Monte Carlo particle-filter localization can be viewed as the following process. A swarm of particles is initially distributed uniformly over the scene. The particles are moved as the robot moves: if the robot moves forward one metre, every particle also moves forward one metre. The pose of each particle is then used to simulate a sensor reading, which is compared with the scan data the robot actually obtains from its 2D lidar; this assigns each particle a probability, the likelihood field. Particles are regenerated according to these probabilities: in general, the higher the probability, the more likely a particle is to be regenerated. In other words, the likelihood field serves as a weight for filtering, and the heavily weighted particles survive. After many iterations all the particles gradually converge, and the robot is localized from the convergence result, yielding its current pose.
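The predict / weight / resample cycle just described can be sketched in one dimension, with a single wall standing in for the grid map. All quantities (world size, noise levels, step sizes) are invented for illustration, and the real algorithm operates on 2D poses and full scans.

```python
import math
import random

random.seed(0)

# Invented 1D world: a single wall at x = 10.0 m; the lidar measures range to it.
WALL_X = 10.0

def simulate_range(x):
    """Ideal range reading that a particle at pose x would observe."""
    return WALL_X - x

def amcl_step(particles, motion, measured_range, sigma=0.3):
    """One predict / weight / resample cycle of Monte Carlo localization."""
    # 1. Motion update: move every particle as the robot moved (plus noise).
    particles = [x + motion + random.gauss(0.0, 0.05) for x in particles]
    # 2. Weighting: compare the scan simulated at each particle with the real scan.
    weights = [math.exp(-((simulate_range(x) - measured_range) ** 2)
                        / (2 * sigma ** 2)) for x in particles]
    # 3. Resampling in proportion to the weights: heavy particles survive.
    return random.choices(particles, weights=weights, k=len(particles))

# Particles start spread uniformly over the scene; the robot is really at x = 2.0.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_x = 2.0
for _ in range(10):                 # the robot advances 0.5 m per cycle
    true_x += 0.5
    particles = amcl_step(particles, 0.5, simulate_range(true_x))

estimate = sum(particles) / len(particles)
print(round(estimate, 1))           # the particle cloud converges near true_x
```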

In one embodiment there are many ways to compute the weight; the weight can simply be taken as the discrepancy between a particle's predicted measurement and the real sensor measurement. For example, if the distance predicted from a particle's position differs greatly from the measured distance to the target, the discrepancy is large and the weight is small; a small weight indicates that the particle is far from the true position.

For example, the robot initially sits near a wall but does not know where it is. All particles are initialized with equal weights. The robot then learns from its laser scan data that a wall is in front of it, so the weights of particles close to a wall increase. After resampling, the robot moves; the resampled particles are concentrated near the particle locations that were originally close to the wall. The robot perceives again and finds that a wall is still in front of it, so the weights of those previously sampled near-wall particles grow further. After screening, the heavily weighted particles are retained, and by now the main sample points are concentrated at the wall; from the screening result the robot learns that it is next to the wall, achieving localization.

Since the map at any construction stage is a subset of the as-built (completed-state) map, and the particle likelihood field of this embodiment can be updated in real time, the robot's current positioning information can be obtained in real time through weight screening. The completed-state grid map of this embodiment can therefore be used for navigation at different construction stages, with no need to teleoperate the robot at the start of the project to build a visual feature map of the scene; this avoids consuming large amounts of working time, manpower, and material resources on large, structurally complex sites, and avoids reducing the robot's autonomy. Meanwhile, this embodiment takes BIM model information as prior information, extracts the building's structural dimension information from it, and converts it into a completed-state grid map usable for robot localization. This information-redundant as-built-map localization method can be used throughout the entire project cycle; it simplifies the localization workflow, allows direct localization when the scene changes, and improves the robot's localization efficiency in changing scenes.

In another embodiment, a construction operation method is provided that improves on the traditional visual SLAM method. It addresses the problem that traditional visual SLAM cannot meet the demands of efficient, high-precision localization and mapping in changing, weakly textured interior-decoration environments, and at the same time uses deep learning to give the constructed SLAM map semantic information.

The strengths and weaknesses of traditional laser SLAM and visual SLAM are quite clear: in weakly textured environments, laser SLAM surpasses traditional visual SLAM in stability and accuracy, whereas visual SLAM is superior at perceiving high-dimensional environmental information (e.g., identifying wall defect points, 3D reconstruction). However, visual SLAM is not robust to environmental change, cannot be applied directly to building-construction scenes, and does not work well in decoration environments with weakly textured interior walls. This embodiment therefore adopts a construction operation method based on fused SLAM combining vision with a 2D lidar, effectively uniting the respective advantages of traditional laser SLAM and traditional visual SLAM.

As shown in Figure 4, a schematic flowchart of the construction operation method provided by an embodiment of the present invention, the method includes the following steps:

S21: Obtain the robot's current positioning information using any of the robot scene localization methods described in the above embodiments, i.e., the current positioning information obtained through the embodiments above.

S22: Use the current positioning information to perform inter-frame pose tracking and obtain the estimated pose to be optimized. The inter-frame pose tracking of this embodiment can be regarded as front-end inter-frame pose tracking that fuses BIM information.

The front end of traditional visual SLAM uses a constant-velocity motion model, which assumes the camera always moves at a constant speed, so that the transformation matrix of frame k+1 relative to frame k equals the transformation matrix of frame k relative to frame k-1. Such an assumption clearly does not fit the working conditions of a construction robot, so in this embodiment the constant-velocity model is improved by fusing BIM information.

S23: Jointly optimize the estimated pose to be optimized to obtain the jointly optimized pose. For example, in one implementation the joint optimization uses nonlinear bundle adjustment; the nonlinear bundle adjustment method (Bundle Adjustment) is also called the BA method.

S24: Perform the construction operation according to the jointly optimized pose and the 3D position information obtained through feature extraction. The 3D position information is a kind of semantic information: the map is built from it and the operation target is identified, for example the 3D position information of wall defects during wall repair; the construction operation is then the robot repairing those wall defects and the like, extending the robot's range of applications.

In one embodiment, the specific process of the front-end inter-frame pose tracking fusing BIM information in step S22 is as follows:

S221: Obtain the current camera positioning information from the transformation matrix between the robot and the camera and the current positioning information, expressed as:

T_camera = T_current_robot_position · T_robot→camera

where T_camera denotes the current camera positioning information, T_current_robot_position denotes the robot's current positioning information, and T_robot→camera denotes the transformation matrix between the robot and the camera, which can be obtained as a prior parameter.
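The composition above is an ordinary product of 4×4 homogeneous transforms. A minimal sketch, where the robot pose and the camera mounting offset are invented illustrative values:

```python
import numpy as np

def make_T(R, t):
    # Assemble a 4x4 homogeneous transform from a rotation matrix and translation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def Rz(a):
    # Rotation about the vertical (z) axis by angle a
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Hypothetical values: robot at (2, 1) heading 90 degrees; camera mounted
# 0.3 m ahead of the robot centre (the extrinsic T_robot→camera, a known prior).
T_robot = make_T(Rz(np.pi / 2), [2.0, 1.0, 0.0])
T_robot_cam = make_T(np.eye(3), [0.3, 0.0, 0.0])
T_cam = T_robot @ T_robot_cam   # T_camera = T_current_robot_position · T_robot→camera
print(np.round(T_cam[:3, 3], 3))  # the camera sits at (2.0, 1.3, 0.0)
```

Because the robot is rotated 90 degrees, the 0.3 m forward offset of the camera turns into a sideways offset in the world frame, which the matrix product handles automatically.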

S222: Use the camera positioning information as the initial pose to obtain the inter-frame camera positioning difference, and use the camera projection error together with the inter-frame camera positioning difference to obtain the estimated pose to be optimized.

In one implementation, the current camera positioning information is used as the initial pose, replacing the initial pose given by the constant-velocity motion model of the related art; the optimization then combines the camera positioning difference from the already optimized previous frame to the current frame with the projection error of the 3D feature points observed in the current frame onto the current camera.

As shown in Figure 5, a schematic diagram of inter-frame pose tracking in this embodiment: P is a 3D space point; P1 and P2 are the matched observations of P in camera frame k (the already optimized previous frame) and frame k+1 (the current frame to be optimized), respectively, i.e., the matched observation pair obtained by feature matching; P2' is the projection of P onto the camera at frame k+1; e_projection denotes the camera-plane distance error between the observation P2 of the current frame and the projection P2', i.e., the camera projection error above; e_BIM denotes the inter-frame camera positioning difference, i.e., the error between the current frame's estimated pose to be optimized and the camera's initial pose obtained from the BIM information (the camera positioning information of step S221).

The estimated pose to be optimized belongs to inter-frame tracking; the current frame is optimized from the observations, expressed as:

ζ* = argmin_ζ ( e_projection + e_BIM ) = argmin_ζ ( ‖u − Kexp(ζ^)P‖² + ‖ln(T_w_BIM⁻¹ · T_w_curr)∨‖² )

where ζ* denotes the estimated pose to be optimized, i.e., the Lie algebra of the camera pose; u denotes the pixel coordinates of the observation P2 in camera frame k+1; Kexp(ζ^)P denotes the projection of the 3D space point P onto the image plane of camera frame k+1 under camera intrinsics K, with projected value P2'; T_w_curr denotes the estimated pose to be optimized of camera frame k+1; T_w_BIM denotes the camera positioning information of frame k+1 obtained from the BIM information, i.e., the initial pose; e_projection denotes the camera-plane distance error between the observation P2 and the projection P2' of frame k+1, i.e., the camera projection error above; and e_BIM denotes the inter-frame camera positioning difference, i.e., the error between the current frame's estimated pose to be optimized and the camera's initial pose obtained from the BIM information.
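A numerical sketch of the two residuals. The intrinsics, poses, and matched point are invented for illustration, and the translation part of T_w_BIM⁻¹·T_w_curr stands in for the full Lie-algebra log map ln(·)∨:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])        # hypothetical pinhole intrinsics

def project(T_cw, P_w):
    # Transform the 3-D point into the camera frame, then apply the pinhole
    # model: this is the Kexp(ζ^)P term of the formula above
    P_c = T_cw[:3, :3] @ P_w + T_cw[:3, 3]
    uv = K @ (P_c / P_c[2])
    return uv[:2]

T_curr = np.eye(4); T_curr[:3, 3] = [0.05, 0.0, 0.0]   # pose being optimised
T_bim = np.eye(4)                                       # prior from BIM + 2D lidar
P = np.array([1.0, 0.5, 4.0])                           # 3-D space point P
u_obs = np.array([430.0, 302.0])                        # its observation P2 in frame k+1

e_proj = np.linalg.norm(u_obs - project(T_curr, P))             # e_projection
e_bim = np.linalg.norm((np.linalg.inv(T_bim) @ T_curr)[:3, 3])  # e_BIM (translation proxy)
cost = e_proj ** 2 + e_bim ** 2    # the quantity the optimiser drives down over ζ
```

In a real tracker both residuals would be re-evaluated every optimizer iteration as ζ changes; the sketch only shows one evaluation of the cost.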

In one embodiment, the camera projection error is obtained through feature matching, including:

S2221: Project onto the environment with a laser projector to obtain several projected light spots;

S2222: Capture the projected light spots with a camera device and perform ORB feature matching;

S2223: Obtain the camera projection error from the feature-matching result.

In a decoration environment the walls may lack texture features, in which case no visual features can be extracted for feature matching. This embodiment therefore performs artificial-projection visual-feature enhancement for this situation. Concretely, a laser projector forms a dot matrix that is projected onto the wall lacking visual features, producing several projected light spots; a camera device such as a stereo camera captures these artificially projected spots, ORB features are extracted and matched, and the camera projection error is obtained from the feature-matching result in the manner above for computing the estimated pose to be optimized.
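The matching step can be illustrated with a brute-force Hamming matcher over ORB-style 256-bit binary descriptors (packed as 32 bytes each, the usual convention). The descriptors below are synthetic stand-ins for ones extracted from the projected spots, not real ORB output:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=30):
    # Brute-force match each descriptor of frame k against all of frame k+1;
    # accept the nearest neighbour if its Hamming distance is small enough
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.unpackbits(d ^ desc_b, axis=1).sum(axis=1)  # bitwise XOR + popcount
        j = int(dists.argmin())
        if dists[j] <= max_dist:
            matches.append((i, j, int(dists[j])))
    return matches

rng = np.random.default_rng(1)
frame_k = rng.integers(0, 256, size=(50, 32), dtype=np.uint8)  # 50 spot descriptors
frame_k1 = frame_k.copy()
frame_k1[:, 0] ^= 1   # frame k+1: each descriptor differs by one flipped bit
matches = hamming_match(frame_k, frame_k1)
print(len(matches))   # every projected spot finds its counterpart
```

Random binary descriptors sit around 128 bits apart, so the one-bit-away true counterpart is unambiguous; this is the same reason a distance threshold works well for real ORB descriptors.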

In one implementation, the purpose of the laser-projector dot matrix is to satisfy site-wide visual-feature placement requirements when construction scenes have large walls and a wide extent. An omnidirectional laser dot-matrix projection device is therefore designed: for example, the device carries 8 laser dot-matrix projectors evenly distributed on a 360-degree circular base, and each laser projector is mounted on a rotating base with 2 degrees of freedom, providing rotation about the vertical axis and pitch, to meet the projection needs of different building structures. A tripod can also be mounted at the bottom to raise the projection point and prevent clutter from blocking the light path.

As shown in Figures 6-9, schematic diagrams of the visual-enhancement effect in one implementation of this embodiment. Figure 6 shows the visual features extracted from an indoor wall, and Figure 7 shows the features successfully matched between two images; because the wall is fairly flat and its texture features sparse, very few feature points can be successfully matched between adjacent frames. When the wall to be worked on in an indoor painting process is a white wall, the extractable visual features are even sparser. Figure 8 is a schematic diagram of the laser projector's light spots, and Figure 9 shows feature matching after visual-feature enhancement with the laser projector. After laser projection, the number of extractable ORB features increases markedly; feature matching becomes possible even on a completely blank wall, enabling localization and mapping in weakly textured decoration environments. The camera projection error can then be obtained from the feature-matching result in the manner above for computing the estimated pose to be optimized, finally meeting the requirements of the construction robot's operating system.

The process above completes the front-end inter-frame pose tracking with fused BIM information. The camera motion can be estimated from the detected images of adjacent frames, and chaining the motions at adjacent instants yields the robot's motion trajectory, thus achieving localization. On the other hand, from the camera pose at each instant, the position of the 3D space point corresponding to each pixel can be computed, yielding the scene map.

Because all measurement data contain noise, the front-end inter-frame pose tracking with fused BIM information can estimate camera motion from the images, but back-end optimization is needed to reduce the influence of noise in the estimation; otherwise accumulated drift inevitably arises, and drift prevents building a consistent map. The front end's estimated poses to be optimized and the initial values of these data are therefore sent to the back end for overall optimization; generally, the optimization method is filtering or a nonlinear optimization algorithm.

In this embodiment, step S23 performs joint optimization on the estimated pose to be optimized; the joint optimization that yields the jointly optimized pose is the back-end optimization described above. In one implementation the joint optimization uses nonlinear bundle adjustment; bundle adjustment (BA) is a nonlinear optimization method.

In the related art, back-end optimization mostly uses the reprojection-error method, which projects the 3D feature points corresponding to the pixel coordinates onto neighboring frames according to the estimated pose, and takes the error between the pixel coordinates thus obtained on the neighboring frames and the pixel coordinates obtained by feature matching.

As shown in Figure 10, a schematic diagram of the reprojection error in the related art: P is a 3D space point; P1 and P2 are the matched observations of P in camera frames k and k+1, respectively; P2' is the projection of P onto camera frame k+1; and e is the camera-plane distance error between the observation P2 and the projection P2' of frame k+1. The problem is solved with nonlinear bundle adjustment (BA).

However, the optimization of existing nonlinear bundle adjustment depends excessively on the camera's initial pose; because the initial pose is imprecise, the BA optimization needs many iterations to reach sufficient accuracy, requires a long iteration period, and does not yield good optimization results. This embodiment therefore improves the reprojection-error method: BIM information is fused, and 2D-lidar pre-positioning is used to obtain the camera positioning information as the initial pose of the current camera frame, imposing inter-frame constraints so that more accurate optimization results are obtained in fewer optimization iterations.

As shown in Figure 11, a schematic diagram of the BIM-information-based inter-frame constrained reprojection error in this embodiment: P is a 3D space point; P1 and P2 are the matched observations of P in camera frames k and k+1, respectively, i.e., the matched observation pair obtained by feature matching; P2' is the projection of P onto camera frame k+1; e_projection denotes the camera-plane distance error between the observation P2 of frame k+1 and the projection P2', i.e., the camera projection error above; and e_BIM denotes the inter-frame camera positioning difference, i.e., the error between the current frame's estimated pose to be optimized and the camera's initial pose obtained from the BIM information.

The jointly optimized pose after joint optimization belongs to inter-frame optimization, so several co-visible frames of the current frame must be traversed for joint optimization, expressed as:

ζ*1 = argmin_ζ Σ_k ( ‖u − Kexp(ζ^)P‖² + ‖ln(T_w_BIM1⁻¹ · T_w_k+1)∨‖² )

where ζ*1 denotes the jointly optimized pose, a Lie-algebra element; Σ_k denotes traversal of the frames before frame k+1, for example 20 co-visible frames, i.e., k runs from 1 to 20; u denotes the pixel coordinates of the observation P2 in camera frame k+1; Kexp(ζ^)P denotes the projection of the 3D space point P onto the image plane of camera frame k+1 under camera intrinsics K, with projected value P2'; T_w_k+1 denotes the estimated pose to be optimized of camera frame k+1; T_w_BIM1 denotes the camera positioning information of frame k+1 obtained from the BIM information, i.e., the initial pose; e_projection denotes the camera-plane distance error between the observation P2 and the projection P2' of frame k+1, i.e., the camera projection error above; and e_BIM denotes the inter-frame camera positioning difference, i.e., the error between frame k+1's estimated pose to be optimized and the camera's initial pose obtained from the BIM information.
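The summed joint objective over co-visible frames can be sketched numerically. The frame contents are illustrative, and, as in the single-frame case, the translation part of T_w_BIM1⁻¹·T_w_k+1 is used as a simple stand-in for the log-map term ln(·)∨:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])    # hypothetical intrinsics

def joint_cost(frames):
    # Sum e_projection + e_BIM over every co-visible frame, mirroring the
    # summation in the joint-optimization formula above
    total = 0.0
    for f in frames:
        R, t = f["T"][:3, :3], f["T"][:3, 3]
        for u, P in f["matches"]:
            P_c = R @ P + t
            uv = (K @ (P_c / P_c[2]))[:2]
            total += float(np.sum((u - uv) ** 2))       # e_projection term
        d = (np.linalg.inv(f["T_bim"]) @ f["T"])[:3, 3]
        total += float(np.sum(d ** 2))                  # e_BIM term (translation proxy)
    return total

P = np.array([0.0, 0.0, 4.0])
u_perfect = np.array([320.0, 240.0])   # projection of P under the identity pose
frames = [{"T": np.eye(4), "T_bim": np.eye(4), "matches": [(u_perfect, P)]}
          for _ in range(3)]
cost_perfect = joint_cost(frames)      # poses agree with priors and observations
frames[1]["T"][0, 3] = 0.1             # perturb one frame away from its BIM prior
cost_perturbed = joint_cost(frames)
```

A solver such as BA would adjust the per-frame poses to pull `cost_perturbed` back toward zero; the BIM term keeps each frame anchored to its lidar/BIM prior while it does so.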

In the above process of obtaining the jointly optimized pose, in the graph-optimization form of nonlinear bundle adjustment, injecting BIM information adds constraint edges between camera frame k and camera frame k+1. Compared with the related-art nonlinear optimization, which has only edge constraints between the camera and the landmark 3D space points, this improves the optimization effect and reduces the number of optimization iterations.

In one implementation, step S24 performs the construction operation according to the jointly optimized pose and the 3D position information obtained through feature extraction. The 3D position information is a kind of semantic information: the map is built from it and the operation target is identified, for example the 3D position information of wall defects during wall repair, extending the robot's range of applications.

For example, a neural network performs operation recognition, the 3D position information of the operation target is determined from the recognition result, and the robot performs the construction operation according to that 3D position information. Operation recognition identifies the operation object according to the operation type: during construction, the robot needs not only its own position and the positions of objects in the environment, but also to recognize the object to be operated on, such as a bump on the wall that needs grinding, and even some objects commonly found on a site, such as ladders, buckets, and sandbags; it must locate these operation objects and carry out the corresponding construction operation for each.

Taking wall repair as an example: repairing wall defects is a major process in decoration construction, and the task requires the robot to recognize wall defects and determine their positions so that operations such as grinding and smoothing can be performed on them. In this implementation, target detection is performed with a YOLO convolutional neural network to recognize wall defects; from the recognition result, the ORB features corresponding to a defect are determined to establish the defect's 3D position information in world coordinates, and the 3D position of the repair target is represented on the map and registered into the visual feature map, making it convenient for the robot to interact with the environment and perform construction operations, thereby meeting the construction robot's working needs.
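One way to place a detected defect into world coordinates is to back-project its pixel through the camera intrinsics and the (jointly optimized) camera pose. The intrinsics, depth value, and pose below are assumed illustrative values, and the depth look-up is a simplification of the ORB-feature-based method described above:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, T_wc):
    # Back-project the defect pixel (u, v) at the given depth into the camera
    # frame, then carry it into world coordinates with the camera pose T_wc
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    P_cam = np.array([x, y, depth, 1.0])
    return (T_wc @ P_cam)[:3]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
T_wc = np.eye(4); T_wc[:3, 3] = [1.0, 0.0, 0.5]   # hypothetical camera pose
defect_xyz = pixel_to_world(420.0, 240.0, 2.0, K, T_wc)
print(np.round(defect_xyz, 3))   # 3-D position to register into the semantic map
```

The resulting world coordinates are what gets annotated into the visual feature map as the operation target for the grinding or smoothing task.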

As shown in Figure 12, schematic diagrams of two kinds of wall defects, pits and bumps, which are common wall defects. In one implementation, several photos are collected at the decoration site and manually annotated to make a dataset for model training; for example, 500 photos are collected as samples, of which 420 are used for the training set and 80 for the test set, and the model is trained in the software environment Ubuntu + Darknet + CUDA + cuDNN to obtain suitable model parameters.

As shown in Figure 13, a schematic diagram of the detection effect of the YOLO convolutional neural network of this embodiment: the upper image is a collected wall-defect image containing two bump defects and one pit defect, and the lower image shows the detection result after processing by the YOLO convolutional neural network of this embodiment; as the figure shows, all three defects in the upper image are identified by the detection.

The deep-learning-based YOLO convolutional neural network above establishes semantic information about the defects, identifies the operation target, and annotates the operation target into the feature map created by visual SLAM, realizing an operable semantic map.

In one implementation, loop-closure detection is also performed on the back end's jointly optimized poses to solve the problem of position estimates drifting over time. Loop-closure detection, also called closed-loop detection, is essentially an algorithm for detecting the similarity of observations; it refers to the robot's ability to recognize a scene it has visited before. If a loop closure is detected, the detection information is passed to the back end for processing, and the back end adjusts the trajectory and map to conform to the loop-closure result. The optimization corrects the accumulated error, removes redundant map points, optimizes the essential graph, and updates all map points, which significantly reduces the accumulated error, achieves globally consistent poses, and yields a globally consistent trajectory and map.
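Loop-closure candidates are commonly flagged by comparing an appearance descriptor of the current view against earlier keyframes. A minimal sketch using cosine similarity of bag-of-visual-words histograms, with made-up histogram values (the patent does not specify a particular similarity measure):

```python
import numpy as np

def bow_similarity(h1, h2):
    # Cosine similarity of two bag-of-visual-words histograms; a score above
    # a chosen threshold flags a loop-closure candidate for the back end
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

current   = np.array([4.0, 0.0, 3.0, 1.0, 0.0])  # word counts of the current frame
revisited = np.array([5.0, 0.0, 2.0, 1.0, 0.0])  # keyframe from the same place
elsewhere = np.array([0.0, 6.0, 0.0, 0.0, 3.0])  # keyframe from another room
print(round(bow_similarity(current, revisited), 3),
      round(bow_similarity(current, elsewhere), 3))
```

A high score against `revisited` would trigger geometric verification and then the trajectory/map correction described above; the near-zero score against `elsewhere` is simply ignored.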

This embodiment takes BIM model information as prior information, extracts the building's structural dimension information from it, and converts it into a completed-state grid map usable for robot localization; the information-redundant as-built-map localization method can be used throughout the entire project cycle, simplifying the localization workflow, allowing direct localization when the scene changes, and improving the robot's localization efficiency in changing scenes. Then, in the construction operation, to address traditional visual SLAM's inability to map and localize in weakly textured environments, a laser-projector dot matrix performs artificial projection to add visual features to the wall, solving the problem that the visual-SLAM front end cannot work in a weakly textured environment; a YOLO convolutional neural network detects operable targets on the wall, and the operable targets are merged into the visual-SLAM feature map to build a semantic map containing the operations' 3D position information. Meanwhile, the current positioning information obtained from the BIM model information is used for front-end inter-frame pose tracking with fused BIM information and back-end joint optimization with fused BIM information, improving the accuracy of traditional visual SLAM.

In another embodiment of the present disclosure, a construction operating system is provided for executing the construction operation method above. As shown in Figure 14, a structural block diagram of the construction operating system of this embodiment, it includes:

Current-positioning-information acquisition unit 100: used to obtain the robot's current positioning information with the robot scene localization method described above;

Inter-frame pose tracking unit 200: used to perform inter-frame pose tracking with the current positioning information to obtain the estimated pose to be optimized;

Joint optimization unit 300: used to jointly optimize the estimated pose to be optimized to obtain the jointly optimized pose;

Construction operation unit 400: used to perform construction operations according to the jointly optimized pose and the operation position information obtained through feature extraction.

The specific details of each unit module of the above construction operation system have already been described in detail in the construction operation method of the corresponding embodiment, and are therefore not repeated here.

In addition, an embodiment of the present invention further provides an electronic device, including:

at least one processor, and a memory communicatively coupled to the at least one processor;

wherein the processor executes the method described in the first embodiment by invoking a computer program stored in the memory. The computer program is program code; when the program code runs on the device, it causes the device to execute the steps of the robot scene positioning method or the construction operation method described in the above embodiments of this specification.

In addition, the present invention further provides a computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions cause a computer to execute the steps of the robot scene positioning method or the construction operation method described in the above embodiments of this specification.

Without loss of generality, the computer-readable medium may include computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies; CD-ROM, DVD, or other optical storage; and magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the above.

It should be noted that the order of the above embodiments of the present application is for description only and does not indicate the relative merits of the embodiments. The foregoing describes specific embodiments of this specification; other embodiments fall within the scope of the appended claims. In some cases, the recited actions or steps may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.

Each embodiment in this specification is described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the others. In particular, since the device, storage medium, and system embodiments are substantially similar to the method embodiments, their descriptions are relatively brief; for relevant details, refer to the corresponding descriptions of the method embodiments.

The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention, and all of them shall fall within the scope of the claims and description of the present invention.

Claims (10)

1. A robot scene positioning method, comprising:
extracting structural information from BIM data to generate a completion state grid map of the building;
acquiring scanning data of a laser radar mounted on the robot for the current scene; and
positioning the robot in the current scene using an adaptive Monte Carlo algorithm according to the scanning data and the completion state grid map to obtain current positioning information of the robot.
2. The method as claimed in claim 1, wherein positioning the robot in the current scene using an adaptive Monte Carlo algorithm according to the scanning data and the completion state grid map to obtain the current positioning information of the robot comprises:
acquiring engineering structure information contained in the scanning data;
updating the likelihood field of each particle in the adaptive Monte Carlo algorithm in real time according to the engineering structure information;
screening the particles using the likelihood field values as weights; and
positioning the robot in the current scene according to the screening result and the completion state grid map to obtain the current positioning information of the robot.
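The likelihood-field particle weighting of claim 2 can be sketched as follows. This is a minimal illustration assuming a precomputed distance-to-nearest-obstacle field derived from the completion state grid map, with illustrative measurement-noise parameters (`sigma`, `z_rand`); it is not the embodiment's actual implementation.

```python
import numpy as np

def likelihood_field_weights(particles, scan_pts, dist_field,
                             resolution=0.05, sigma=0.2, z_rand=0.05):
    """Weight each particle (x, y, theta) by how well the lidar scan,
    transformed into that particle's frame, matches a precomputed
    distance-to-nearest-obstacle field of the grid map."""
    weights = np.empty(len(particles))
    for i, (x, y, th) in enumerate(particles):
        c, s = np.cos(th), np.sin(th)
        # scan endpoints (robot frame, metres) -> map frame
        mx = x + c * scan_pts[:, 0] - s * scan_pts[:, 1]
        my = y + s * scan_pts[:, 0] + c * scan_pts[:, 1]
        ix = np.clip((mx / resolution).astype(int), 0, dist_field.shape[1] - 1)
        iy = np.clip((my / resolution).astype(int), 0, dist_field.shape[0] - 1)
        d = dist_field[iy, ix]
        # Gaussian measurement model plus a small uniform floor
        weights[i] = np.prod(np.exp(-0.5 * (d / sigma) ** 2) + z_rand)
    return weights / weights.sum()
```

Particles whose transformed scan endpoints land near map obstacles receive high weight and survive resampling, which is the "screening" step of the claim.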
3. A construction operation method, characterized by comprising:
obtaining current positioning information of the robot using the robot scene positioning method according to claim 1 or 2;
tracking the pose between frames by using the current positioning information to obtain an estimated pose to be optimized;
performing joint optimization on the estimated pose to be optimized to obtain a joint optimization pose;
and performing construction operation according to the combined optimization pose and the three-dimensional position information obtained by feature extraction.
4. The construction operation method according to claim 3, wherein performing inter-frame pose tracking using the current positioning information as an initial pose to obtain the estimated pose to be optimized comprises:
obtaining current camera positioning information according to a transformation matrix between the robot and the camera and the current positioning information; and
obtaining a camera inter-frame positioning difference using the camera positioning information as an initial pose, and obtaining the estimated pose to be optimized using a camera projection error and the camera inter-frame positioning difference.
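A minimal sketch of the pose chaining in claim 4, under the assumption that poses are 4x4 homogeneous transforms: the camera pose is obtained by composing the grid-map robot pose with a fixed robot-to-camera extrinsic, and the inter-frame positioning difference is expressed here as a translation-only residual (a real system would use a full SE(3) error and combine it with the reprojection error inside the optimizer). All numeric poses below are hypothetical.

```python
import numpy as np

def camera_pose_from_robot(T_world_robot, T_robot_cam):
    """Chain the robot pose from grid-map localization with the fixed
    robot-to-camera extrinsic to obtain the camera pose (4x4 homogeneous)."""
    return T_world_robot @ T_robot_cam

def interframe_residual(T_world_cam_prev, T_world_cam_curr, T_est):
    """Discrepancy between a visual-odometry estimate T_est of the camera
    motion and the relative motion implied by two consecutive grid-map
    localizations; returned as a translation-only residual for brevity."""
    T_rel = np.linalg.inv(T_world_cam_prev) @ T_world_cam_curr
    err = np.linalg.inv(T_rel) @ T_est
    return float(np.linalg.norm(err[:3, 3]))

# hypothetical poses: robot 1 m along x, camera mounted 0.5 m above the base
T_world_robot = np.eye(4); T_world_robot[:3, 3] = [1.0, 0.0, 0.0]
T_robot_cam = np.eye(4); T_robot_cam[:3, 3] = [0.0, 0.0, 0.5]
T_world_cam = camera_pose_from_robot(T_world_robot, T_robot_cam)
```

The residual is zero exactly when the visual estimate agrees with the BIM-map localization, so minimizing it alongside the projection error is one way the claim's two error terms could be fused.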
5. The construction operation method according to claim 3, wherein the joint optimization is performed using a nonlinear bundle adjustment method.
6. The construction operation method according to claim 4, wherein obtaining the camera projection error through feature matching comprises:
projecting onto the environment with a laser projector to obtain a plurality of projected light spots;
capturing the projected light spots with a camera device and performing ORB feature matching; and
obtaining the camera projection error according to the feature matching result.
7. The construction operation method according to claim 3, further comprising performing target recognition using a neural network, determining three-dimensional position information of an operation target according to the recognition result, and performing, by the robot, a construction operation according to the jointly optimized pose and the three-dimensional position information.
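For claim 7, once the neural network returns a bounding box for an operable target, one way its three-dimensional position could be recovered is by back-projecting the box centre through a pinhole camera model given a depth estimate at that pixel. The intrinsics and depth below are hypothetical values; the patent does not specify this exact recovery step.

```python
import numpy as np

def bbox_center_to_3d(bbox, depth_m, fx, fy, cx, cy):
    """Back-project the centre of a detected bounding box (pixel
    coordinates x0, y0, x1, y1) to a 3-D point in the camera frame
    using the pinhole model and a depth sampled at that pixel."""
    x0, y0, x1, y1 = bbox
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return np.array([(u - cx) * depth_m / fx,
                     (v - cy) * depth_m / fy,
                     depth_m])
```

Transforming the resulting camera-frame point with the jointly optimized camera pose would place the operation target in the map frame for the semantic map.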
8. A construction work system, comprising:
a current positioning information acquisition unit, configured to obtain current positioning information of a robot using the robot scene positioning method according to claim 1 or 2;
an inter-frame pose tracking unit, configured to perform inter-frame pose tracking using the current positioning information to obtain an estimated pose to be optimized;
a joint optimization unit, configured to jointly optimize the estimated pose to be optimized to obtain a jointly optimized pose; and
a construction operation unit, configured to perform a construction operation according to the jointly optimized pose and the operation position information obtained by feature extraction.
9. An electronic device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the processor is configured to perform the method of claim 1 or 2, or the method of any one of claims 3 to 7, by invoking a computer program stored in the memory.
10. A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of claim 1 or 2, or the method of any one of claims 3 to 7.
CN202010396690.2A 2020-05-12 2020-05-12 Robot scene positioning method and construction operation method Active CN111754566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010396690.2A CN111754566B (en) 2020-05-12 2020-05-12 Robot scene positioning method and construction operation method


Publications (2)

Publication Number Publication Date
CN111754566A true CN111754566A (en) 2020-10-09
CN111754566B CN111754566B (en) 2024-12-20

Family

ID=72673804



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080119961A1 (en) * 2006-11-16 2008-05-22 Samsung Electronics Co., Ltd. Methods, apparatus, and medium for estimating pose of mobile robot using particle filter
US20180075643A1 (en) * 2015-04-10 2018-03-15 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN108589979A (en) * 2018-05-28 2018-09-28 朱从兵 A kind of large space robot module separates furred ceiling decoration method and equipment
CN109459039A (en) * 2019-01-08 2019-03-12 湖南大学 A kind of the laser positioning navigation system and its method of medicine transfer robot
CN109682382A (en) * 2019-02-28 2019-04-26 电子科技大学 Global fusion and positioning method based on adaptive Monte Carlo and characteristic matching
CN110501017A (en) * 2019-08-12 2019-11-26 华南理工大学 A kind of Mobile Robotics Navigation based on ORB_SLAM2 ground drawing generating method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN, Changheng: "BIM-based Positioning and Environment Perception System for an Autonomous Decoration Robot", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 January 2021 (2021-01-15), pages 038-2177 *
WANG, Jun: "Research on Simultaneous Localization and Mapping Methods for an Autonomous Decoration Robot", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2020 (2020-02-15), pages 140-679 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant