CN111796600A - Object recognition and tracking system based on quadruped robot - Google Patents

Info

Publication number
CN111796600A
CN111796600A
Authority
CN
China
Prior art keywords
unit, information, map, feature, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010708991.4A
Other languages
Chinese (zh)
Inventor
刘涛
郝骞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN202010708991.4A priority Critical patent/CN111796600A/en
Publication of CN111796600A publication Critical patent/CN111796600A/en
Pending legal-status Critical Current

Classifications

    • G05D1/0246 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using a video camera in combination with image processing means (G PHYSICS; G05 CONTROLLING, REGULATING; G05D Systems for controlling or regulating non-electric variables; G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots)
    • G05D1/0221 — Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory, involving a learning process
    • G05D1/0276 — Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an object recognition and tracking system based on a quadruped robot, relating to the technical field of robotics, and aims to solve the problems of the weak robustness of existing environmental feature extraction algorithms, inaccurate environment-map updating, and poor real-time performance. Through the extraction of local invariant image features, the processing of uncertainty information and data association, and the updating of the environment map, the invention improves the robot's efficient recognition and processing of information as well as the accuracy and efficiency of its autonomous environment cognition.

Description

An Object Recognition and Tracking System Based on a Quadruped Robot

Technical Field

The present invention relates to the field of robotics, and in particular to an object recognition and tracking system based on a quadruped robot.

Background Art

A quadruped robot's cognition of its working scene and target objects is the premise and key to realizing applications such as autonomous navigation, object recognition and tracking, and environmental monitoring. Current research on autonomous environment cognition for quadruped robots suffers from the weak robustness of environmental feature extraction algorithms, inaccurate environment-map updating, and poor real-time performance. To address these problems, we propose an object recognition and tracking system based on a quadruped robot.

Summary of the Invention

The object recognition and tracking system based on a quadruped robot proposed by the present invention solves the problems of the weak robustness of environmental feature extraction algorithms, inaccurate environment-map updating, and poor real-time performance.

To achieve the above object, the present invention adopts the following technical solution:

An object recognition and tracking system based on a quadruped robot comprises a task unit, a map information unit, a feature processing unit, a feature extraction unit, an image preprocessing unit, and a raw data unit. The task unit establishes the task to be executed. The raw data unit acquires data through an odometer and a binocular vision sensor and transmits the acquired data to the image preprocessing unit for scene-image preprocessing. The image preprocessing unit then transmits the preprocessed data to the feature extraction unit, which extracts scene-image feature points and transmits the extracted data to the feature processing unit. The feature processing unit transmits its information to the map information unit, and the map information generated by the map information unit is compared with the information produced by the task to be executed established by the task unit.

Preferably, the tasks to be executed include autonomous navigation, object recognition and tracking, environmental monitoring, and environment cognition.

Preferably, the map information unit generates geometric maps, topological maps, grid maps, and mixed geometric-topological maps.

Preferably, the feature processing unit processes the geometric feature information, data association, and uncertainty information in the environment.

Preferably, the feature extraction unit extracts scene-image feature points, generates scene-image feature descriptors from the extracted feature points, matches the descriptors against scene-image features, and finally performs local feature description on the matched information.

Preferably, the feature extraction unit handles the influence of uncertainty information arising from feature extraction and matching and from data association with the environmental feature database, establishes an accurate correspondence between local invariant image features and map-database features, and updates the corresponding geometric-topological hybrid map library, thereby achieving accurate cognition of environmental information.

Preferably, environment cognition means that when the robot autonomously traverses a completely unknown environment using its sensor devices, it constructs a complete map of the unknown environment from the acquired sensor information and its cognitive strategy.

Compared with the prior art, the beneficial effects of the present invention are: through the extraction of local invariant image features, the processing of uncertainty information and data association, and the updating of the environment map, the present invention improves the robot's efficient recognition and processing of information as well as the accuracy and efficiency of its autonomous environment cognition.

Description of Drawings

Fig. 1 is a schematic workflow diagram of the object recognition and tracking system based on a quadruped robot proposed by the present invention;

Fig. 2 is a schematic diagram of the information processing of the object recognition and tracking system based on a quadruped robot proposed by the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

Referring to Figs. 1-2, an object recognition and tracking system based on a quadruped robot comprises a task unit, a map information unit, a feature processing unit, a feature extraction unit, an image preprocessing unit, and a raw data unit. The task unit establishes the task to be executed. The raw data unit acquires data through an odometer and a binocular vision sensor and transmits the acquired data to the image preprocessing unit for scene-image preprocessing. The image preprocessing unit then transmits the preprocessed data to the feature extraction unit, which extracts scene-image feature points and transmits the extracted data to the feature processing unit. The feature processing unit transmits its information to the map information unit, and the map information generated by the map information unit is compared with the information produced by the task to be executed established by the task unit.
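The unit-to-unit data flow described above can be illustrated by the following minimal sketch. All names and placeholder operations are assumptions for illustration only; the disclosure does not specify an implementation.

```python
# Illustrative sketch of the pipeline: raw data unit -> image preprocessing
# unit -> feature extraction unit -> feature processing unit -> map
# information unit, with the result compared against the task unit's task.
# Every function body is a stub standing in for the real processing.

def raw_data_unit():
    # Odometer reading plus a tiny stand-in stereo image pair.
    left = [[0, 1], [2, 3]]
    return {"odometry": (0.0, 0.0, 0.0), "stereo": (left, left)}

def image_preprocessing_unit(data):
    # Placeholder preprocessing: pass the images through unchanged.
    data["preprocessed"] = data["stereo"]
    return data

def feature_extraction_unit(data):
    # Placeholder extraction: treat nonzero pixels as "feature points".
    left, _right = data["preprocessed"]
    data["features"] = [(r, c) for r, row in enumerate(left)
                        for c, v in enumerate(row) if v > 0]
    return data

def feature_processing_unit(data):
    # Association / uncertainty handling omitted in this sketch.
    data["landmarks"] = data["features"]
    return data

def map_information_unit(data):
    return {"map_landmarks": data["landmarks"]}

def task_unit():
    return {"task": "object recognition and tracking"}

def compare(map_info, task_info):
    # The generated map information is compared against the task's needs.
    return {"task": task_info["task"],
            "n_landmarks": len(map_info["map_landmarks"])}

result = compare(
    map_information_unit(feature_processing_unit(
        feature_extraction_unit(image_preprocessing_unit(raw_data_unit())))),
    task_unit())
print(result["n_landmarks"])  # 3 nonzero pixels in the stub image
```

The point of the sketch is only the ordering of the units; each stub would be replaced by real preprocessing, keypoint extraction, and association logic.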

In this embodiment, the tasks to be executed include autonomous navigation, object recognition and tracking, environmental monitoring, and environment cognition.

In this embodiment, the map information unit generates geometric maps, topological maps, grid maps, and mixed geometric-topological maps.
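Minimal sketches of the map types named above may help fix the terminology; the concrete data structures are illustrative assumptions, not the patented representations.

```python
# Occupancy grid map: a matrix of cells, 0 = free, 1 = occupied.
grid_map = [[0] * 5 for _ in range(5)]
grid_map[2][3] = 1  # mark one obstacle cell

# Topological map: nodes are places, edges are traversable connections.
topological_map = {
    "doorway": ["corridor"],
    "corridor": ["doorway", "lab"],
    "lab": ["corridor"],
}

# A geometric-topological hybrid map can attach metric coordinates
# (a geometric layer) to the topological nodes.
hybrid_map = {"doorway": (0.0, 0.0), "corridor": (2.5, 0.0), "lab": (5.0, 1.0)}

occupied = sum(cell for row in grid_map for cell in row)
print(occupied)  # one occupied cell
```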

In this embodiment, the feature processing unit processes the geometric feature information, data association, and uncertainty information in the environment.

In this embodiment, the feature extraction unit extracts scene-image feature points, generates scene-image feature descriptors from the extracted feature points, matches the descriptors against scene-image features, and finally performs local feature description on the matched information.
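One common way to realize the descriptor-matching step described above is nearest-neighbor matching of binary descriptors under Hamming distance (as in ORB/BRIEF-style pipelines); this is an assumption of the sketch, since the disclosure does not name a specific descriptor.

```python
# Nearest-neighbor matching of binary feature descriptors by Hamming distance.

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two binary descriptors.
    return bin(a ^ b).count("1")

def match(query_descriptors, database_descriptors, max_distance=2):
    # For each query descriptor, find the nearest database descriptor and
    # accept the match only if it differs by at most max_distance bits.
    matches = []
    for qi, q in enumerate(query_descriptors):
        di, d = min(enumerate(database_descriptors),
                    key=lambda kv: hamming(q, kv[1]))
        if hamming(q, d) <= max_distance:
            matches.append((qi, di))
    return matches

query = [0b10110010, 0b00001111]
database = [0b10110011, 0b11110000]
pairs = match(query, database)
print(pairs)  # only the first query is within 2 bits of a database entry
```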

In this embodiment, the feature extraction unit handles the influence of uncertainty information arising from feature extraction and matching and from data association with the environmental feature database, establishes an accurate correspondence between local invariant image features and map-database features, and updates the corresponding geometric-topological hybrid map library, thereby achieving accurate cognition of environmental information.
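A standard way to make such a correspondence robust to uncertainty is Mahalanobis-distance gating of observations against map features. The 2-D diagonal-covariance form and chi-square gate below are illustrative assumptions; the disclosure does not specify the association criterion.

```python
# Uncertainty-aware data association: gate a new observation against the
# map's feature database using a squared Mahalanobis distance.

def mahalanobis_sq(obs, feat, var):
    # Squared Mahalanobis distance with a diagonal covariance (var_x, var_y).
    dx, dy = obs[0] - feat[0], obs[1] - feat[1]
    return dx * dx / var[0] + dy * dy / var[1]

def associate(obs, map_features, var=(0.25, 0.25), gate=9.21):
    # gate = chi-square 99% threshold for 2 degrees of freedom.
    best, best_d = None, float("inf")
    for name, pos in map_features.items():
        d = mahalanobis_sq(obs, pos, var)
        if d < best_d:
            best, best_d = name, d
    # None means no map feature passed the gate: treat as a new feature
    # and extend the map database instead of forcing a wrong match.
    return best if best_d <= gate else None

features = {"corner_a": (1.0, 1.0), "corner_b": (4.0, 0.0)}
near = associate((1.2, 0.9), features)    # close to corner_a
far = associate((10.0, 10.0), features)   # passes no gate
print(near, far)
```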

In this embodiment, environment cognition means that when the robot autonomously traverses a completely unknown environment using its sensor devices, it constructs a complete map of the unknown environment from the acquired sensor information and its cognitive strategy.

Working principle: a quadruped robot's cognition and understanding of its environment is the primary prerequisite for autonomous navigation, including path planning, autonomous localization, and map creation. In vision-based autonomous navigation of a quadruped robot, visual sensors are mainly used to perceive the working scene, and image features are extracted to represent the typical environmental information around the robot. Stable image feature points are extracted to represent actual physical points in 3D space and serve as natural features for constructing a geometric map of the environment. At the same time, by matching against the features in the environment map (natural feature library) created before the current moment, the robot's current pose is estimated and the natural feature library is updated, thereby realizing the quadruped robot's autonomous cognition process. By handling the influence of uncertainty information such as feature extraction and matching and data association with the environmental feature database, an accurate correspondence between local invariant image features and map-database features is established, and the corresponding geometric-topological hybrid map library is updated, achieving accurate cognition of environmental information.
When the robot autonomously traverses a completely unknown environment using the sensor devices it carries, it constructs a complete or nearly complete map of the unknown environment from the acquired sensor information and its cognitive strategy. In specific applications, the tasks of the quadruped robot include self-localization, path planning and navigation, and autonomous environment cognition. To meet the requirements of different tasks, the robot needs to create corresponding environment maps in the environment-cognition process, such as geometric maps, topological maps, grid maps, and geometric-topological hybrid maps.
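The cognition loop sketched above — match observed features against the natural-feature library, estimate the robot's pose from the matched pairs, then extend the library with new features — can be caricatured as follows. Translation-only pose estimation and the stubbed matching step are simplifying assumptions.

```python
# Caricature of the cognition loop: match, estimate pose, update library.

def estimate_translation(pairs):
    # Average displacement between library features and their observations,
    # a translation-only stand-in for full pose estimation.
    n = len(pairs)
    tx = sum(o[0] - m[0] for m, o in pairs) / n
    ty = sum(o[1] - m[1] for m, o in pairs) / n
    return tx, ty

library = [(0.0, 0.0), (1.0, 0.0)]               # natural feature library
observed = [(0.5, 0.0), (1.5, 0.0), (3.0, 3.0)]  # last one is a new feature

# Matching step stubbed out: assume the first two observations correspond
# to the two library features in order.
pairs = list(zip(library, observed[:2]))
pose = estimate_translation(pairs)  # robot has moved (0.5, 0.0)
library.extend(observed[2:])        # update the natural feature library
print(pose, len(library))
```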

The above is only a preferred specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent replacement or modification made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical solution and inventive concept of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. An object recognition and tracking system based on a quadruped robot, comprising a task unit, a map information unit, a feature processing unit, a feature extraction unit, an image preprocessing unit and a raw data unit, characterized in that the task unit establishes a task to be executed; the raw data unit acquires data through an odometer and a binocular vision sensor and transmits the acquired data to the image preprocessing unit for scene-image preprocessing; the image preprocessing unit transmits the preprocessed data to the feature extraction unit to extract scene-image feature points; the feature extraction unit transmits the extracted data to the feature processing unit; the feature processing unit transmits the information to the map information unit; and the map information generated by the map information unit is compared with the information produced by the task to be executed established by the task unit.
2. The system of claim 1, wherein the tasks to be executed comprise autonomous navigation, object recognition and tracking, environmental monitoring and environment cognition.
3. The quadruped robot-based object recognition and tracking system of claim 1, wherein the map information unit generates a geometric map, a topological map, a grid map, and a mixed geometric-topological map.
4. The quadruped robot-based object recognition and tracking system of claim 1, wherein the feature processing unit is configured to process the geometric feature information, data association and uncertainty information in the environment.
5. The system of claim 1, wherein the feature extraction unit extracts scene-image feature points, generates scene-image feature descriptors from the extracted feature points, matches the descriptors against scene-image features, and performs local feature description on the matched information.
6. The system of claim 1, wherein the feature extraction unit, handling the influence of uncertainty information arising from feature extraction and matching and from data association with the environmental feature database, establishes an accurate correspondence between local invariant image features and map-database features, and updates the corresponding geometric-topological hybrid map library to achieve accurate cognition of environmental information.
7. The system of claim 2, wherein environment cognition means that, when the robot autonomously traverses a completely unknown environment using its sensor devices, a complete map of the unknown environment is constructed from the acquired sensor information and the cognitive strategy.
CN202010708991.4A 2020-07-22 2020-07-22 Object recognition and tracking system based on quadruped robot Pending CN111796600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010708991.4A CN111796600A (en) 2020-07-22 2020-07-22 Object recognition and tracking system based on quadruped robot

Publications (1)

Publication Number Publication Date
CN111796600A true CN111796600A (en) 2020-10-20

Family

ID=72827428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010708991.4A Pending CN111796600A (en) 2020-07-22 2020-07-22 Object recognition and tracking system based on quadruped robot

Country Status (1)

Country Link
CN (1) CN111796600A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101920498A (en) * 2009-06-16 2010-12-22 泰怡凯电器(苏州)有限公司 Device and robot for simultaneous localization and map creation of indoor service robots
CN103268729A (en) * 2013-05-22 2013-08-28 北京工业大学 Mobile robot cascading type map creating method based on mixed characteristics
KR20140013172A (en) * 2012-07-19 2014-02-05 고려대학교 산학협력단 Method for building the map of a mobile robot and recognizing the position of the mobile robot
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN104463108A (en) * 2014-11-21 2015-03-25 山东大学 Monocular real-time target recognition and pose measurement method
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 Vision-based feature point extraction method
CN109870167A (en) * 2018-12-25 2019-06-11 四川嘉垭汽车科技有限公司 Simultaneous localization and map creation method for vision-based driverless cars
CN110411441A (en) * 2018-04-30 2019-11-05 北京京东尚科信息技术有限公司 System and method for multi-modal mapping and positioning
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion

Similar Documents

Publication Publication Date Title
CN109074085B (en) Autonomous positioning and map building method and device and robot
CN110686677A (en) Global positioning method based on geometric information
Miller et al. Any way you look at it: Semantic crossview localization and mapping with lidar
CN114413881A (en) Method and device for constructing high-precision vector map and storage medium
US20190226851A1 (en) Driver assistance system for determining a position of a vehicle
CN101441769A (en) Real time vision positioning method of monocular camera
Meng et al. Efficient and reliable LiDAR-based global localization of mobile robots using multiscale/resolution maps
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN110487286A (en) It is a kind of to project the robot pose determining method merged with laser point cloud based on point feature
CN111283730B (en) Robot initial pose acquisition method based on point-line characteristics and starting self-positioning method
Song et al. Bundledslam: An accurate visual slam system using multiple cameras
Qian et al. Wearable-assisted localization and inspection guidance system using egocentric stereo cameras
CN115700507B (en) Map updating method and device
Tschopp et al. Superquadric object representation for optimization-based semantic SLAM
CN118548874A (en) Map updating method and system based on point cloud descriptors
Esfahani et al. Unsupervised scene categorization, path segmentation and landmark extraction while traveling path
CN117570959A (en) Man-machine collaborative rescue situation map construction method
JP2007249592A (en) 3D object recognition system
Nguyen et al. A visual SLAM system on mobile robot supporting localization services to visually impaired people
CN106650814A (en) Vehicle-mounted monocular vision-based outdoor road adaptive classifier generation method
Hwang et al. Monocular vision-based global localization using position and orientation of ceiling features
CN111796600A (en) Object recognition and tracking system based on quadruped robot
Hofstetter et al. On ambiguities in feature-based vehicle localization and their a priori detection in maps
Saito et al. Pre-driving needless system for autonomous mobile robots navigation in real world robot challenge 2013
Aggarwal Machine vision based SelfPosition estimation of mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201020