
TWI851310B - Robot and method for autonomously moving and grabbing objects - Google Patents


Info

Publication number
TWI851310B
TWI851310B
Authority
TW
Taiwan
Prior art keywords
grasping
robot
robot arm
mobile platform
camera
Prior art date
Application number
TW112124363A
Other languages
Chinese (zh)
Other versions
TW202438256A (en)
Inventor
宋開泰
邱建緯
Original Assignee
國立陽明交通大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立陽明交通大學
Priority to US18/384,377 (published as US20240316767A1)
Application granted
Publication of TWI851310B
Publication of TW202438256A

Landscapes

  • Manipulator (AREA)

Abstract

A robot and method for autonomously moving and grasping objects. The robot comprises a robot arm and a mobile platform that carries it. The mobile platform navigates the robot to the location of the target object to be grasped and has: a semantic navigation system, which navigates the platform to the target's location; a first camera, which captures the external environment during navigation; an automatic docking coordination controller, which optimizes the moving path and docking position for autonomous grasping; a mobile platform controller, which controls the platform's movement; and a drive system, which propels the platform. The robot arm grasps the target object and has: a second camera, which gives the arm an understanding of its environment; an object recognition and pose estimation system, which performs semantic recognition, segmentation, and pose estimation of the grasping target; a grasp index calculation module, which determines the most appropriate grasping pose; and a mobile grasping controller, which controls the robot's motion.

Description

Robot and method for autonomously moving and grasping objects

The present invention is a robot arm that can autonomously move and grasp objects, applicable to industries such as robotic automation and intelligent manufacturing systems.

Granted US patent US9785911B2 proposes a method and system, comprising a central server and a robot arm, for sorting or shelving in a logistics facility. The central server is configured to communicate with the robot to send and receive picking data; the robot can autonomously navigate and localize itself within the facility by identifying landmarks through at least one of a plurality of sensors, which also provide signals for detecting, identifying, and locating the workpieces to be picked up or stored. However, the technology disclosed in that patent localizes only against known landmarks and therefore lacks application flexibility.

Granted US patent US11033338B2 proposes an adaptive grasp-planning technique for picking from cluttered stacks: it analyzes the workpiece shape to identify multiple robust grasp options, each with a position and orientation, further analyzes the shape to determine multiple stable intermediate poses, and evaluates each individual workpiece in the bin to identify a set of feasible grasps. However, that patent describes only a single workpiece and offers no solution for differing workpieces.

Published US patent application US20180161986A1 proposes a semantic simultaneous tracking, object recognition, and three-dimensional (3D) mapping system that maintains a world map composed of static and dynamic objects rather than a 3D point cloud, and learns the semantic properties of objects in real time; a robot can use this semantic information to improve its navigation and localization. However, that application uses objects only for navigation and does not address grasping them.

The paper "K. T. Song and L. Kang, 'Object-Oriented Navigation with a Multi-layer Semantic Map,' in Proc. of the 13th Asian Control Conference (ASCC 2022), Jeju Island, Korea, 2022, pp. 1386-1391" proposes an automated-guided-vehicle navigation system based on semantic map information. The system performs semantic simultaneous localization and mapping (SLAM) over the workspace in which the robot navigates, generating an environmental semantic SLAM map for semantic navigation tasks. However, the semantic information is used only for vehicle navigation; no grasp planning is performed on objects.

The paper "E. Brzozowska, O. Lima and R. Ventura, 'A Generic Optimization Based Cartesian Controller for Robotic Mobile Manipulation,' in Proc. of 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 2054-2060" considers a robot arm mounted on a mobile base and, given a three-dimensional target pose, determines the optimal joint velocities of the arm and the mobile platform. That work builds a real-time closed-loop architecture between depth-camera sensing and the arm and platform, but does not consider the arm's docking requirements.

The paper "S. Zhu, X. Zheng, M. Xu, Z. Zeng and H. Zhang, 'A Robotic Semantic Grasping Method For Pick-and-place Tasks,' in Proc. of 2019 Chinese Automation Congress (CAC), Hangzhou, China, 2019, pp. 4130-4136" proposes a robotic semantic grasping method that estimates a six-degree-of-freedom (6-DOF) grasp pose by combining pixel-wise semantic segmentation for object detection with point-cloud processing, computing grasp configurations perpendicular to the object surface. Because it grasps only at the center-of-mass position, it cannot satisfy the grasping requirements of all objects.

The main purpose of the present invention is to enable a robot arm to complete grasping tasks autonomously in everyday environments. Semantic information is therefore combined so that the robot can understand its environment; object pose estimation and grasp-planning design improve the grasp success rate; and a mobile grasping controller brings the arm pose and the mobile platform to the grasping position simultaneously, so that grasping is completed efficiently.

A second purpose of the present invention is to optimize the motion of the robot arm and the mobile platform for mobile grasping, while accounting for the arm's hardware limits and the platform's docking position.

A further purpose of the present invention is applicability to the robotic-automation and intelligent-manufacturing industries: it enhances robot flexibility to meet the needs of flexible manufacturing systems, adapts to changing environments when executing tasks, and enlarges the robot arm's workspace.

Yet another purpose of the present invention is application to collaborative, industrial, and companion robots. Collaborative robots can share a working environment with human workers to perform tasks and are easy to reprogram in response to changes in the manufacturing process; in this way the robot arm can move around manufacturing cells to perform tasks such as transporting finished and semi-finished products and automated assembly.

To achieve the foregoing and other purposes, the present invention provides a robot and method for autonomously moving and grasping objects. The robot comprises a robot arm and a mobile platform carrying the arm. The mobile platform, which navigates the robot to the location of the target object to be grasped, has: a semantic navigation system for navigating the platform to the target's location; a first camera for capturing the external environment during navigation; an automatic docking coordination controller for optimizing the robot's docking path and grasping pose; a mobile platform controller for controlling the platform's movement; and a drive system for propelling the platform. The robot arm, which grasps the target object, has: a second camera that gives the arm an understanding of its environment; an object recognition and pose estimation system for semantic recognition, segmentation, and pose estimation of the grasping target; a grasp index calculation module for determining the most appropriate grasping pose; and a mobile grasping controller for controlling the robot's motion.

1: Robot
7: Mobile manipulator
10: First camera
11: Second camera
20: Semantic navigation system
30: Automatic docking coordination controller
40: Robot arm gripper
100: Robot arm
200: Mobile platform
102: Object recognition and pose estimation system
104: Mobile grasping controller
202: Mobile platform controller
1021: Image preprocessing module
1022: Semantic segmentation module
1023: Pose estimation module
S1~S8: Steps

Figure 1 is a module block diagram of the robot; Figure 2 is a system architecture diagram of the robot arm; Figure 3 is an architecture diagram of the object recognition and pose estimation system; Figure 4A is the grasp-planning architecture of the robot arm; Figure 4B is a 3D view of the grasp target object; Figure 4C is a plan view of the grasp target object; Figure 4D marks the grasping position on the target object; Figure 5 is a flowchart of object grasping by the robot arm; Figure 6 is a flowchart of the arm's motion planning; Figure 7 is a flowchart of the robot's docking-position planning; Figure 8 is the architecture of the mobile-grasping motion control system; and Figure 9 is a flowchart of a method for autonomously moving and grasping objects according to an embodiment of the present invention.

Referring to Figure 1, the present invention is a robot and method for autonomously moving and grasping objects. The robot 1 comprises a robot arm 100 and a mobile platform 200 carrying the arm. The mobile platform 200, used to move the robot 1 to the location of the target object to be grasped by the arm 100, has: a semantic navigation system 20, electrically connected to the mobile platform 200, for navigating the platform to the target's location; a first camera 10, electrically connected to the semantic navigation system 20, for capturing the external environment during navigation; it provides high-quality synchronized color and depth video and is an RGB-D camera; an automatic docking coordination controller 30, electrically connected to the robot arm 100 and the mobile platform 200, for obtaining the best mobile-grasping path and pose of the arm and the platform; and a mobile platform controller 202, electrically connected to the mobile platform 200, for controlling the robot 1's movement. The robot arm 100, used to grasp the target object, has: an object recognition and pose estimation system 102, electrically connected to the arm, for semantic recognition, segmentation, and pose estimation of the grasping target; a mobile grasping controller 104, electrically connected to the arm, for controlling the arm's movement so that it docks at a better position for grasping; and a second camera 11, electrically connected to the object recognition and pose estimation system 102, which gives the arm an understanding of its environment and is an eye-in-hand RGB-D camera.

Continuing with Figure 1, the object recognition and pose estimation system 102 comprises: an image preprocessing module 1021, which processes the frames captured by the second camera 11 and preprocesses the RGB-D images; a semantic segmentation module 1022, electrically connected to the image preprocessing module 1021, which performs semantic segmentation on the preprocessed RGB color image; and a pose estimation module 1023, electrically connected to the semantic segmentation module 1022 and the image preprocessing module 1021, which estimates pose, computing the six-degree-of-freedom (6-DOF) pose estimate of the object after preprocessing and semantic segmentation.

Figure 2 shows the system architecture of the present invention. The second camera 11 transmits its color and depth images to the object recognition and pose estimation system 102, which performs object-recognition semantic segmentation in block 211. The system applies object recognition and pose estimation algorithms: a lightweight deep-learning model, ESPNetv2, performs real-time semantic segmentation, and the segmented color image with its corresponding point-cloud information is fed into a lightweight DenseFusion model for pose estimation. After the robot arm 100 reaches the grasping position of the target object, the system 102 performs a second pose estimation of the target through the second camera 11 above it; estimating the pose at closer range improves recognition accuracy. Blocks 212-214 then perform the following: once the target's pose and contour are obtained, the proposed Grasp-Index-based grasp-planning algorithm produces the grasp plan of block 212, which plans a better grasping pose for the target from the relationship between the grasp position's distance to the centroid and the width of the grasped segment, and then controls the arm 100 to raise the grasp success rate. Meanwhile block 213 performs object localization and block 214 performs docking planning to produce the target pose, which is input to the automatic docking coordination controller 30. That controller simultaneously transmits its results to the robot arm 100 and the mobile platform controller 202, so that in block 218 the robot moves the arm 100 to grasp the target object, improving the efficiency and robustness of mobile grasping.

Continuing with Figure 2, the first camera 10 sends color and depth images to the semantic navigation system 20. Through blocks 221, 222, 231 and 232, that system builds a map using Real-Time Appearance-Based Mapping (RTAB-Map) combined with an object-detection algorithm, relying on pre-built semantic map information. The semantic SLAM (Simultaneous Localization and Mapping) localization of block 221 improves the robot arm 100's understanding of the environment and increases task flexibility; the path planning of block 222 lets the mobile platform 200 plan a path after receiving semantic information and pass the result to the navigation controller 232; the laser SLAM localization of block 231 takes input from the laser scanner 230, and through the navigation controller 232 and the mobile platform controller 202 the robot autonomously navigates and docks at the target point. The semantic navigation system 20 transmits object poses and recognition results to the mobile platform controller 202 and the robot arm 100; with the motion-planning and grasp-planning algorithms, the arm 100 and the platform 200 can move simultaneously, improving grasping efficiency. The mobile platform 200 docks smoothly in front of the workstation where the grasping task is easier to perform, the arm 100 arrives above the object ready to grasp, and in block 218 the robot moves the arm 100 to grasp the target object.

Figure 3 shows the architecture of the object recognition and pose estimation system 102. The image preprocessing module 1021 performs the preprocessing of block 301 on the RGB color image input from the second camera 11 to obtain the object's depth point-cloud information; this preprocessing reduces the effect of lighting changes in the environment and improves the system's stability. The preprocessed RGB image is passed to the semantic segmentation of block 302, where the semantic segmentation module 1022 uses the ESPNetv2 model of block 305 to segment the image, isolating the recognized object at pixel-wise resolution. The segmented image is then passed to block 303 for object pose estimation, where the pose estimation module 1023 uses the DenseFusion model of block 306 to combine the segmented color image with the depth point cloud obtained from the RGB-D camera, estimating the object's pose to compute the 6-DOF pose result of block 304.
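The two-stage flow above (pixel-wise segmentation, then fusion of the segmented image with the masked depth points) can be sketched as follows. This is a hedged sketch only: `segment_fn` and `pose_fn` are hypothetical stand-ins for the ESPNetv2 and DenseFusion models described in the text, not real library calls.

```python
# Hedged sketch of the Fig. 3 pipeline. segment_fn and pose_fn are hypothetical
# placeholders for the ESPNetv2 segmentation and DenseFusion pose models.
def estimate_object_pose(rgb, depth, segment_fn, pose_fn):
    """Return a 6-DOF pose estimate for the segmented target object."""
    mask = segment_fn(rgb)                 # block 302: pixel-wise object mask
    # block 303: keep only the depth samples that fall on the object
    cloud = [d for d, m in zip(depth, mask) if m]
    return pose_fn(rgb, mask, cloud)       # block 304: 6-DOF estimate
```

Injecting the two models as callables mirrors the modular split of blocks 305 and 306, where either network could be swapped without changing the pipeline.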

The mobile grasping controller 104 applies a grasp-planning algorithm that designs the arm 100's grasping pose from the known appearance contour of the object. The core idea is that, once the pose and contour of the target are obtained, those conditions select the range of the object that is better to grasp, where "better to grasp" means the grasp success rate increases, i.e., grasps of the known target are more stable.

Figure 4A shows the grasp-planning architecture of the robot arm 100. The grasp-planning algorithm divides into three parts: obtaining the centroid and contour segments, designing the Grasp Index, and converting the grasping pose. The first part, obtaining the centroid and contour segments, passes the object contour and 6D pose 400 to block 401, which obtains the object's image centroid and slices the object; knowing the centroid first, and taking its position as the reference, contour segments of the target are selected for evaluating grasp positions. Block 402 determines the grasp position through the Grasp Index, which is designed to evaluate candidate grasp positions over the object and select the best one. Finally, block 403 performs the grasping-pose coordinate conversion: because the computed grasp position is based on the target contour in the image plane, converting the grasping pose carries the grasp information from the plane into three-dimensional space, leading to the robot control of block 404.

Continuing with Figure 4A, in the first part the centroid and contour segments are obtained: the object's centroid position in the image plane is found first and taken as the reference, and the object's orientation is obtained through its 3D model combined with the DenseFusion pose-estimation model. The orientation defines the object's long axis; the gripper grasps perpendicular to the long axis, and the object's contour is cut into segments along the long-axis direction for evaluating grasp positions. In the contour image, the centroid is computed from the labeled pixels as in Eqs. (3.1)-(3.4):

M00 = Σ_x Σ_y v(x, y) (3.1)

M10 = Σ_x Σ_y x·v(x, y) (3.2)

M01 = Σ_x Σ_y y·v(x, y) (3.3)

(m_x, m_y) = (M10/M00, M01/M00) (3.4)

M00 is the object's area, and M10 and M01 are the object's first-order moments about the x- and y-axes in the image plane; from these the object's centroid (m_x, m_y) in the image plane is computed.
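As a minimal sketch, the moment-based centroid of Eqs. (3.1)-(3.4) amounts to the following computation over a binary object mask v(x, y) (1 for object pixels, 0 for background); the function name is illustrative only.

```python
def centroid_from_mask(mask):
    """Centroid (m_x, m_y) of a binary mask v(x, y) via image moments."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            m00 += v       # Eq. (3.1): object area
            m10 += v * x   # Eq. (3.2): first moment about the x-axis
            m01 += v * y   # Eq. (3.3): first moment about the y-axis
    return m10 / m00, m01 / m00  # Eq. (3.4): (m_x, m_y)
```

For example, a 2x2 block of object pixels spanning columns 1-2 of rows 0-1 yields the centroid (1.5, 0.5), the geometric center of the block.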

The contour segments are cut along the long-axis direction described above to evaluate grasp positions. In practice, the object's 3D model gives the direction normal to the object's grasping surface; as shown in Figure 4B, we define the Z axis of the gripper 40 to be perpendicular to the object's Z axis, i.e., its long-axis direction, denoted R_Z_tcp ⊥ R_Z_obj. This perpendicular relationship not only makes the object easier for the gripper to grasp; the long-axis direction can also be converted into the object's axis in the contour image, from which the segment width perpendicular to that axis is obtained. In practice, starting from the centroid and moving toward both ends along the axis, a segment-width value is computed at every 5% displacement; the resulting contour image and width l are shown in Figure 4C.
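The 5%-step width sampling can be sketched as below. This is a sketch under assumptions: `width_at` is a hypothetical callback returning the contour-segment width (in pixels) at a signed fractional offset along the long axis (0 at the centroid, ±1 at the object's ends).

```python
def sample_segment_widths(width_at, step=0.05):
    """Sample (r_i, l_i) width candidates on both sides of the centroid.

    width_at(r) gives the segment width at signed offset r along the long
    axis; r_i is the unsigned fraction of the distance from the centroid.
    """
    n = int(1 / step)                     # 20 steps of 5% per side
    candidates = []
    for k in range(-n, n + 1):            # sweep from one end to the other
        r = k * step
        candidates.append((abs(r), width_at(r)))
    return candidates
```

The (r_i, l_i) pairs produced here are exactly the quantities the Grasp Index of Eq. (3.5) consumes.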

In the second part, the Grasp Index is designed to determine the most appropriate grasping pose. With the centroid and contour segments obtained, the centroid serves as the reference for understanding the object's position, the long axis from the object's 3D model serves as the forward direction, and the width of each segment cut from the contour image feeds the index, which raises the grasp success rate for the object of interest. The index is computed from two parameters. The first is the distance between the grasp position and the centroid: for grasping, the closer to the centroid the better; but the centroid itself is not necessarily a good grasp position, since it may lie on a curved part, or the object's mass may not be evenly distributed. The second parameter is therefore the width at the grasp position: the smaller the grasp width, the more stable the gripper's grasp. Combining the two considerations, a grasp position that is close to the centroid and where the object is narrow is preferred; the object width and the distance to the centroid together form the Grasp Index (GI) of Eq. (3.5), and the candidate with the largest (optimal) GI is selected as the grasp position.

GI = (1 - r_i)(l_max - l_i) (3.5)

Among the index's parameters, r_i is the fractional distance of candidate grasp position i from the object's centroid, expressed as a percentage and measuring how far the candidate lies from the centroid; l_max is the longest width (in pixels) among the contour-image segments, against which the candidate widths are compared; and l_i is the contour-segment width (in pixels) at candidate position i. The algorithm evaluates candidates on both sides of the centroid and takes the one with the largest GI as the grasp position.
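Under these definitions, Eq. (3.5) and the arg-max selection can be sketched directly; the candidate tuples (r_i, l_i) are assumed given, e.g. from the width sampling described earlier, and the function name is illustrative.

```python
def select_grasp(candidates):
    """Pick the candidate (r_i, l_i) maximizing GI = (1 - r_i)(l_max - l_i)."""
    l_max = max(l for _, l in candidates)          # widest segment, in pixels
    scores = [(1.0 - r) * (l_max - l) for r, l in candidates]  # Eq. (3.5)
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

Note how the two factors trade off: a candidate exactly at the centroid (r_i = 0) can still lose to a slightly off-center but much narrower segment, which is the behavior the text motivates.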

In the third part, grasp planning for the robot arm 100 takes place in three-dimensional space, so the optimal grasping position planned on the two-dimensional image plane by the grasping index must be converted, through a grasp-pose transformation, into the three-dimensional coordinates the robot arm uses for grasp planning via inverse kinematics, namely the translation (Translation) and rotation (Rotation) of the object in space. The translation is obtained through the pinhole model, which projects a point of the object on the two-dimensional plane into space coordinates, yielding the displacement (X tcp , Y tcp , Z tcp ) of the grasping position as in Formula 3.6:

(Formula 3.6 is rendered as an image in the original.)

Here X obj , Y obj , Z obj are the position of the object in three-dimensional coordinates after pose estimation; u O , v O are the center of the object on the two-dimensional image plane; u p , v p are the center of the grasping position computed by the grasping index of Formula (3.5), as shown in FIG. 4D; and c x and c y are intrinsic parameters of the first camera 10, representing the x- and y-coordinate of the image center, respectively.
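Formula 3.6 appears only as an image in the original, but the surrounding text names a pinhole-model back-projection. The sketch below shows the standard pinhole back-projection of a pixel at a known depth; the focal lengths fx and fy are assumptions on my part, since the text names only the principal point (c_x, c_y), and the depth stands in for Z_obj from the pose estimate.

```python
# Standard pinhole back-projection sketch. fx, fy (focal lengths in
# pixels) are assumed parameters not named in the text; cx, cy are the
# principal point the text does name. The result is the camera-frame
# position of the grasp-center pixel (u_p, v_p) at the estimated depth.

def backproject(u_p, v_p, depth, fx, fy, cx, cy):
    X = (u_p - cx) * depth / fx
    Y = (v_p - cy) * depth / fy
    Z = depth
    return X, Y, Z

# A pixel at the principal point maps to the optical axis.
X, Y, Z = backproject(u_p=320.0, v_p=240.0, depth=0.5,
                      fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```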

After obtaining the object's translation in three-dimensional space, the rotation must be calculated. From the 3D model of the object, the end-effector pose is taken perpendicular to the grasping face of the object, i.e., R Z_tcp follows R Z_obj , since this perpendicular relationship yields a more stable grasp. In addition, on the grasping plane the gripper's grasping direction should be parallel to the computed contour-segment width l of the color image, which fixes R x_tcp , and R y_tcp is then obtained as the cross product of the other two rotation axes. After this calculation, the rotation of the end-effector in space coordinates (R x_tcp , R y_tcp , R z_tcp ) is given by Formula 3.7:

(Formula 3.7 is rendered as an image in the original.)

After the designed grasp-pose transformation, the grasping position of the object in the two-dimensional image plane, combined with the 3D model of the grasped object, yields the final end-effector grasping translation (X tcp , Y tcp , Z tcp ) and end-effector rotation (R x_tcp , R y_tcp , R z_tcp ) in three-dimensional space.
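The frame construction described above — tool z-axis from the grasp-face normal, tool x-axis along the contour-width direction, remaining axis from their cross product — can be sketched with NumPy as follows. This is a minimal illustration of the stated geometry, not the patented computation:

```python
import numpy as np

# Build a right-handed tool frame: z from the grasp-face normal,
# x along the measured contour-width direction (projected to stay
# orthogonal to z), and y as their cross product.

def tool_rotation(z_axis, x_axis):
    z = np.asarray(z_axis, dtype=float)
    z /= np.linalg.norm(z)
    x = np.asarray(x_axis, dtype=float)
    x = x - np.dot(x, z) * z          # remove any component along z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                # completes the orthonormal frame
    return np.column_stack([x, y, z])  # columns are the tool axes

# Example: gripper approaching straight down with x along the contour.
R = tool_rotation(z_axis=[0, 0, -1], x_axis=[1, 0, 0])
```

The resulting matrix is orthonormal with determinant +1, i.e., a valid rotation.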

The first camera 10, positioned above the robot arm 100, performs object recognition; after object pose estimation, the semantic-segmentation image and three-dimensional pose of the object to be grasped are obtained. With the known 3D model, the Grasp Index computation gives the optimal grasping position of the object in the two-dimensional image plane, and the grasp-pose transformation then converts that two-dimensional grasping position into the end-effector's position in three-dimensional space for grasping the object.

FIG. 5 is a flowchart of autonomous grasping by the robot arm 100. First, in step 500, the object's pose and contour are processed: the pose and contour of the grasped object obtained by the object recognition and pose estimation system 102 are fed to the grasp-planning algorithm to obtain the end-effector's grasping position for the object in space. Step 501 then performs grasp planning; the planned end-effector grasping coordinates pass through the inverse-kinematics computation of step 502, and in step 503 six-axis velocity commands are issued to the arm controller, whereupon, in step 504, the robot arm 100 and the gripper complete the object grasp.

FIG. 6 is a motion-planning flowchart of the mobile robot arm 100. First, in step 600, it is determined whether the robot 1 has reached the target. If not, the flow follows N back to step 600 and continues navigating with the landmark semantic information built on top of the semantic map, so that the robot can navigate to the target semantic space. When the robot reaches the target space, the flow follows Y to step 602, where the second camera 11 on the robot arm 100 performs target-object detection for object pose recognition. Step 603 then performs secondary motion planning of the robot arm 100 based on the pose-recognition result to obtain the target pose estimate, and finally the automatic docking coordination controller 30 docks the robot arm 100 correctly in front of the object to be grasped.

Since the robot 1 is a redundant system, infinitely many planned paths would allow the robot 1 to complete the grasp, but some of them risk collision or make the grasping task hard to complete. The automatic docking coordination controller 30 therefore applies position-based visual-servoing closed-loop velocity optimization to control the mobile robot. First, the pose error between the current and target end-effector poses is computed, and the mobile grasping controller 104 generates feasible joint velocities that minimize this error. The minimization is formulated as a constrained optimization problem in which the constraints are the joint velocity limits.

The mobile platform controller 202 applies a docking motion-planning algorithm for the mobile platform to prevent the robot arm 100 from colliding with the docking station and to dock at a position suitable for grasping objects from a cluttered stack. Because the robot 1 has a long footprint, planning the docking position improves the safety and robustness of grasping while avoiding collision with the station; FIG. 7 is the docking-position planning flowchart of the robot 1. First, in step 700, the second camera 11 above the robot arm 100 performs object recognition through the object recognition and pose estimation module 102. In step 701, the pose estimation module 1023 estimates the three-dimensional pose of the object, and through the transform chain of the robot arm 100 the object can be localized on the two-dimensional map. The object pose expressed in the map frame is given by Formula (4.1):

T_obj^map = T_base^map · T_cam^base · T_obj^cam (4.1)

where T_obj^cam is the pose of the grasped object in the camera frame, obtained from the pose estimation module 1023; T_cam^base is the known transform between the camera coordinates and the base; and T_base^map is the pose of the robot arm 100 on the map obtained by localization with the semantic navigation system 20. Multiplying these matrices yields T_obj^map, the position of the object relative to the map.
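Formula (4.1) is an ordinary chain of homogeneous transforms, and can be illustrated with 4x4 matrices as follows; the numeric poses are placeholders, not values from the patent:

```python
import numpy as np

# Formula (4.1): map_T_obj = map_T_base @ base_T_cam @ cam_T_obj
# Minimal sketch with identity rotations and illustrative translations.

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

cam_T_obj  = make_T(np.eye(3), [0.0, 0.0, 0.5])   # object 50 cm in front of camera
base_T_cam = make_T(np.eye(3), [0.2, 0.0, 1.0])   # camera mounted on the base
map_T_base = make_T(np.eye(3), [3.0, 1.0, 0.0])   # robot localized on the map

map_T_obj = map_T_base @ base_T_cam @ cam_T_obj   # object expressed in map frame
```

With identity rotations the translations simply accumulate, placing the object at (3.2, 1.0, 1.5) in the map frame.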

Continuing with FIG. 7, the flow proceeds to step 702. Once the object's position on the map has been identified, its coordinates can be described in the map frame: the mobile platform 200 is aligned parallel to the object's x-axis on the map, shares the object's x-coordinate, and docks with its y-coordinate 80 cm beside the object. The docking position of the mobile platform 200 is thus determined by Formula (4.2):

(Formula 4.2 is rendered as an image in the original; it fixes X b to X o , places Y b 0.8 m from Y o , and sets θ b so the platform is parallel to the object's x-axis.)

Here X b , Y b , θ b are the docking pose of the mobile platform 200 in map coordinates, and X o , Y o are the position of the on-station object localized on the map by Formula (4.1). In this way the docking position of the mobile platform 200 is determined.
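The docking rule of Formula (4.2) can be sketched as below. The sign of the 0.8 m offset and the numeric heading are assumptions, since the text states only that the platform is parallel to the object's x-axis and stops 80 cm beside it:

```python
# Sketch of the docking rule in Formula (4.2): keep the object's
# x-coordinate and stop 0.8 m away along y. The offset sign and the
# heading value are assumptions for illustration.

def docking_pose(x_o, y_o, offset=0.8):
    x_b = x_o
    y_b = y_o - offset      # assumed: dock on the -y side of the object
    theta_b = 90.0          # degrees; assumed heading facing the object
    return x_b, y_b, theta_b

pose = docking_pose(2.0, 1.5)
```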

Continuing with FIG. 7, the flow finally reaches step 703. Through the semantic segmentation and recognition algorithm of the semantic segmentation module 1022, the pose-estimation results T obj and R obj , representing the translation and rotation of the object, are obtained, and the position of the object relative to the docking station follows the transform relationship of Formula (4.3):

T_obj^station = T_base^station · T_cam^base · T_obj^cam (4.3)

where T_obj^cam, the pose of the grasped object in the camera frame, is known from T obj and R obj ; T_cam^base is the known transform between the camera coordinates and the base; and T_base^station is the relationship between the position of the mobile platform 200 and the docking position, so that T_obj^station is the relationship between the docking position and the target object. Combining the docking position of the mobile platform 200 with the terminal pose of the robot arm 100 gives the target pose P target , R target : P target comprises the position (X b , Y b , θ b ) of the mobile platform 200 and the final target translation (T tcp ) of the robot arm 100, while R target is the rotation (R tcp ) of the robot arm 100 when it finally reaches the target point.

FIG. 8 illustrates the architecture of the mobile grasping motion-control system. In step 800, docking-position planning is performed through the object recognition and pose estimation system 102 and the grasp-planning algorithm; step 801 then uses forward kinematics to obtain the end-effector grasping pose for the object to be grasped. Once the 6-DOF pose of the terminal gripper is available, the pose error between it and the target end-effector pose is computed, and the automatic docking coordination controller 30 moves the robot into a pose suitable for grasping. Because of the joint limits of the mobile manipulator, steps 802 and 803 control the robot arm 100 and the mobile platform 200 respectively; the automatic docking coordination controller 30 can also actuate both simultaneously to eliminate this error and move to a suitable position for autonomous grasping.

The automatic docking coordination controller 30 is controlled on the basis of the end-effector velocity error in Cartesian coordinates. First, the end-effector velocity V is described; the current end-effector velocity is computed as in Formula (4.4):

(Formula 4.4 is rendered as an image in the original; it expresses V in terms of the platform velocities and the arm joint velocities through the Jacobian.)

Here v x , v y , v z are the linear velocities of the end-effector, ω x , ω y , ω z are its angular velocities, x and y denote the projection of the end-effector on the X-Y plane, ẋ, ẏ, θ̇ are the velocities of the mobile platform 200, J(θ) is the Jacobian matrix, and q̇ is the vector of axis velocities of the robot arm.

Integrating the docking position of the mobile platform 200 with the target pose of the robot arm 100, the ideal arm velocity V r is computed as in Formula (4.5):

V r = [ (P target − P current ) / Δt ; log(R target · R current ^−1) / Δt ] (4.5)

where P current is the position of the mobile platform 200 and the robot arm 100, R current is the rotation of the robot arm 100, P target and R target are respectively the docking position of the mobile platform 200 together with the position of the robot arm 100's target grasping point, and its rotation, and Δt is the sampling time of the optimized automatic docking coordination controller 30. The ideal velocity is designed to reach the target position within a single sampling time, using the position difference P target − P current between the target and the robot arm 100, and the angular difference log(R target · R current ^−1), for velocity control. As the position and angular differences gradually shrink near the target position, the velocity gradually decreases, and the robot docks smoothly at the target position.

To meet the docking requirements of the mobile platform 200, the optimization algorithm in the optimized automatic docking coordination controller 30 is used to solve a minimization problem so that, during motion control of the robot arm 100, the mobile platform 200 can complete docking smoothly and carry out the grasp. The goal of the optimization is to make the value of the penalty-augmented cost function minimal, as in Formula (4.6):

minimize ‖V r − V‖² + P(q) + W(q̇, ẋ, ẏ, θ̇), subject to q min ≤ q ≤ q max and the velocity limits of the mobile platform 200 (4.6)

where V r and V are the ideal velocity and the joint velocity of the robot arm 100, P(q) is a logarithmic barrier function, and W(q̇, ẋ, ẏ, θ̇) is a first-order regularization function. Here q denotes the joint angles of the robot arm 100, constrained between q min and q max , while ẋ, ẏ, θ̇ are the linear and angular velocities of the mobile platform 200, constrained between their lower and upper limits. The first term, Minimize ‖V r − V‖², shrinks the velocity error between the arm velocity V and the ideal velocity V r : minimizing this error lets the ideal velocity V r reach the target end-effector pose within a single sampling time, and adjusts the end-effector velocity under the velocity and joint-angle limits, bringing the arm on the mobile base and the mobile platform 200 close to the ideal velocity. The quadratic error of the two is chosen as the cost function, which also accelerates convergence of the error. However, the robot 1 has many hardware limitations, such as the upper bound on the motion angle of each axis of the robot arm 100 and the hardware limits of the mobile platform 200, so penalty functions must be added: P(q) is a log barrier function that keeps the axis angles away from their limits, and W(q̇, ẋ, ẏ, θ̇) is a normalization function (normalization to first normal form) that balances the motion of the axes by assigning different weights to each axis and to the mobile platform 200.

FIG. 9 shows a method of autonomously moving to grasp an object according to the present invention. First, in step S1, the first camera 10 captures the external environment and transmits the images to the semantic navigation system 20. In step S2, the semantic navigation system 20, working with the first camera 10, autonomously navigates the robot 1 to the location of the target object through the mobile platform controller 202. In step S3, the automatic docking coordination controller 30 applies an optimization algorithm and, according to the difference between the current and target positions of the robot arm 100 and the mobile platform 200, performs redundancy-optimized velocity control, so that the mobile base and the robot arm are coordinated automatically and move simultaneously: the robot arm 100 moves to the grasping position above the target while the mobile platform 200 docks at the optimal position. In step S4, the mobile platform 200 uses the second camera 11 to photograph, from above, the stacked objects on the worktable where the target is located. In step S5, the robot arm 100 receives the object images acquired by the second camera 11. In step S6, the semantic segmentation module 1022 and the pose estimation module 1023 identify the object pose with their respective models, an ESPNetV2 model and a DenseFusion model. In step S7, the grasp-planning algorithm of the mobile grasping controller 104 computes a better grasping position for the object to raise the grasp success rate. In step S8, the mobile grasping controller 104 controls the robot arm 100 to grasp the object according to the algorithm.

In summary, the present invention enables a robot arm to complete grasping tasks autonomously in everyday environments. By combining semantic information, the robot can understand its surroundings; object pose estimation and grasp-planning design raise the grasp success rate; and the mobile grasping controller brings the robot arm pose and the mobile platform to the grasping position at the same time, completing the grasp efficiently.

1: Robot

10: First camera

11: Second camera

20: Semantic navigation system

30: Automatic docking coordination controller

100: Robot arm

200: Mobile platform

102: Object recognition and pose estimation system

104: Mobile grasping controller

202: Mobile platform controller

203: Drive system

1021: Image preprocessing module

1022: Semantic segmentation module

1023: Pose estimation module

Claims (10)

1. A robot for autonomously moving and grasping an object, comprising: a robot arm for grasping a target object; a mobile platform carrying the robot arm and moving it to the location of the target object to be grasped; a semantic navigation system, electrically connected to the mobile platform, for navigating the mobile platform to the location of the target object; a first camera, electrically connected to the semantic navigation system, for capturing the external environment while the semantic navigation system navigates; a second camera, electrically connected to the robot arm, for acquiring images of the environment relative to the robot arm; an object recognition and pose estimation system, electrically connected to the robot arm, which uses the second camera for semantic recognition, segmentation, and pose estimation to control the robot arm to grasp the target object, wherein the object recognition and pose estimation system obtains the centroid and contour segments of the target object to determine the position of the centroid in the image plane, generates a grasping index with that centroid position as reference, performs pose estimation with a DenseFusion model, and selects the most appropriate grasping position according to the grasping index; an automatic docking coordination controller, electrically connected to the robot arm and the mobile platform, which obtains the optimal moving-and-grasping path and pose of the robot arm and the mobile platform through the object recognition and pose estimation system and controls the robot arm and the mobile platform so that they move synchronously for grasping; a mobile grasping controller, electrically connected to the robot arm, which controls the movement of the robot arm through the object recognition and pose estimation system so that the robot arm grasps according to the grasping index; and a mobile platform controller, electrically connected to the mobile platform, for controlling the movement of the robot.

2. The robot of claim 1, wherein the object recognition and pose estimation system further comprises: an image preprocessing module for processing the frames captured by the first camera and the second camera and preprocessing the depth images; a semantic segmentation module, electrically connected to the preprocessing module, which performs semantic segmentation on the RGB color image of the preprocessed frames; and a pose estimation module, electrically connected to the semantic segmentation module, for estimating pose and computing the six-degree-of-freedom pose estimation result of an object after image preprocessing and semantic segmentation.

3. The robot of claim 1, wherein the first camera is a depth camera capable of providing high-quality synchronized color and depth video.

4. The robot of claim 1, wherein the second camera is an eye-in-hand depth camera.

5. The robot of claim 1, wherein the semantic navigation system pre-builds a semantic map through which the robot arm reaches the location of the target object.

6. The robot of claim 1, wherein the mobile grasping controller applies a robot-arm grasp-planning algorithm divided into three parts: centroid and contour-segment acquisition, grasping-index design, and grasp-pose conversion.

7. The robot of claim 6, wherein the grasp-pose conversion projects points of the object on the two-dimensional plane into space coordinates through a pinhole-model computation.

8. A method for autonomously moving and grasping an object, comprising the following steps: a first camera captures the external environment and transmits the images to a semantic navigation system; the semantic navigation system, working with the first camera, autonomously navigates a robot to the location of a target object through a mobile platform; an automatic docking coordination controller is provided which, through an algorithm, docks the mobile platform at the optimal position; a second camera photographs, from above, the stacked objects on a worktable where the target object is located; a robot arm receives the images of the target object acquired by the second camera; a semantic segmentation module and a pose estimation module respectively compute and identify the pose of the target object; a grasp-planning algorithm of a mobile grasping controller computes a better grasping position for the target object; and the mobile grasping controller controls the robot arm to grasp the object according to the algorithm.

9. The method of claim 8, wherein the automatic docking coordination controller performs redundancy-optimized velocity control according to the difference between the current and target positions of the robot arm and the mobile platform, so that the mobile base and the robot arm are coordinated automatically and move simultaneously to bring the robot arm to the grasping position above the target object.

10. The method of claim 8, wherein the models in the semantic segmentation module and the pose estimation module are an ESPNetV2 model and a DenseFusion model, respectively.
TW112124363A 2023-03-21 2023-06-29 Robot and method for autonomously moving and grabbing objects TWI851310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/384,377 US20240316767A1 (en) 2023-03-21 2023-10-26 Robot and method for autonomously moving and grasping objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW112110436 2023-03-21
TW112110436 2023-03-21

Publications (2)

Publication Number Publication Date
TWI851310B true TWI851310B (en) 2024-08-01
TW202438256A TW202438256A (en) 2024-10-01

Family

ID=93283798

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112124363A TWI851310B (en) 2023-03-21 2023-06-29 Robot and method for autonomously moving and grabbing objects

Country Status (1)

Country Link
TW (1) TWI851310B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111319044A (en) * 2020-03-04 2020-06-23 达闼科技(北京)有限公司 Object grasping method, device, readable storage medium and grasping robot
CN111360780A (en) * 2020-03-20 2020-07-03 北京工业大学 A Garbage Picking Robot Based on Visual Semantic SLAM
CN112074382A (en) * 2018-05-02 2020-12-11 X开发有限责任公司 Positioning robotic sensors for object classification

Also Published As

Publication number Publication date
TW202438256A (en) 2024-10-01

Similar Documents

Publication Publication Date Title
Lippiello et al. Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration
US11396101B2 (en) Operating system, control device, and computer program product
Vahrenkamp et al. Visual servoing for humanoid grasping and manipulation tasks
US20240316767A1 (en) Robot and method for autonomously moving and grasping objects
CN108942923A (en) A kind of mechanical arm crawl control method
US11945106B2 (en) Shared dense network with robot task-specific heads
US20220355495A1 (en) Robot Docking Station Identification Surface
US11766783B2 (en) Object association using machine learning models
US11769269B2 (en) Fusing multiple depth sensing modalities
Prats et al. Combining template tracking and laser peak detection for 3D reconstruction and grasping in underwater environments
US12304070B2 (en) Grasp teach by human demonstration
US11436869B1 (en) Engagement detection and attention estimation for human-robot interaction
JP7353948B2 (en) Robot system and robot system control method
CN116529760A (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
EP4095486A1 (en) Systems and methods for navigating a robot using semantic mapping
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
CN119458364A (en) A humanoid robot grasping method based on three-dimensional vision
Kragic et al. A framework for visual servoing
Kanellakis et al. Guidance for autonomous aerial manipulator using stereo vision
CN119660058B (en) Dynamic grasping and packing method and system for automobile stamping parts based on flexible manipulator
TWI851310B (en) Robot and method for autonomously moving and grabbing objects
Ren et al. Vision based object grasping of robotic manipulator
US11618167B2 (en) Pixelwise filterable depth maps for robots
Gratal et al. Virtual visual servoing for real-time robot pose estimation
US11818328B2 (en) Systems and methods for automatically calibrating multiscopic image capture systems