
CN117817671A - Robot system based on visual guidance and robot system calibration method - Google Patents


Info

Publication number
CN117817671A
CN117817671A
Authority
CN
China
Prior art keywords
pose
coordinate system
actuator
error
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410192128.6A
Other languages
Chinese (zh)
Other versions
CN117817671B (en)
Inventor
牛群
赵杰亮
李宏坤
樊钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Migration Technology Co ltd
Original Assignee
Beijing Migration Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Migration Technology Co ltd filed Critical Beijing Migration Technology Co ltd
Priority to CN202410192128.6A priority Critical patent/CN117817671B/en
Publication of CN117817671A publication Critical patent/CN117817671A/en
Application granted granted Critical
Publication of CN117817671B publication Critical patent/CN117817671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1692 Calibration of manipulator

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure provides a vision guidance-based robotic system, a robot system calibration method, an electronic device, a readable storage medium, and a program product. The vision guidance-based robotic system includes: a camera for acquiring the pose of a target object in a camera coordinate system; and a robot, wherein a tool is arranged at the end of an actuator, and the tool operates on the target object based on the pose of the actuator end in a robot base coordinate system, the pose of the target object, a first coordinate system relative relationship, a second coordinate system relative relationship, and a third coordinate system relative relationship. The first coordinate system relative relationship is the relative relationship between the camera coordinate system and the robot base coordinate system, the second coordinate system relative relationship is the relative relationship between the actuator end coordinate system and the robot base coordinate system, and the third coordinate system relative relationship is the relative relationship between the tool coordinate system and the actuator end coordinate system. The first and third coordinate system relative relationships are obtained by synchronous calibration based on target object poses and actuator end poses that have an association relationship.

Description

Robot system based on visual guidance and robot system calibration method
Technical Field
The present disclosure relates to the technical field of robotics, vision technologies, and the like, and in particular, to a vision guidance-based robotic system, a robot system calibration method, an electronic device, a readable storage medium, and a program product.
Background
In applications combining robotics and vision, for example where robots (e.g., multi-degree-of-freedom mechanical arms) are used to grip objects, multiple coordinate systems are involved.
A coordinate system relationship is represented as follows: a rotation matrix R represents the direction (orientation/attitude), and a translation vector t represents the position. The combination of rotation and translation is called a pose, i.e., the pairing of a position and an attitude.
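As a minimal illustration of this representation (not part of the patent; NumPy is my choice), a rotation R and a translation t can be packed into a single 4x4 homogeneous pose matrix:

```python
import numpy as np

def make_pose(R, t):
    """Pack a 3x3 rotation matrix R and a 3-vector t into a 4x4 homogeneous pose."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Identity attitude, position 1.0 along x:
T = make_pose(np.eye(3), [1.0, 0.0, 0.0])
```

Chaining such matrices by multiplication composes the corresponding coordinate system relationships.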
In the application of the combination of robotics and vision techniques, hand-eye calibration and tool calibration involve the establishment of relationships between different coordinate systems, ensuring that the robot is able to accurately sense and manipulate the surrounding environment or objects.
The hand-eye calibration is to correlate the robot base coordinate system with the camera coordinate system to ensure that the robot can understand the position and posture of the object observed in the camera field of view.
Tool calibration refers to associating the tool coordinate system on the robot end effector with the robot base coordinate system, to ensure that the robot can perform tasks accurately, especially when special tools (e.g., gripping tools, jigs, suction cups, etc.) are required.
These calibration processes typically require accurate data acquisition and computation to ensure accurate coordinate transformation relationships are obtained.
In the related art, a standard calibration plate is generally used as a reference, and a plurality of sets of calibration plate pose (in a camera coordinate system) and robot pose (in a robot base coordinate system) are obtained by moving the calibration plate or the camera, so as to calculate the hand-eye calibration transformation.
In the related art, the pose of the calibration plate is generally obtained through methods such as feature recognition, pose estimation or point cloud registration, and the pose of the robot can be obtained through a controller of the robot.
In the related art, TCP (tool center point) calibration usually requires a tip object fixed in the working space; by moving the robot so that the tool touches the tip object in a plurality of different poses, a list of robot poses with the same position but different attitudes is obtained, from which the position of the TCP is calculated. The attitude (orientation, i.e., rotation matrix) of the TCS is generally taken to be the same as that of the flange coordinate system, i.e., the robot actuator end coordinate system, to which the tool (e.g., gripping tool, gripper, suction cup, etc.) is typically mounted via a flange. There are also methods in the related art for calibrating the TCS using vision, laser sensors, etc.
The traditional approach performs hand-eye calibration and TCS calibration separately, so two calibrations are needed and the calibration process is complex. The hand-eye calibration error and the TCS calibration error are introduced independently by the two calibrations, with no correlation between them, so the accuracy of the final robot operation (gripping, puncturing, and other operations) is low.
Disclosure of Invention
The present disclosure provides a vision guidance-based robotic system, a robotic system calibration method, an electronic device, a readable storage medium, and a program product.
According to one aspect of the present disclosure, there is provided a vision guidance-based robot system, comprising:
the camera is used for acquiring the pose of the target object in a camera coordinate system;
a robot, wherein a tool is configured at an actuator end of the robot, and the tool operates the target object based on a pose of the actuator end in a robot base coordinate system, the pose of the target object, a first coordinate system relative relationship, a second coordinate system relative relationship, and a third coordinate system relative relationship;
the first coordinate system relative relationship is a relative relationship between a camera coordinate system and a robot base coordinate system, the second coordinate system relative relationship is a relative relationship between an actuator end coordinate system and the robot base coordinate system, and the third coordinate system relative relationship is a relative relationship between a tool coordinate system and the actuator end coordinate system;
The first coordinate system relative relation and the third coordinate system relative relation are coordinate system relative relation after synchronous calibration based on the pose of the target object and the pose of the tail end of the actuator with association relation.
A vision guidance-based robotic system according to at least one embodiment of the present disclosure, wherein the target object pose and the actuator end pose having the association relationship are obtained based on the following process:
the tail end of the actuator drives the tool and a target object fixed on the tool to act to a target position of a working space;
synchronously acquiring the target object pose and the actuator end pose at the target position of the working space, and taking the synchronously acquired target object pose and actuator end pose as the target object pose and the actuator end pose having the association relationship;
wherein the camera is fixedly arranged in the working space.
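The acquisition process above can be sketched as a loop; `move_to`, `read_object_pose`, and `read_flange_pose` are hypothetical callbacks standing in for the robot controller and the fixed camera, not APIs defined by the disclosure:

```python
def collect_pose_pairs(move_to, read_object_pose, read_flange_pose, targets):
    """Drive the actuator end (with the tool and the target object fixed on
    the tool) to each target position of the working space and record the
    synchronously acquired (actuator end pose, target object pose) pair."""
    pairs = []
    for target in targets:
        move_to(target)                   # actuator end drives tool + object to the target
        end_pose = read_flange_pose()     # actuator end pose in the robot base coordinate system
        obj_pose = read_object_pose()     # target object pose in the camera coordinate system
        pairs.append((end_pose, obj_pose))  # acquired together, hence "associated"
    return pairs
```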
A vision guidance-based robotic system according to at least one embodiment of the present disclosure, wherein the target object pose and the actuator end pose having the association relationship are obtained based on the following process:
the end of the actuator drives the tool and a target object fixed on the tool to act to a first target position of a working space, and the position and the pose of the end of the actuator at the first target position are obtained;
Holding the target object at the first target position, and moving the end of the actuator to a second target position different from the first target position;
and acquiring the actuator end pose at the second target position, acquiring the target object pose at the first target position by using a camera fixedly arranged on the actuator, and obtaining the target object pose and the actuator end pose having the association relationship based on the target object pose at the first target position, the actuator end pose at the first target position, and the actuator end pose at the second target position.
A vision-guided robotic system in accordance with at least one embodiment of the present disclosure, wherein the tool includes a gripper, a clamp, or a suction cup for performing a gripping, clamping, or suction operation on the target object.
The vision-guided robotic system according to at least one embodiment of the present disclosure, the actuator includes a robotic arm, and the actuator end is a robotic arm end.
The vision-guided robot system according to at least one embodiment of the present disclosure performs the synchronous calibration based on a plurality of sets of the target object pose and the actuator end pose having an association relationship.
A vision guidance-based robotic system in accordance with at least one embodiment of the present disclosure, the synchronous calibration comprising:
taking the difference between the first product and the second product as a relative error between the pose of the tool coordinate system under the robot base coordinate system and the pose of the target object under the robot base coordinate system;
the first product is a product of the multiplication of the pose of the end of the actuator and the first error, the second product is a product of the multiplication of the second error and the pose of the target object, the pose of the end of the actuator used for obtaining the first product and the pose of the target object used for obtaining the second product are the poses with the association relation, the pose of the end of the actuator is the pose in the robot base coordinate system, and the pose of the target object is the pose in the camera coordinate system;
the first error is an error between the initial relative relation of the third coordinate system and the real relative relation of the third coordinate system, and the second error is an error between the initial relative relation of the first coordinate system and the real relative relation of the first coordinate system;
based on the acquired multiple groups of the target object pose and the actuator end pose having the association relationship, determining the first error and the second error by taking the relative error as the optimization objective;
And calibrating the initial third coordinate system relative relation based on the determined first error, and calibrating the initial first coordinate system relative relation based on the determined second error.
According to at least one embodiment of the present disclosure, the synchronous calibration is performed based on a calibration model, which is:
C=AX-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, error X represents the error between the initial third coordinate system relative relation and the real third coordinate system relative relation, error Y represents the error between the initial first coordinate system relative relation and the real first coordinate system relative relation, and error C represents the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system.
A vision guidance-based robotic system in accordance with at least one embodiment of the present disclosure, wherein the synchronous calibration comprises: inputting the acquired multiple groups of the target object pose and the actuator end pose having the association relationship into the calibration model, and determining the error X and the error Y by taking the error C as the optimization objective; and calibrating the initial third coordinate system relative relationship based on the determined error X, and calibrating the initial first coordinate system relative relationship based on the determined error Y.
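As one possible reading of this optimization objective (a sketch; the use of the Frobenius norm and the summation over pairs are my assumptions, not specified by the disclosure), the error C can be scored over the associated pose pairs:

```python
import numpy as np

def calibration_objective(X, Y, pairs):
    """Total squared Frobenius norm of C_i = A_i @ X - Y @ B_i over the
    associated (A_i, B_i) pose pairs, per the model C = AX - YB."""
    return sum(np.linalg.norm(A @ X - Y @ B, ord='fro') ** 2 for A, B in pairs)
```

Minimizing such a quantity over X and Y yields the errors used to calibrate the initial relative relationships.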
A vision guidance-based robotic system in accordance with at least one embodiment of the present disclosure, wherein the actuator end pose is acquired based on a controller and/or a teach pendant of the robot.
According to another aspect of the present disclosure, there is provided a robot system calibration method, comprising:
acquiring a plurality of groups of target object pose and actuator tail end pose with association relation, wherein the target object pose is the pose of a target object in a camera coordinate system of a robot system, and the actuator tail end pose is the pose of an actuator tail end of the robot system in a robot base coordinate system;
based on a plurality of groups of the target object pose and the actuator end pose having the association relationship, taking the relative error between the pose of the tool coordinate system under the robot base coordinate system and the pose of the target object under the robot base coordinate system as the optimization objective, determining the error between the initial third coordinate system relative relationship and the real third coordinate system relative relationship and the error between the initial first coordinate system relative relationship and the real first coordinate system relative relationship, wherein the third coordinate system relative relationship is the relative relationship between the tool coordinate system and the actuator end coordinate system, and the first coordinate system relative relationship is the relative relationship between the camera coordinate system and the robot base coordinate system;
Performing first calibration on the initial third coordinate system relative relation based on the determined error between the initial third coordinate system relative relation and the real third coordinate system relative relation, and performing second calibration on the initial first coordinate system relative relation based on the determined error between the initial first coordinate system relative relation and the real first coordinate system relative relation, wherein the first calibration and the second calibration are synchronous calibration.
According to a robot system calibration method of at least one embodiment of the present disclosure, determining, based on a plurality of groups of the target object pose and the actuator end pose having the association relationship and with the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system as the optimization objective, the error between the initial third coordinate system relative relationship and the real third coordinate system relative relationship and the error between the initial first coordinate system relative relationship and the real first coordinate system relative relationship, includes:
and inputting the acquired multiple groups of target object pose with association relation and the end pose of the actuator into a calibration model to determine the error between the initial third coordinate system relative relation and the real third coordinate system relative relation and the error between the initial first coordinate system relative relation and the real first coordinate system relative relation.
According to a robot system calibration method of at least one embodiment of the present disclosure, the calibration model is: c=ax-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, error X represents the error between the initial third coordinate system relative relation and the real third coordinate system relative relation, error Y represents the error between the initial first coordinate system relative relation and the real first coordinate system relative relation, and error C represents the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system.
According to the robot system calibration method of at least one embodiment of the present disclosure, the target object pose and the actuator end pose having an association relationship are obtained based on the following processes:
the tail end of the actuator drives the tool and a target object fixed on the tool to act to a target position of a working space;
synchronously acquiring the target object pose and the actuator end pose at the target position of the working space, and taking the synchronously acquired target object pose and actuator end pose as the target object pose and the actuator end pose having the association relationship;
Wherein the camera is fixedly arranged in the working space.
According to the robot system calibration method of at least one embodiment of the present disclosure, the target object pose and the actuator end pose having an association relationship are obtained based on the following processes:
the end of the actuator drives the tool and a target object fixed on the tool to act to a first target position of a working space, and the position and the pose of the end of the actuator at the first target position are obtained;
holding the target object at the first target position, and moving the end of the actuator to a second target position different from the first target position;
and acquiring the actuator end pose at the second target position, acquiring the target object pose at the first target position by using a camera fixedly arranged on the actuator, and obtaining the target object pose and the actuator end pose having the association relationship based on the target object pose at the first target position, the actuator end pose at the first target position, and the actuator end pose at the second target position.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a memory storing execution instructions; and a processor executing the execution instructions stored by the memory, causing the processor to perform the robotic system calibration method of any one of the embodiments of the disclosure.
According to yet another aspect of the present disclosure, there is provided a readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the robotic system calibration method of any one of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the robot system calibration method of any of the embodiments of the present disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of a typical robotic grasping system.
Fig. 2 shows the relationship between the coordinate systems in a typical robotic grasping system.
Fig. 3 is a flow diagram of a robotic system calibration method of one embodiment of the present disclosure.
Fig. 4 is a flow diagram of acquiring a target object pose and an actuator end pose with an associated relationship in some embodiments of the present disclosure.
Fig. 5 is a flow chart of acquiring a target object pose and an actuator end pose with an associated relationship in other embodiments of the present disclosure.
Fig. 6 is a comparison between the method of the present disclosure and a reference method: comparison of hand-eye transformation and TCS position error.
Fig. 7 is a comparison result of an error C of the robot system calibration method of the present disclosure and the individual calibration method in the related art, which is illustrated using a violin chart.
FIG. 8 is a ROS simulation system diagram: a) is a system component, b) is a pattern of a gripped object, c) is a gripping state, and d) is a placing state.
Fig. 9 is a comparison of gripping errors between the robot system calibration method of the present disclosure and the method in the related art: a)-f) show the gripping deviation calibrated using the method in the related art, and g)-k) show the gripping deviation calibrated using the robot system calibration method of the present disclosure.
Fig. 10 is a block schematic diagram of the configuration of a robot system calibration device employing a hardware implementation of a processor according to one embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant content and not limiting of the present disclosure. It should be further noted that, for convenience of description, only a portion relevant to the present disclosure is shown in the drawings.
In addition, embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. The technical aspects of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present disclosure.
The use of cross-hatching and/or shading in the drawings is typically used to clarify the boundaries between adjacent components. As such, the presence or absence of cross-hatching or shading does not convey or represent any preference or requirement for a particular material, material property, dimension, proportion, commonality between illustrated components, and/or any other characteristic, attribute, property, etc. of a component, unless indicated. In addition, in the drawings, the size and relative sizes of elements may be exaggerated for clarity and/or descriptive purposes. While the exemplary embodiments may be variously implemented, the specific process sequences may be performed in a different order than that described. For example, two consecutively described processes may be performed substantially simultaneously or in reverse order from that described. Moreover, like reference numerals designate like parts.
When an element is referred to as being "on" or "over", "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element, there are no intervening elements present. For this reason, the term "connected" may refer to physical connections, electrical connections, and the like, with or without intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
Term interpretation:
target object coordinate system: object Coordinate System (OCS), the target object is the object that is gripped.
Tool coordinate system: tool Coordinate System (TCS).
Tool center point: tool Center Point (TCP), i.e. the position of the tool coordinate system.
Camera coordinate system: camera Coordinate System (CCS).
Robot base coordinate system: base Coordinate System (BCS).
Actuator end coordinate system: flange Coordinate System (FCS).
Fig. 1 is a schematic diagram of a typical robotic grasping system. Robotic grasping is a typical application that combines a vision system (3D camera) with a robot.
Referring to fig. 1, the robot gripping system includes a robot, a 3D camera, and a tool provided at the actuator end of the robot, and may perform a gripping operation on a target object (the gripped object) placed, for example, in a case. The 3D camera captures an environment point cloud (including the point cloud of the target object); the pose of the target object is then determined by a vision algorithm; the robot then moves the tool to the target pose and performs the gripping operation on the target object to complete the gripping task.
Since the reference coordinate systems of the 3D camera and the robot are different, it is necessary to perform hand-eye calibration before task execution, and a transformation relationship (relative relationship) between the Camera Coordinate System (CCS) and the robot Base Coordinate System (BCS) is established. After the hand-eye transformation relation is obtained, the pose of the target object in the BCS can be obtained through a 3D camera. In order to successfully perform the clamping task (grabbing task), it is also necessary to determine the relative relationship between the grabbing pose (TCS) and the target object pose (OCS), which is an arbitrarily specified fixed transformation, which may be an identity matrix.
Fig. 2 shows the relationship between the coordinate systems in a typical robotic grasping system.
Referring to fig. 2, the robot takes its BCS as a reference. The robot provides a transformation relationship from FCS to BCS. By establishing a relative relationship between the calibrated TCS and FCS, a transformation relationship between the TCS and BCS can be determined. The point cloud (including the target object) acquired by the 3D camera is referenced by CCS, and the point cloud referenced by BCS can be obtained through hand-eye transformation. Then, based on the relative relationship between the tool and the target object, the target pose of the tool to complete the gripping task, commonly referred to as the gripping point, may be obtained.
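The chain of relationships described above can be sketched as follows (NumPy; the function name is illustrative, and the ideal-case assumption that the target tool pose coincides with the object pose is made explicit):

```python
import numpy as np

def flange_target(B_T_C, C_T_O, F_T_T):
    """Flange (FCS) target pose in the BCS that places the tool (TCS) on the
    target object (OCS), assuming the ideal case where the target TCS
    coincides with the OCS."""
    B_T_O = B_T_C @ C_T_O                # hand-eye transform: object pose in the base frame
    return B_T_O @ np.linalg.inv(F_T_T)  # solve B_T_F from B_T_F @ F_T_T = B_T_O
```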
In Fig. 2, $^{B}T_{F}$ denotes the pose of the FCS relative to the BCS, $^{F}T_{T}$ denotes the pose of the TCS relative to the FCS, $^{B}T_{C}$ denotes the hand-eye transformation of the CCS relative to the BCS, and $^{C}T_{O}$ denotes the pose of the OCS relative to the CCS.
In robotic grasping systems, various errors may ultimately lead to grasping failure. These errors mainly include the robot absolute positioning error, the vision error, the hand-eye calibration error, and the TCS error. Among them, the robot absolute positioning error and the vision error are generally determined by the hardware devices and have a defined, typically small, range. The hand-eye calibration error and the TCS error, on the other hand, are determined by the calibration algorithms and directly affect the final gripping performance.
The actual target position of the tool of the robot is provided by the vision system. After applying the hand-eye transformation, the kinematic model of the robot and the TCS, the final position of the tool of the robot (i.e. the actual target position) can be determined.
Since the robot absolute positioning error and the vision error are typically determined by hardware devices and have a defined range, and are typically small, the robot absolute positioning error and the vision error are ignored in this disclosure, mainly focusing on the hand-eye calibration and TCS calibration.
After the vision system provides the pose of the target object, the tool is required to actually move to the exact position where the target object is located, as shown in fig. 2.
In the ideal gripping case, the target TCS coincides with the OCS, meaning that the pose of the tool in the BCS is equal to the pose of the target object in the BCS, as shown in equation (1):

$^{B}T_{F} \cdot {}^{F}T_{T} = {}^{B}T_{C} \cdot {}^{C}T_{O}$ (1)

where $T = [R, t] \in SE(3)$ represents a rigid transformation between two coordinate systems, $T$ comprising a rotation matrix $R \in SO(3)$ and a translation vector $t \in \mathbb{R}^{3}$.
However, due to errors, the robot inevitably cannot move to an exact target pose. The error may be expressed as the difference between the actual position (i.e., the position that the tool actually reaches) and the target position (i.e., the target position of the tool), as shown in equation (2):
^B T_F · ^F T'_T − ^B T'_C · ^C T_O = Error (2)
wherein ^F T'_T = ^F T_T + E_T and ^B T'_C = ^B T_C + E_C represent the initial TCS transformation (the transformation between the TCS and the FCS) and the initial hand-eye transformation, respectively. E_T is the error between the initial TCS transformation and the actual (to-be-solved) TCS transformation, and E_C is the error between the initial hand-eye transformation and the actual (to-be-solved) hand-eye transformation.
Substituting these expressions into equation (2) gives equation (3):
^B T_F · (^F T_T + E_T) − (^B T_C + E_C) · ^C T_O = Error (3)
Expanding the right side of equation (3) by block matrix multiplication yields equation (4):
^B T_F · ^F T_T + ^B T_F · E_T − ^B T_C · ^C T_O − E_C · ^C T_O = Error (4)
from equation (4) and equation (1), equation (5) can be derived:
Error = ^B T_F · E_T − E_C · ^C T_O (5)
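The algebra leading from equation (2) to equation (5) can be checked numerically. The sketch below (with assumed random transforms; the helper name is illustrative) constructs poses satisfying equation (1), perturbs the TCS and hand-eye transforms by E_T and E_C, and verifies that the residual of equation (2) equals the right-hand side of equation (5):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_rigid(rng):
    # Random rigid transform from ZYX Euler angles and a random translation.
    a, b, c = rng.uniform(-np.pi, np.pi, 3)
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = rng.uniform(-1, 1, 3)
    return T

T_bf, T_ft, T_bc = rand_rigid(rng), rand_rigid(rng), rand_rigid(rng)
T_co = np.linalg.inv(T_bc) @ T_bf @ T_ft        # chosen so that equation (1) holds

# Small perturbations with zero bottom rows (a difference of two SE(3) matrices
# has a zero bottom row, since both end in [0, 0, 0, 1]).
E_T = np.zeros((4, 4)); E_T[:3, :] = 0.01 * rng.normal(size=(3, 4))
E_C = np.zeros((4, 4)); E_C[:3, :] = 0.01 * rng.normal(size=(3, 4))

lhs = T_bf @ (T_ft + E_T) - (T_bc + E_C) @ T_co   # residual of equation (2)
rhs = T_bf @ E_T - E_C @ T_co                     # right-hand side of equation (5)
assert np.allclose(lhs, rhs)
```

The first two terms of the expansion cancel exactly by equation (1), which is why the identity holds for any perturbation magnitudes.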
As can be seen, the present disclosure constructs a calibration model of the form C = AX − YB, where X corresponds to E_T and Y corresponds to E_C.
The present disclosure preferably solves equation (5) using a least squares method (other methods for solving a system of linear equations may also be used) to obtain the error E_T between the initial TCS transformation and the actual (to-be-solved) TCS transformation, and the error E_C between the initial hand-eye transformation and the actual (to-be-solved) hand-eye transformation.
The following is an exemplary solution process:
expanding equation (5) to obtain equation (6):
removing the 0 term yields equation (7):
E_(12×1) = M_(12×24) · Z_(24×1) (7)
in formula (7), E is the stacked 12×1 error vector, M is the 12×24 coefficient matrix, and Z is the 24×1 vector of unknowns (the stacked elements of E_T and E_C).
Equation (7) is solved using the least squares method (LSM) to obtain E_T and E_C, which are taken together as the calibration result (i.e., the relative error described in this disclosure).
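A possible vectorization of equation (5) into the stacked linear system of equation (7) is sketched below. It assumes, as the derivation implies, that E_T and E_C have zero bottom rows, so only their top 3×4 blocks (12 + 12 = 24 unknowns) are solved for; the function names are illustrative:

```python
import numpy as np

def build_lsq_system(A_list, B_list, E_list):
    """Stack per-pose-pair blocks of E = M . Z for the C = AX - YB error model.

    A_list: flange poses ^B T_F (4x4); B_list: object poses ^C T_O (4x4);
    E_list: top 3x4 blocks of the residual Error for each pose pair.
    Z stacks the column-major vectors of the top 3x4 blocks of E_T and E_C.
    """
    I3, I4 = np.eye(3), np.eye(4)
    M_blocks, E_blocks = [], []
    for A, B, E in zip(A_list, B_list, E_list):
        R_A = A[:3, :3]
        # vec(R_A @ F_T) = (I4 kron R_A) vec(F_T);  vec(F_C @ B) = (B^T kron I3) vec(F_C)
        M_blocks.append(np.hstack([np.kron(I4, R_A), -np.kron(B.T, I3)]))
        E_blocks.append(E.flatten(order='F'))
    return np.vstack(M_blocks), np.concatenate(E_blocks)

def solve_relative_errors(A_list, B_list, E_list):
    M, E = build_lsq_system(A_list, B_list, E_list)
    Z, *_ = np.linalg.lstsq(M, E, rcond=None)
    F_T = Z[:12].reshape(3, 4, order='F')   # top 3x4 block of E_T
    F_C = Z[12:].reshape(3, 4, order='F')   # top 3x4 block of E_C
    return F_T, F_C
```

Each pose pair contributes 12 equations, so at least two pairs are needed in principle; in practice many more are collected, and the overdetermined system is solved in the least-squares sense.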
As can be seen, the present disclosure is able to obtain the combined relative error of E_T and E_C based on the calibration model constructed above.
In equation (5), ^B T_F is obtained from the controller (or teach pendant) of the robot, and ^C T_O is obtained from the scene point cloud and the target object model.
Thus, for the calibration model C = AX − YB constructed in the present disclosure, A represents the pose of the end of the robot's actuator in the BCS, B represents the pose of the target object in the CCS, X represents the TCS transformation to be solved (^F T_T), and Y represents the hand-eye transformation to be solved (^B T_C). C represents the relative error between the pose of the tool coordinate system (TCS) in the robot base coordinate system and the pose of the target object in the robot base coordinate system (BCS). The error C is taken as the actual grasping (clamping) error and directly determines the grasping (clamping) accuracy of the robot.
The calibration process of the present disclosure may be performed directly using components of the vision guidance-based robotic system itself without additional instrument assistance.
Based on the above description, the present disclosure may provide a vision guidance-based robot system, a robot system calibration method of the following embodiments.
In some embodiments of the present disclosure, a vision guidance-based robotic system of the present disclosure includes:
The camera is used for acquiring the pose of the target object in a camera coordinate system;
the robot, the end of the actuator of the robot is provided with a tool, and the tool operates the target object based on the pose of the end of the actuator in the robot base coordinate system, the pose of the target object, the relative relation of the first coordinate system, the relative relation of the second coordinate system and the relative relation of the third coordinate system;
the first coordinate system relative relationship is the relative relationship (i.e., the hand-eye transformation relationship) between the camera coordinate system (CCS) and the robot base coordinate system (BCS), the second coordinate system relative relationship is the relative relationship between the actuator end coordinate system (FCS) and the robot base coordinate system (BCS), and the third coordinate system relative relationship is the relative relationship (i.e., the TCS transformation relationship) between the tool coordinate system (TCS) and the actuator end coordinate system (FCS);
the first coordinate system relative relation and the third coordinate system relative relation are coordinate system relative relation after synchronous calibration based on the pose of the target object and the pose of the end of the actuator with an associated relation.
The camera of the robotic system of the present disclosure may be a 3D camera, and the robot of the robotic system of the present disclosure may be a robot having a robotic arm, please refer to fig. 1. The selection and adjustment of the type of the 3D camera and the type of the robot (e.g. the degree of freedom of the mechanical arm) all fall within the scope of the present disclosure.
Referring to fig. 1, the robot system of the present disclosure has an actuator, which may be a robot arm, which may be a multi-degree-of-freedom robot arm, and at an end of the actuator, a tool such as a gripper, a clamp, or a suction tool may be mounted, for example, by a flange, for performing a gripping operation, or a suction operation on a target object (e.g., a block in a case).
In the present disclosure, the pose of the target object in the camera coordinate system (^C T_O) can be obtained by methods such as feature recognition, pose estimation, or point cloud registration, and the pose of the end of the actuator (^B T_F) may be acquired from the controller and/or teach pendant of the robot.
When the robot system performs a gripping task with the tool on the actuator, the tool coordinate system TCS is generally made to coincide with a gripping pose on the gripped object (target object). The gripping pose is related to the geometry of the gripped object and can be expressed by a transformation matrix (rotation and translation); that is, the transformation matrix from the target object pose to the gripping pose is a fixed value and does not affect the final calibration result. The gripping pose may also simply be the pose of the target object, in which case the transformation from the target object pose to the gripping pose is the identity matrix.
According to the robot system, the TCS conversion and the hand-eye conversion of the robot system are synchronously calibrated based on the target object pose with the association relation and the end pose of the actuator, the relative error between the TCS conversion and the hand-eye conversion is used as an optimization object, the hand-eye coordination of the robot system with higher accuracy can be realized, and the accuracy of the robot system for executing tasks such as grabbing is improved.
In some embodiments of the present disclosure, the camera of the robotic system is disposed outside the robot (with the eye outside the hand), e.g., fixedly disposed in a workspace, which is a workspace for the robotic system to perform tasks such as gripping, which is an overlapping portion of the camera workspace and the robot work space.
The target object pose (^C T_O) and the actuator end pose (^B T_F) having an association relationship are obtained based on the following process:
the tail end of the actuator drives the tool and a target object fixed on the tool to act to a target position of a working space;
synchronously acquiring the pose of the target object (^C T_O) and the pose of the end of the actuator (^B T_F) at the target position of the working space as the target object pose and actuator end pose having an association relationship;
Wherein the camera is fixedly arranged in the working space (the eyes are outside the hands).
In other embodiments of the present disclosure, the above-described target object pose (^C T_O) and actuator end pose (^B T_F) having an association relationship are obtained based on the following process:
the end of the actuator drives the tool and the target object fixed on the tool to move to a first target position of the working space, and the actuator end pose at the first target position (^B T_F1) is obtained;
the target object is kept at the first target position, and the end of the actuator moves to a second target position different from the first target position;
the pose of the target object at the first target position (^C T_O1) is obtained using the camera fixedly arranged on the actuator, the actuator end pose at the second target position (^B T_F2) is obtained, and the target object pose (^C T_O) and the actuator end pose (^B T_F) having an association relationship are obtained based on the target object pose at the first target position (^C T_O1), the actuator end pose at the first target position (^B T_F1), and the actuator end pose at the second target position (^B T_F2);
wherein the camera is fixedly arranged on the actuator (the eye is on the hand). The first target position and the second target position may each be any position in the working space.
Specifically, (^B T_F1)^(−1) · ^B T_F2, together with the actuator end pose at the first target position (^B T_F1), is taken as the actuator end pose having the association relationship.
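The relative flange motion between the two stops can be computed directly; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def relative_flange_motion(T_bf1, T_bf2):
    """(^B T_F1)^(-1) . ^B T_F2: motion of the flange between the two stops,
    expressed in the frame of the first flange pose."""
    return np.linalg.inv(T_bf1) @ T_bf2
```

By construction, `T_bf1 @ relative_flange_motion(T_bf1, T_bf2)` recovers `T_bf2`, so the relative motion fully encodes the second stop given the first.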
For each of the above embodiments, it is preferable to perform the synchronous calibration based on the target object pose and the actuator end pose of plural sets (typically 6 to 20 sets) having an association relationship.
The synchronous calibration is then performed on the associated target object poses and actuator end poses acquired as described above.
In some embodiments of the present disclosure, the above-described synchronization calibration of the present disclosure includes:
taking the difference between a first product and a second product as the relative error (Error) between the pose of the tool coordinate system (TCS) in the robot base coordinate system and the pose of the target object in the robot base coordinate system (BCS);
wherein the first product is the product of the actuator end pose and a first error (E_T), and the second product is the product of a second error (E_C) and the target object pose; the actuator end pose in the first product is a pose in the robot base coordinate system, and the target object pose in the second product is a pose in the camera coordinate system;
wherein the first error (E_T) is the error between the initial third coordinate system relative relationship and the true third coordinate system relative relationship, and the second error (E_C) is the error between the initial first coordinate system relative relationship and the true first coordinate system relative relationship;
based on the acquired multiple groups of target object pose with association relation and the actuator tail end pose, determining a first error and a second error by taking the relative error as an optimization object;
the initial third coordinate system relative relationship is calibrated based on the determined first error, and the initial first coordinate system relative relationship is calibrated based on the determined second error.
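The bookkeeping above can be sketched as a small residual-and-update step. The sketch assumes, following the definitions ^F T'_T = ^F T_T + E_T and ^B T'_C = ^B T_C + E_C, that the calibrated (actual) transform is the initial guess minus the solved error; the function names are illustrative:

```python
import numpy as np

def residual_error(T_bf, T_co, T_ft0, T_bc0):
    """Relative error C for one pose pair, computed with the initial guesses
    T_ft0 (initial TCS transform) and T_bc0 (initial hand-eye transform)."""
    return T_bf @ T_ft0 - T_bc0 @ T_co

def apply_calibration(T_ft0, T_bc0, E_T, E_C):
    """Calibrated transforms: subtract the solved errors from the initial
    guesses, per T'(initial) = T(actual) + E."""
    return T_ft0 - E_T, T_bc0 - E_C
```

In an ideal noiseless setup, the residual computed with the true transforms vanishes, which is exactly the optimization target of the synchronous calibration.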
The initial third coordinate system relative relationship is the third coordinate system relative relationship before the synchronization calibration of the present disclosure, and the initial first coordinate system relative relationship is the first coordinate system relative relationship before the synchronization calibration of the present disclosure.
In some embodiments of the present disclosure, the synchronous calibration described above of the present disclosure is performed based on a calibration model, which is:
C=AX-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, the error X represents the error between the initial third coordinate system relative relationship and the true third coordinate system relative relationship, the error Y represents the error between the initial first coordinate system relative relationship and the true first coordinate system relative relationship, and the error C represents the relative error between the pose of the tool coordinate system (TCS) in the robot base coordinate system and the pose of the target object in the robot base coordinate system (BCS).
Further, the synchronous calibration includes:
inputting the acquired multiple groups of target object poses with association relations and the end poses of the executor into a calibration model, and determining an error X and an error Y by taking the error C as an optimization object;
the initial third coordinate system relative relationship is calibrated based on the determined error X, and the initial first coordinate system relative relationship is calibrated based on the determined error Y.
The present disclosure also provides a robot system calibration method for calibrating a robot system including a camera and a robot.
Referring to fig. 3, in some embodiments of the present disclosure, a robot system calibration method M100 of the present disclosure includes:
s100, acquiring a plurality of groups of target object pose and actuator end pose with association relation, wherein the target object pose is the pose of a target object in a Camera Coordinate System (CCS) of a robot system, and the actuator end pose is the pose of an actuator end of the robot system in a robot Base Coordinate System (BCS).
S200, determining an error between an initial third coordinate system relative relation and a real third coordinate system relative relation and an error between an initial first coordinate system relative relation and a real first coordinate system relative relation based on a plurality of groups of target object pose with an association relation and an actuator end pose, taking a relative error between a pose of a Tool Coordinate System (TCS) under a robot base coordinate system and a pose of a target object under a robot Base Coordinate System (BCS) as an optimization object, wherein the third coordinate system relative relation is a relative relation between the Tool Coordinate System (TCS) and the actuator end coordinate system (FCS), and the first coordinate system relative relation is a relative relation (namely a hand-eye transformation relation) between a Camera Coordinate System (CCS) and the robot Base Coordinate System (BCS).
S300, performing first calibration on the initial third coordinate system relative relation based on the determined error between the initial third coordinate system relative relation and the real third coordinate system relative relation, and performing second calibration on the initial first coordinate system relative relation based on the determined error between the initial first coordinate system relative relation and the real first coordinate system relative relation, wherein the first calibration and the second calibration are synchronous calibration.
In the robot system calibration method M100 of some embodiments of the present disclosure, determining an error between an initial third coordinate system relative relationship and a true third coordinate system relative relationship and an error between the initial first coordinate system relative relationship and the true first coordinate system relative relationship based on a plurality of sets of target object pose and actuator end pose having an association relationship, with a relative error between a pose of a Tool Coordinate System (TCS) in a robot base coordinate system and a pose of a target object in a robot Base Coordinate System (BCS) as an optimization object, includes:
and inputting the acquired multiple groups of target object pose with association relation and the end pose of the actuator into a calibration model to determine the error between the initial third coordinate system relative relation and the real third coordinate system relative relation and the error between the initial first coordinate system relative relation and the real first coordinate system relative relation.
In some embodiments of the present disclosure, in the robot system calibration method M100 of the present disclosure, the calibration model is configured to:
C=AX-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, the error X represents the error between the initial third coordinate system relative relationship and the true third coordinate system relative relationship, the error Y represents the error between the initial first coordinate system relative relationship and the true first coordinate system relative relationship, and the error C represents the relative error between the pose of the tool coordinate system (TCS) in the robot base coordinate system and the pose of the target object in the robot base coordinate system (BCS).
In some embodiments of the present disclosure, with respect to a robot system in which a camera is fixedly disposed in a working space other than a robot, referring to fig. 4, in a robot system calibration method M100 of the present disclosure, S110, a target object pose having an association relationship, and an actuator end pose are obtained based on the following processes:
s111, the tail end of the actuator drives the tool and the target object fixed on the tool to move to the target position of the working space.
S112, synchronously acquiring the pose of the target object and the pose of the end of the actuator at the target position of the working space, and taking the pose of the target object and the pose of the end of the actuator as the pose of the target object and the pose of the end of the actuator with association relation.
In other embodiments of the present disclosure, with respect to a robot system in which a camera is fixedly disposed on an actuator of a robot, referring to fig. 5, in a robot system calibration method M100 of the present disclosure, S110, a target object pose having an association relationship, and an actuator end pose are obtained based on the following processes:
s111, the tail end of the actuator drives the tool and a target object fixed on the tool to act to a first target position of a working space, and the tail end pose of the actuator at the first target position is obtained.
And S112, keeping the target object at a first target position, and enabling the tail end of the actuator to move to a second target position different from the first target position.
S113, acquiring the end pose of the actuator of the second target position, and acquiring the pose of the target object of the first target position by using a camera fixedly arranged on the actuator.
S114, acquiring the target object pose and the actuator end pose with the association relation based on the target object pose of the first target position, the actuator end pose of the first target position and the actuator end pose of the second target position.
The present disclosure also provides for a comprehensive assessment of the robotic system calibration method of the present disclosure through a data generation and simulation environment. The performance of the robotic system calibration method of the present disclosure was compared to the separate hand-eye calibration and TCP calibration methods of the related art.
The comparison method for hand-eye calibration is the dual quaternion method proposed by Daniilidis. The comparison method for tool calibration is the four-point calibration method. The comparison includes an assessment of the error between the calibrated hand-eye transformation, the tool TCP, and their true values. In addition, the actual error C is compared; the error C directly determines the clamping accuracy and affects the clamping success rate. The compared parameters mainly involve the differences of the transformation matrices, comparing the translation and rotation components separately. For the translation part, the Euclidean distance is used:
D = ||t_gt − t_Cali||_2 (8)
The rotation part is expressed in axis-angle form, and the difference in angles is compared:
θ = arccos((tr(R_gt^T · R_Cali) − 1) / 2) (9)
The true values of the hand-eye transformation and the TCS are compared with the calibrated values, and the C = AX − YB values are also compared.
To simulate a real scene, Gaussian errors of three different magnitudes are introduced into the synthetic data: the mean μ is set to 0, and the standard deviation σ is set to 0.01, 0.1, and 0.5, measured in millimeters or degrees. The Gaussian noise is added to the Euler angles and the pose vectors. The true values of the hand-eye transformation and the TCS are first provided:
wherein:
φ(α,β,γ,X,Y,Z)=T(R Euler (α,β,γ),t=[X,Y,Z])
Next, the pose of the robot end effector ^B T_F is randomly generated within a specified range.
Subsequently, the actual TCS pose ^F T_T is used to generate the true pose of the TCS in the BCS: ^B T_T = ^B T_F · ^F T_T.
To facilitate the test, the pose of the TCS and the target pose (the pose of the target object) are set to be coincident. In practice, there may be a fixed transformation matrix between them, which does not affect the calibration. In the CCS, the true hand-eye transformation is used to obtain the true target pose: ^C T_T = (^B T_C)^(−1) · ^B T_T.
by multiple random generation B T F And C T T multiple sets of experimental data were obtained. And calculates the error of the target pose:
The variables ^F T_T and ^B T_C may be initialized with the identity matrix. For sample counts of 6, 8, 10, and 12, the performance of the robot system calibration method of the present disclosure at each noise level was compared with that of the related-art methods that calibrate the hand-eye matrix and the tool TCP separately.
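The noise injection described above, zero-mean Gaussian perturbations on the Euler angles and the translation vector, can be sketched as follows (assuming ZYX Euler composition; the function names are illustrative):

```python
import numpy as np

def euler_to_T(angles_deg, t):
    """Build a 4x4 homogeneous transform from ZYX Euler angles (deg) and a translation."""
    ax, ay, az = np.radians(angles_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

def noisy_pose(angles_deg, t, sigma, rng):
    """Perturb Euler angles (deg) and translation (mm) with N(0, sigma) noise."""
    return euler_to_T(np.asarray(angles_deg) + rng.normal(0.0, sigma, 3),
                      np.asarray(t) + rng.normal(0.0, sigma, 3))
```

Because the noise is applied to the Euler angles before the rotation matrix is rebuilt, the perturbed pose remains a valid rigid transform, which matches the experimental setup described in the text.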
(1) ^B T_C and TCP comparison
The comparison of the ^B T_C error and the TCP error is shown in fig. 6.
Fig. 6 compares the hand-eye transformation and TCS position errors between the method of the present disclosure and the reference methods. Blue represents the results of the method of the present disclosure; orange corresponds to the results of the comparison methods. Fig. 6 a)-c) compare the rotation error in hand-eye calibration, the translation error in hand-eye calibration, and the translation error in the TCS, with the standard deviation of the Gaussian noise set to 0.01. Fig. 6 d)-f) show the same comparisons with the standard deviation set to 0.1, and fig. 6 g)-i) with the standard deviation set to 0.5.
As can be seen from fig. 6, as the standard deviation of the gaussian error increases, and the number of calibration points increases, the calibration error of the robot system calibration method of the present disclosure and the individual calibration method in the related art increases. The robot system calibration method disclosed by the invention is excellent in the calibration result of the hand-eye matrix, and is superior to the comparison method in all comparison experiments. For the calibration results of TCP, the accuracy is lower than that of the comparison method when the number of calibration points is 6 and 8. However, when the number of calibration points is increased to 10 and 12, the result of the robot calibration method of the present disclosure exceeds the comparative method.
(2) C = AX − YB comparison
In fig. 7, error C of the robot system calibration method of the present disclosure and the individual calibration method in the related art is illustrated using a violin diagram.
Blue represents the results of the method of the present disclosure; orange corresponds to the results of the comparison methods. Fig. 7 a)-b) compare the rotation and translation components of the error C with the standard deviation of the Gaussian noise set to 0.01; fig. 7 c)-d) with the standard deviation set to 0.1; and fig. 7 e)-f) with the standard deviation set to 0.5.
The error C increases with increasing standard deviation of gaussian noise and decreases with increasing calibration points, for both methods. In all comparative experiments, the robotic system calibration method of the present disclosure consistently exhibited less error than the calibration method alone. As the number of calibration points increases, the error C decreases, indicating lower operating errors and higher overall accuracy of the robotic system.
(3) ROS simulation
The ROS simulation environment comprises a robot, a tool, an object to be grabbed and a 3D sensor, as shown in a) in fig. 8. The surface of the gripped object is designed with a pattern that allows measurement of the gripping accuracy, as shown in b) in fig. 8. The object to be grasped has a side length of 50mm and is divided into 100 parts, each of which has a resolution of 0.5mm. The robot system is first calibrated using the robot system calibration method and a comparison method of the present disclosure to obtain TCS and hand-eye transformations. The grabbing and placing tasks are then performed as shown in fig. 8 c) and d). An image of the gripper (tool) on the surface after gripping is recorded to evaluate the accuracy of the gripping operation.
To more closely simulate a real environment, a Gaussian error with a standard deviation of 0.5 is added to the pose of the robot end, and the 3D sensor (3D camera), implemented by a Gazebo plugin, also adds a Gaussian error with a standard deviation of 0.5. The pose estimation algorithm used in the calibration method uses the find_surface_model operator in HALCON. After the calibration data are obtained, multiple grasping experiments are performed, taking 5 as an example. As shown in fig. 9, image information of the gripper (tool) and the grasped object is collected after grasping, the upper left corner of the grasped object is taken as the coordinate origin, and the distance from the vertex closest to the gripper is taken as the measurement value. Fig. 9 a) is the ideal gripping position, fig. 9 b)-f) are the gripping results of the comparison method, and fig. 9 g)-k) are the gripping results of the robot system calibrated by the method of the present disclosure. It can be seen that the gripping results calibrated by the method of the present disclosure deviate less from the ideal gripping position, giving higher gripping accuracy. The detailed gripping errors are shown in table 1. The maximum gripping Euclidean distance error of the comparison method is 29.5 mm, the minimum error is 6 mm, and the average error is 11.8 mm. The maximum error of the method of the present disclosure is 4.1 mm, the minimum error is 1.0 mm, and the average error is 2.2 mm.
Table 1:
In summary, the robot system calibration method of the present disclosure requires no additional instrument assistance. The calibration process includes mounting a model (target object) on the tool, placing the target object in the field of view of the 3D camera, reading ^B T_F from the controller of the robot, acquiring a scene point cloud (including the target object), registering the model to the scene to acquire ^C T_O, and repeating this process for multiple sets of data. The error is then calculated using the calibration model of the present disclosure (C = AX − YB), ultimately solving for the hand-eye transformation and the TCS. Compared with the AX = XB model in the related art, the calibration method of the present disclosure not only determines the hand-eye transformation but also calculates the pose of the TCS. Unlike conventional TCS calibration methods, the calibration method of the present disclosure calibrates not only the position of the TCS but also its orientation.
In the field of systems integrating vision and robotics, the accuracy of hand-eye transformation and TCS position significantly affects the accuracy of the system. The robot system calibration method and the robot system provided by the disclosure eliminate the need for special instruments. The calibration process includes integrating the tool with the target object in an operational state, repositioning to the vision system, capturing images to obtain the pose of the target object, and then performing calculations with the current robot pose. The model of the calibration method of the present disclosure is briefly denoted as c=ax-YB, with the advantage of simultaneously calculating the hand-eye transformation and TCS. By taking the target error as an optimization target, the overall hand-eye coordination capability of the system is enhanced.
The results of the above comparative experiments show that the calibration method of the present disclosure is excellent in terms of absolute errors of the eyes and hands and TCP as the number of calibration points increases. The results of the present disclosure are superior to the calibration methods in the related art in terms of target attitude error. In a simulated environment, the grip deviation of the robot system after calibration by the calibration method of the present disclosure is about 1/5 of that of the method in the related art. The calibration method disclosed by the invention is an effective calibration method for the operation of the vision guiding robot, and simultaneously, the gestures of the camera and the tool are calibrated, so that the grabbing precision is improved.
Correspondingly, the disclosure also provides a robot system calibration device 1000.
In some embodiments of the present disclosure, the robotic system calibration apparatus 1000 of the present disclosure includes:
an associated pose acquisition module 1002, which acquires a plurality of groups of target object poses and actuator end poses having an association relationship, the target object pose being the pose of the target object in the Camera Coordinate System (CCS) of the robot system, and the actuator end pose being the pose of the actuator end of the robot system in the robot Base Coordinate System (BCS);
a calibration model module 1004, which determines, based on the plurality of groups of target object poses and actuator end poses having the association relationship, an error between an initial third coordinate system relative relationship and a real third coordinate system relative relationship, and an error between an initial first coordinate system relative relationship and a real first coordinate system relative relationship, wherein the third coordinate system relative relationship is the relative relationship between the Tool Coordinate System (TCS) and the actuator end coordinate system (FCS), and the first coordinate system relative relationship is the relative relationship (i.e., the hand-eye transformation) between the Camera Coordinate System (CCS) and the robot Base Coordinate System (BCS); and
a calibration execution module 1006, which calibrates the initial third coordinate system relative relationship based on the determined error between the initial third coordinate system relative relationship and the real third coordinate system relative relationship, and calibrates the initial first coordinate system relative relationship based on the determined error between the initial first coordinate system relative relationship and the real first coordinate system relative relationship.
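By way of illustration only, and not as the authoritative implementation of the disclosure, the joint estimation performed by the calibration model module and applied by the calibration execution module can be sketched as a least-squares problem over the two error transforms, using the relation C = AX − YB that the disclosure optimizes. The poses are assumed to be 4×4 homogeneous matrices, the initial errors are assumed small, and all function names (`to_matrix`, `calibrate`) and the rotation-vector parameterization are the editor's assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def to_matrix(p):
    """[rx, ry, rz, tx, ty, tz] -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T


def residuals(params, A_list, B_list):
    X = to_matrix(params[:6])   # error on the tool / actuator-end (third) relation
    Y = to_matrix(params[6:])   # error on the camera / base (first) relation
    # Stack the residual C = A.X - Y.B over every associated pose pair
    return np.concatenate([(A @ X - Y @ B).ravel()
                           for A, B in zip(A_list, B_list)])


def calibrate(A_list, B_list):
    """Jointly estimate the two error transforms X and Y from pose pairs.

    A_list: actuator end poses in the robot base coordinate system.
    B_list: target object poses in the camera coordinate system.
    """
    sol = least_squares(residuals, np.zeros(12), args=(A_list, B_list))
    return to_matrix(sol.x[:6]), to_matrix(sol.x[6:])
```

Minimizing C jointly over X and Y is what distinguishes this synchronous calibration from calibrating the hand-eye relation and the tool relation separately.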
Fig. 10 is a schematic block diagram of the configuration of a robot system calibration apparatus employing a hardware implementation with a processor according to one embodiment of the present disclosure.
The robotic system calibration apparatus 1000 may include corresponding modules that perform each or several of the steps in the flowcharts described above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware structure of the robot system calibration device 1000 of the present disclosure may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one connection line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present disclosure also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via the memory and/or a communication interface. When the software program is loaded into the memory and executed by the processor, one or more of the steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program can be printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, the steps or methods may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The present disclosure also provides an electronic device, including: a memory storing execution instructions; and a processor or other hardware module that executes the memory-stored execution instructions, such that the processor or other hardware module performs the robotic system calibration method of any of the embodiments of the disclosure.
The present disclosure also provides a readable storage medium having stored therein execution instructions that when executed by a processor are to implement the robotic system calibration method of any one of the embodiments of the present disclosure.
The present disclosure also provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the robot system calibration method of any of the embodiments of the present disclosure.
It will be appreciated by those skilled in the art that the above-described embodiments are merely for clarity of illustration of the disclosure, and are not intended to limit the scope of the disclosure. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A vision guidance-based robotic system, comprising:
the camera is used for acquiring the pose of the target object in a camera coordinate system; and
a robot, wherein a tool is configured at an actuator end of the robot, and the tool operates the target object based on a pose of the actuator end in a robot base coordinate system, the pose of the target object, a first coordinate system relative relationship, a second coordinate system relative relationship, and a third coordinate system relative relationship;
the first coordinate system relative relationship is a relative relationship between a camera coordinate system and a robot base coordinate system, the second coordinate system relative relationship is a relative relationship between an actuator end coordinate system and the robot base coordinate system, and the third coordinate system relative relationship is a relative relationship between a tool coordinate system and the actuator end coordinate system;
the first coordinate system relative relation and the third coordinate system relative relation are coordinate system relative relation after synchronous calibration based on the pose of the target object and the pose of the tail end of the actuator with association relation.
2. The vision-guided robot system of claim 1, wherein the target object pose and the actuator end pose with an associated relationship are obtained based on the following process:
the actuator end drives the tool and a target object fixed on the tool to move to a target position in a working space; and
synchronously acquiring, at the target position in the working space, the pose of the target object and the pose of the actuator end, and taking the synchronously acquired poses as the target object pose and the actuator end pose having the association relationship;
wherein the camera is fixedly arranged in the working space.
3. The vision-guided robot system of claim 1, wherein the target object pose and the actuator end pose with an associated relationship are obtained based on the following process:
the actuator end drives the tool and a target object fixed on the tool to move to a first target position in a working space, and the actuator end pose at the first target position is acquired;
the target object is held at the first target position, and the actuator end is moved to a second target position different from the first target position; and
the actuator end pose at the second target position is acquired, the target object pose at the first target position is acquired by using a camera fixedly arranged on the actuator, and the target object pose and the actuator end pose having the association relationship are obtained based on the target object pose at the first target position, the actuator end pose at the first target position, and the actuator end pose at the second target position;
optionally, the tool comprises a gripping tool, a clamp, or a suction tool, and the tool is used for performing a gripping operation, a clamping operation, or a suction operation on the target object;
optionally, the actuator includes a mechanical arm, and the end of the actuator is the end of the mechanical arm.
4. A vision-guided robot system as claimed in any one of claims 1 to 3, characterized in that the synchronous calibration is performed based on a plurality of sets of the correlated target object pose and actuator end pose.
5. The vision-guided robotic system of claim 4, wherein the synchronous calibration comprises:
taking the difference between the first product and the second product as a relative error between the pose of the tool coordinate system under the robot base coordinate system and the pose of the target object under the robot base coordinate system;
the first product is the product of the actuator end pose multiplied by the first error, and the second product is the product of the second error multiplied by the target object pose; the actuator end pose used for obtaining the first product and the target object pose used for obtaining the second product are poses having the association relationship, the actuator end pose being a pose in the robot base coordinate system and the target object pose being a pose in the camera coordinate system;
The first error is an error between the initial relative relation of the third coordinate system and the real relative relation of the third coordinate system, and the second error is an error between the initial relative relation of the first coordinate system and the real relative relation of the first coordinate system;
based on the acquired multiple groups of target object pose with association relation and actuator tail end pose, determining a first error and a second error by taking the relative error as an optimization object; and
calibrating the initial third coordinate system relative relationship based on the determined first error, and calibrating the initial first coordinate system relative relationship based on the determined second error;
optionally, the synchronous calibration is performed based on a calibration model, the calibration model being:
C=AX-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, error X represents the error between the initial third coordinate system relative relation and the real third coordinate system relative relation, error Y represents the error between the initial first coordinate system relative relation and the real first coordinate system relative relation, and error C represents the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system;
Optionally, the synchronous calibration includes:
inputting the acquired multiple groups of target object pose with association relation and the end pose of the actuator into the calibration model, and determining an error X and an error Y by taking the error C as an optimization object; and
calibrating the initial third coordinate system relative relationship based on the determined error X, and calibrating the initial first coordinate system relative relationship based on the determined error Y;
optionally, the actuator end pose is acquired based on a controller and/or a teach pendant of the robot.
6. A method of calibrating a robotic system, comprising:
acquiring a plurality of groups of target object poses and actuator end poses having an association relationship, wherein the target object pose is the pose of a target object in a camera coordinate system of the robot system, and the actuator end pose is the pose of an actuator end of the robot system in a robot base coordinate system;
based on a plurality of groups of target object pose with association relation and actuator end pose, taking the relative error between the pose of a tool coordinate system under a robot base coordinate system and the pose of a target object under the robot base coordinate system as an optimization object, determining the error between an initial third coordinate system relative relation and a real third coordinate system relative relation and the error between an initial first coordinate system relative relation and a real first coordinate system relative relation, wherein the third coordinate system relative relation is the relative relation between the tool coordinate system and the actuator end coordinate system, and the first coordinate system relative relation is the relative relation between a camera coordinate system and the robot base coordinate system; and
Performing first calibration on the initial third coordinate system relative relation based on the determined error between the initial third coordinate system relative relation and the real third coordinate system relative relation, and performing second calibration on the initial first coordinate system relative relation based on the determined error between the initial first coordinate system relative relation and the real first coordinate system relative relation, wherein the first calibration and the second calibration are synchronous calibration.
7. The method according to claim 6, wherein determining the error between the initial third coordinate system relative relationship and the true third coordinate system relative relationship and the error between the initial first coordinate system relative relationship and the true first coordinate system relative relationship based on the sets of the target object pose and the actuator end pose having the association relationship, with the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system as the optimization object, comprises:
inputting the acquired multiple groups of target object pose with association relation and the end pose of the actuator into a calibration model to determine the error between the initial third coordinate system relative relation and the real third coordinate system relative relation and the error between the initial first coordinate system relative relation and the real first coordinate system relative relation;
Optionally, the calibration model is:
C=AX-YB;
wherein A represents the pose of the end of the actuator of the robot in the robot base coordinate system, B represents the pose of the target object in the camera coordinate system, error X represents the error between the initial third coordinate system relative relation and the real third coordinate system relative relation, error Y represents the error between the initial first coordinate system relative relation and the real first coordinate system relative relation, and error C represents the relative error between the pose of the tool coordinate system in the robot base coordinate system and the pose of the target object in the robot base coordinate system;
optionally, the target object pose and the actuator end pose with the association relationship are obtained based on the following processes:
the actuator end drives the tool and a target object fixed on the tool to move to a target position in a working space; and
synchronously acquiring, at the target position in the working space, the pose of the target object and the pose of the actuator end, and taking the synchronously acquired poses as the target object pose and the actuator end pose having the association relationship;
wherein the camera is fixedly arranged in the working space;
optionally, the target object pose and the actuator end pose with the association relationship are obtained based on the following processes:
the actuator end drives the tool and a target object fixed on the tool to move to a first target position in a working space, and the actuator end pose at the first target position is acquired;
the target object is held at the first target position, and the actuator end is moved to a second target position different from the first target position; and
the actuator end pose at the second target position is acquired, the target object pose at the first target position is acquired by using a camera fixedly arranged on the actuator, and the target object pose and the actuator end pose having the association relationship are obtained based on the target object pose at the first target position, the actuator end pose at the first target position, and the actuator end pose at the second target position.
8. An electronic device, comprising:
a memory storing execution instructions; and
a processor executing the execution instructions stored in the memory, causing the processor to perform the robotic system calibration method of claim 6 or 7.
9. A readable storage medium, wherein executable instructions are stored in the readable storage medium, which when executed by a processor are adapted to implement the robot system calibration method of claim 6 or 7.
10. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the robot system calibration method of claim 6 or 7.
CN202410192128.6A 2024-02-21 2024-02-21 Robot system based on visual guidance and robot system calibration method Active CN117817671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410192128.6A CN117817671B (en) 2024-02-21 2024-02-21 Robot system based on visual guidance and robot system calibration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410192128.6A CN117817671B (en) 2024-02-21 2024-02-21 Robot system based on visual guidance and robot system calibration method

Publications (2)

Publication Number Publication Date
CN117817671A true CN117817671A (en) 2024-04-05
CN117817671B CN117817671B (en) 2024-06-21

Family

ID=90521164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410192128.6A Active CN117817671B (en) 2024-02-21 2024-02-21 Robot system based on visual guidance and robot system calibration method

Country Status (1)

Country Link
CN (1) CN117817671B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130010081A1 (en) * 2011-07-08 2013-01-10 Tenney John A Calibration and transformation of a camera system's coordinate system
CN113744341A (en) * 2021-07-21 2021-12-03 北京旷视科技有限公司 Camera pose calibration method and device for robot system and electronic equipment
CN114012731A (en) * 2021-11-23 2022-02-08 深圳市如本科技有限公司 Hand-eye calibration method and device, computer equipment and storage medium
CN114474058A (en) * 2022-02-11 2022-05-13 中国科学院自动化研究所 Vision-guided calibration method for industrial robot systems
CN115042184A (en) * 2022-07-06 2022-09-13 深圳市易尚展示股份有限公司 Robot hand-eye coordinate conversion method, device, computer equipment and storage medium
CN115847423A (en) * 2022-12-30 2023-03-28 合肥工业大学 Calibration method for eye-seeing vision system of industrial robot
CN116494253A (en) * 2023-06-27 2023-07-28 北京迁移科技有限公司 Target object grabbing pose acquisition method and robot grabbing system
CN116922374A (en) * 2023-05-04 2023-10-24 北京思灵机器人科技有限责任公司 Binocular vision calibration method, calibration device, robot and storage medium
US20230339112A1 (en) * 2023-03-17 2023-10-26 University Of Electronic Science And Technology Of China Method for robot assisted multi-view 3d scanning measurement based on path planning
CN117021087A (en) * 2023-08-11 2023-11-10 华中科技大学 A robot kinematic parameter calibration method based on visual multi-point pose constraints

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130010081A1 (en) * 2011-07-08 2013-01-10 Tenney John A Calibration and transformation of a camera system's coordinate system
CN103702607A (en) * 2011-07-08 2014-04-02 修复型机器人公司 Calibration and transformation of a camera system's coordinate system
CN113744341A (en) * 2021-07-21 2021-12-03 北京旷视科技有限公司 Camera pose calibration method and device for robot system and electronic equipment
CN114012731A (en) * 2021-11-23 2022-02-08 深圳市如本科技有限公司 Hand-eye calibration method and device, computer equipment and storage medium
CN114474058A (en) * 2022-02-11 2022-05-13 中国科学院自动化研究所 Vision-guided calibration method for industrial robot systems
CN115042184A (en) * 2022-07-06 2022-09-13 深圳市易尚展示股份有限公司 Robot hand-eye coordinate conversion method, device, computer equipment and storage medium
CN115847423A (en) * 2022-12-30 2023-03-28 合肥工业大学 Calibration method for eye-seeing vision system of industrial robot
US20230339112A1 (en) * 2023-03-17 2023-10-26 University Of Electronic Science And Technology Of China Method for robot assisted multi-view 3d scanning measurement based on path planning
CN116922374A (en) * 2023-05-04 2023-10-24 北京思灵机器人科技有限责任公司 Binocular vision calibration method, calibration device, robot and storage medium
CN116494253A (en) * 2023-06-27 2023-07-28 北京迁移科技有限公司 Target object grabbing pose acquisition method and robot grabbing system
CN117021087A (en) * 2023-08-11 2023-11-10 华中科技大学 A robot kinematic parameter calibration method based on visual multi-point pose constraints

Also Published As

Publication number Publication date
CN117817671B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
JP6966582B2 (en) Systems and methods for automatic hand-eye calibration of vision systems for robot motion
CN110355754B (en) Robot hand-eye system, control method, equipment and storage medium
US12030184B2 (en) System and method for error correction and compensation for 3D eye-to-hand coordination
JP6180087B2 (en) Information processing apparatus and information processing method
CN106767393B (en) Hand-eye calibration device and method for robot
CN105729468B (en) A kind of robotic workstation based on the enhancing of more depth cameras
US12159429B2 (en) Hand-eye calibration of camera-guided apparatuses
EP4116043A2 (en) System and method for error correction and compensation for 3d eye-to-hand coordination
CN111775146A (en) A visual alignment method under the multi-station operation of an industrial manipulator
CN113910219A (en) Exercise arm system and control method
JP7035657B2 (en) Robot control device, robot, robot system, and camera calibration method
WO2019146201A1 (en) Information processing device, information processing method, and information processing system
CN113524147B (en) Industrial robot teaching system and method based on 3D camera
CN112958960A (en) Robot hand-eye calibration device based on optical target
CN115629066A (en) A method and device for automatic wiring based on visual guidance
CN116372918A (en) Robot control method, device, equipment, robot and storage medium
CN116323115A (en) Control device, robot arm system and control method for robot arm device
JPH0780790A (en) 3D object grip system
CN116188572B (en) A method and system for calibrating the target area spatial position of a medical surgical robot
US20240407883A1 (en) Calibration method for the automated calibration of a camera with respect to a medical robot, and surgical assistance system
CN117817671B (en) Robot system based on visual guidance and robot system calibration method
US20230321823A1 (en) Robot control device, and robot system
CN110977950B (en) Robot grabbing and positioning method
CN118830927A (en) Surgical robot system with visual detection vibration module and detection method
Pedari et al. Spatial shape estimation of a tendon-driven continuum robotic arm using a vision-based algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant