Disclosure of Invention
The invention aims to provide a moving device loaded with a 3D detection unit and a material grabbing method thereof, which can obtain complete 3D information of a material, so that the accuracy and reliability with which a robot arm positions the material are improved.
In order to achieve the above object, the present invention provides a moving device loaded with a 3D detection unit, including a robot arm unit, an end execution unit, a control unit and a 3D detection unit, wherein the end execution unit is disposed at the end of the robot arm unit, the 3D detection unit is disposed on the end execution unit to obtain 3D information of a material, and the control unit guides the robot arm unit to move to the position of the material according to the 3D information of the material and causes the end execution unit to grasp the material.
Optionally, the 3D detection unit includes a dot matrix camera.
Optionally, a flange is disposed at the end of the robot arm unit, so that the robot arm unit and the end execution unit are connected through the flange.
Optionally, the robot arm unit includes a robot arm with n degrees of freedom, where n is greater than or equal to 3.
The invention also provides a material grabbing method, which comprises the following steps:
S1: under a specific working condition, acquiring the grabbing pose P_bo between the robot arm unit and the material, the photographing pose P_bt between the robot arm unit and the end execution unit, and the eye-object pose P_co between the 3D detection unit and the material; obtaining the hand-eye pose P_tc between the 3D detection unit and the end execution unit according to the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co; and performing step S2;
S2: when actually grabbing, acquiring the photographing pose P_bt and the eye-object pose P_co, obtaining the grabbing pose P_bo according to the hand-eye pose P_tc, and performing step S3;
S3: the control unit guides the robot arm unit to move to the position of the material according to the grabbing pose P_bo and causes the end execution unit to grasp the material, and step S2 is performed again.
Optionally, in step S3, when the installation position of the 3D detection unit and/or the end execution unit is changed, step S1 is performed again.
Optionally, in step S1, obtaining the hand-eye pose P_tc according to the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co comprises the following steps:
converting the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co into a grabbing pose matrix M_bo, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a hand-eye pose matrix M_tc according to the grabbing pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co;
converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
Optionally, the hand-eye pose matrix M_tc is obtained according to the formula M_tc = M_bt^(-1) · M_bo · M_co^(-1).
Optionally, in step S2, obtaining the grabbing pose P_bo according to the hand-eye pose P_tc comprises the following steps:
converting the hand-eye pose P_tc, the photographing pose P_bt and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a grabbing pose matrix M_bo according to the hand-eye pose matrix M_tc, the photographing pose matrix M_bt and the eye-object pose matrix M_co;
converting the grabbing pose matrix M_bo into the grabbing pose P_bo.
Optionally, the grabbing pose matrix M_bo is obtained according to the formula M_bo = M_bt · M_tc · M_co.
Optionally, the robot arm unit has an encoder, and the photographing pose P_bt is acquired by reading the encoder.
Optionally, the 3D detection unit acquires the 3D information of the material, and the control unit obtains the eye-object pose P_co according to the 3D information.
According to the invention, the 3D information of a material is acquired through the 3D detection unit, the control unit guides the robot arm unit to move to the position of the material according to the 3D information, and the end execution unit is caused to grab the material, so that the working precision and reliability of the robot arm unit are improved.
Detailed Description
The following describes embodiments of the present invention in more detail with reference to the schematic drawings. The advantages and features of the present invention will become more apparent from the following description. It is to be noted that the drawings are in a very simplified form and are not to precise scale, and are intended merely to facilitate a convenient and clear description of the embodiments of the present invention.
Referring to fig. 1, the moving device loaded with a 3D detection unit provided by the present invention includes a robot arm unit 1, an end execution unit 3, a control unit and a 3D detection unit 2, wherein the end execution unit 3 is disposed at the end of the robot arm unit 1, the 3D detection unit 2 is disposed on the end execution unit 3 to obtain 3D information of a material 4, and the control unit guides the robot arm unit 1 to move to the position of the material 4 according to the 3D information of the material 4 and causes the end execution unit 3 to grasp the material 4.
It should be noted that in the present invention there are a plurality of materials 4, and the plurality of materials 4 are identical in shape.
Further, a flange is provided at the end of the robot arm unit 1, so that the robot arm unit 1 and the end execution unit 3 are connected by the flange.
Further, the robot arm unit 1 includes a robot arm with n degrees of freedom, where n is 3 or more.
Taking a 6-degree-of-freedom robot arm as an example, the material grabbing method of the moving device loaded with the 3D detection unit disclosed by the invention specifically comprises the following steps:
S1: under a specific working condition, acquiring the grabbing pose P_bo between the robot arm unit 1 and the material 4, the photographing pose P_bt between the robot arm unit 1 and the end execution unit 3, and the eye-object pose P_co between the 3D detection unit 2 and the material 4; obtaining the hand-eye pose P_tc between the 3D detection unit 2 and the end execution unit 3 according to the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co; and performing step S2;
S2: when actually grabbing, acquiring the photographing pose P_bt and the eye-object pose P_co, obtaining the grabbing pose P_bo according to the hand-eye pose P_tc, and performing step S3;
S3: the control unit guides the robot arm unit 1 to move to the position of the material 4 according to the grabbing pose P_bo and causes the end execution unit 3 to grasp the material 4, and step S2 is performed again.
Further, in step S3, when the installation position of the 3D detection unit 2 and/or the end execution unit 3 is changed, step S1 needs to be executed again to reacquire the hand-eye pose P_tc.
Here, the grabbing pose P_bo is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4; the photographing pose P_bt is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the end execution unit 3; the eye-object pose P_co is the relative pose from the coordinate system of the 3D detection unit 2 to the coordinate system of the material 4; the hand-eye pose P_tc is the relative pose from the coordinate system of the end execution unit 3 to the coordinate system of the 3D detection unit 2. All coordinate systems in this example follow the right-hand rule.
Specifically, the robot arm unit 1 is manually taught to move to a photographing position, and the 3D detection unit 2 is triggered to take a photograph. Since the 3D detection unit 2 is mounted on the robot arm unit 1 and the motion pose of the robot arm unit 1 follows the Euler transformation, the photographing pose P_bt of the coordinate system of the end execution unit 3 in the base coordinate system of the robot arm unit 1 can be described as (x_bt, y_bt, z_bt, Rx_bt, Ry_bt, Rz_bt). When the 3D detection unit 2 works, the photographing pose P_bt can be read from the encoder of the robot arm unit 1 and uploaded to the control unit.
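For illustration, a pose of the form (x, y, z, Rx, Ry, Rz) can be converted into the 4x4 homogeneous transformation matrix used in the formulas below. This is a minimal sketch assuming the common ZYX Euler convention (rotation about X applied first); the actual convention depends on the robot controller and is not specified in this disclosure.

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert a pose (x, y, z, Rx, Ry, Rz), angles in radians, into a
    4x4 homogeneous transformation matrix (ZYX Euler convention assumed)."""
    x, y, z, rx, ry, rz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    # Elementary rotations about the X, Y and Z axes
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # compose in ZYX order
    M[:3, 3] = [x, y, z]
    return M
```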
In the embodiment of the present invention, the end execution unit 3 is mounted on the end flange of the robot arm unit 1, so the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4, i.e. the grabbing pose P_bo, can be expressed as (x_bo, y_bo, z_bo, Rx_bo, Ry_bo, Rz_bo). In the embodiment, it is obtained by manually teaching the robot arm unit 1 to grab the material 4: during specific operation, a path is manually specified through the teach pendant attached to the robot arm unit 1; after the 3D detection unit 2 finishes photographing at the photographing position, the robot arm unit 1 is controlled to move to the grabbing position of the material 4, at which point the grabbing pose P_bo can be obtained and uploaded to the control unit.
Further, the 3D detection unit 2 includes a dot matrix camera. Specifically, the 3D detection unit 2 in this example may be a dot matrix camera comprising an infrared camera and an infrared dot matrix light source. The working principle is shown in fig. 2:
In the figure, A is the projection plane of the infrared dot matrix light source O, and A' is the imaging plane of the infrared camera O'. The relative position of the infrared dot matrix light source and the infrared camera is fixed. The ray Op is a beam emitted by the infrared dot matrix light source; if the point p is on the plane A, the position at which the point p is projected onto the object is known to lie on the ray Op. Objects at different depths project to different coordinates on the imaging plane of the infrared camera; for example, the points P, P1 and P2 at different depths on the ray Op in fig. 2 are projected on the imaging plane as P', P1' and P2', respectively. The depth of the point P2 in fig. 2 can therefore be derived by similar triangles from the position of its image point P2' and the baseline c, where c is the distance between the center of the infrared dot matrix light source and the center of the infrared camera. The coordinates of the point P2 in the infrared camera coordinate system can then be obtained from these geometric relations.
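As a worked illustration of this similar-triangles relation, the sketch below computes the depth of a projected dot from the baseline c. The focal length f and the disparity d (the offset on the imaging plane between the reference dot position and the observed dot position) are hypothetical names introduced here, since the exact formula of fig. 2 is not reproduced in the text.

```python
def depth_from_disparity(f: float, c: float, d: float) -> float:
    """Structured-light triangulation by similar triangles: z = f * c / d.
    f: focal length in pixels (hypothetical name), c: baseline between light
    source and camera, d: dot disparity in pixels (hypothetical name)."""
    return f * c / d

# Example: f = 580 px, c = 75 mm, d = 12 px  ->  z = 3625 mm
z = depth_from_disparity(580.0, 75.0, 12.0)
```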
In fact, the infrared dot matrix light source emits a plurality of rays, which form points of different depths when projected on the material 4. By encoding the dot matrix image projected by the infrared dot matrix light source, the points of different depths on the plurality of projection rays are placed in one-to-one correspondence with points on the imaging plane of the infrared camera, so that point cloud data of the photographed material 4 can be obtained.
The point cloud data is matched with the surface information of the material 4, and the relative pose from the coordinate system of the dot matrix camera to the coordinate system of the material 4, i.e. the eye-object pose P_co, can be calculated by the control unit and expressed as (x_co, y_co, z_co, Rx_co, Ry_co, Rz_co). It can be seen that the eye-object pose P_co is obtained by the control unit from the image data photographed by the 3D detection unit 2.
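The disclosure does not name a particular matching algorithm. As one possible sketch, the measured point cloud can be registered against a stored surface model of the material 4 with an off-the-shelf ICP routine (here Open3D), yielding the eye-object pose matrix M_co; the file names and the correspondence distance are assumptions for illustration.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D used purely as an illustration

# Surface model of material 4 in the object frame, and the point cloud
# measured by the 3D detection unit in the camera frame (hypothetical files).
model = o3d.io.read_point_cloud("material4_model.pcd")
measured = o3d.io.read_point_cloud("shot.pcd")

result = o3d.pipelines.registration.registration_icp(
    model, measured,
    max_correspondence_distance=5.0,  # mm; depends on sensor noise
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
M_co = result.transformation  # 4x4 pose of the object frame in the camera frame
```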
Further, in step S1, obtaining the hand-eye pose P_tc according to the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co comprises the following steps:
converting the grabbing pose P_bo, the photographing pose P_bt and the eye-object pose P_co into a grabbing pose matrix M_bo, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a hand-eye pose matrix M_tc according to the grabbing pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co;
converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
In specific implementation, when the robot arm unit 1 moves to the photographing position and the 3D detection unit 2 photographs, the control unit obtains in turn the photographing pose P_bt and the eye-object pose P_co; when the taught robot arm unit 1 moves to the grabbing position of the material 4, the control unit obtains the grabbing pose P_bo. After the control unit has acquired the photographing pose P_bt, the eye-object pose P_co and the grabbing pose P_bo, it converts them into the corresponding three-dimensional transformation matrices, namely the grabbing pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co.
Since all the information obtained by the 3D detection unit 2 is described in the coordinate system of the 3D detection unit 2, before the robot arm unit 1 can use the information obtained by the vision system, the relative relationship between the coordinate system of the 3D detection unit 2 and the base coordinate system of the robot arm unit 1 must first be determined, i.e. the 3D detection unit 2 must be calibrated; step S1 is this calibration process.
In the present embodiment, the movement of the robot arm unit 1 is based on the base coordinate system of the robot arm unit 1, while the eye-object pose P_co acquired by the 3D detection unit 2 is based on the coordinate system of the 3D detection unit 2; in order to ensure that the end execution unit 3 can accurately grasp the material 4, the pose relationship between the 3D detection unit 2 and the robot arm unit 1, or between the 3D detection unit 2 and the end execution unit 3, needs to be established.
According to the grabbing pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co, the control unit calculates the hand-eye pose matrix M_tc from the coordinate system of the end execution unit 3 to the coordinate system of the 3D detection unit 2, where the calculation formula is:
M_tc = M_bt^(-1) · M_bo · M_co^(-1)
Then, the control unit converts the hand-eye pose matrix M_tc into the hand-eye pose P_tc, so that the pose relationship between the 3D detection unit 2 and the end execution unit 3 is obtained and the calibration of the 3D detection unit 2 is completed.
Here, converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc allows the operator to refer to the actual numerical values.
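Putting the pieces together, the calibration of step S1 reduces to three matrix operations. The sketch below reuses pose_to_matrix from the earlier sketch; the numeric poses are placeholders standing in for the taught grasp, the encoder reading and the vision measurement.

```python
import numpy as np

# Placeholder poses (x, y, z, Rx, Ry, Rz); real values come from teaching,
# the encoder, and the 3D detection unit respectively.
P_bo = (400.0, 50.0, 120.0, 0.0, 0.0, 1.57)  # base -> material (taught grasp)
P_bt = (350.0, 40.0, 300.0, 0.0, 0.0, 1.57)  # base -> end execution unit
P_co = (10.0, -5.0, 180.0, 0.0, 0.0, 0.0)    # camera -> material

M_bo = pose_to_matrix(P_bo)
M_bt = pose_to_matrix(P_bt)
M_co = pose_to_matrix(P_co)

# Hand-eye calibration: M_tc = M_bt^(-1) · M_bo · M_co^(-1)
M_tc = np.linalg.inv(M_bt) @ M_bo @ np.linalg.inv(M_co)
```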
Then the process proceeds to step S2: when actually grabbing, acquiring the photographing pose P_bt and the eye-object pose P_co, obtaining the grabbing pose P_bo according to the hand-eye pose P_tc, and executing step S3.
In step S2, obtaining the grabbing pose P_bo according to the hand-eye pose P_tc comprises the following steps:
converting the hand-eye pose P_tc, the photographing pose P_bt and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a grabbing pose matrix M_bo according to the hand-eye pose matrix M_tc, the photographing pose matrix M_bt and the eye-object pose matrix M_co;
converting the grabbing pose matrix M_bo into the grabbing pose P_bo.
Specifically, the end execution unit 3 is driven by the robot arm unit 1 to move to any photographing position; the photographing position can be set manually, and the robot arm unit 1 drives the end execution unit 3 to it. The photographing position in each execution of step S2 may be the same or different, as long as the 3D detection unit 2 can photograph the material 4 to be grabbed. When the end execution unit 3 moves to a photographing position, the control unit reads the photographing pose P_bt from the encoder of the robot arm unit 1 and converts the photographing pose P_bt into the photographing pose matrix M_bt.
The 3D detection unit 2 photographs and uploads the image data to the control unit; the control unit processes the received 3D information of the material 4, calculates the eye-object pose P_co, and converts the eye-object pose P_co into the eye-object pose matrix M_co.
The control unit reads the hand-eye pose P_tc obtained in step S1 and converts it into the hand-eye pose matrix M_tc.
According to the hand-eye pose matrix M_tc, the photographing pose matrix M_bt and the eye-object pose matrix M_co, the control unit calculates the grabbing pose matrix M_bo, where the calculation formula is:
M_bo = M_bt · M_tc · M_co
Then the control unit converts the grabbing pose matrix M_bo into the grabbing pose P_bo, thereby obtaining the specific position information of the material 4.
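In code form, step S2 is a single chain of matrix products. Here read_encoder_pose and estimate_eye_object_pose are hypothetical stand-ins for the encoder read and the vision computation described above, and matrix_to_pose is the assumed inverse of the earlier pose_to_matrix sketch.

```python
# Step S2 sketch: recover the grabbing pose from fresh measurements and the
# hand-eye matrix M_tc calibrated once in step S1.
M_bt = pose_to_matrix(read_encoder_pose())         # hypothetical encoder read
M_co = pose_to_matrix(estimate_eye_object_pose())  # hypothetical vision call

M_bo = M_bt @ M_tc @ M_co    # M_bo = M_bt · M_tc · M_co
P_bo = matrix_to_pose(M_bo)  # sent to the robot arm unit in step S3
```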
Then step S3 is executed: the control unit guides the robot arm unit 1 to move to the position of the material 4 according to the grabbing pose P_bo and causes the end execution unit 3 to grasp the material 4, and step S2 is executed again.
Specifically, the control unit sends the grabbing pose P_bo to the robot arm unit 1, drives the robot arm unit 1 to move to the position of the material 4, and then drives the end execution unit 3 to grab the material 4, completing the grabbing work.
Specifically, the end execution unit 3 can be a pneumatic gripper or another component, and the grabbing of the material 4 is realized by controlling the action of the end execution unit 3.
Since the hand-eye pose P_tc has been obtained in step S1, as long as the relative fixing positions of the 3D detection unit 2 and the end execution unit 3 are not changed, the hand-eye pose P_tc remains constant. Therefore, step S1 only needs to be executed once to acquire the hand-eye pose matrix M_tc, after which steps S2 and S3 can be repeated any number of times. That is, steps S2 and S3 are repeated for each material 4 until the robot arm unit 1 has finished grabbing every material 4. If the relative fixing position of the 3D detection unit 2 and/or the end execution unit 3 is changed, step S1 needs to be executed again to recalibrate.
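A minimal sketch of this overall flow follows; all helper functions are hypothetical stand-ins for the control unit's actual interfaces: calibrate once (S1), then photograph, compute and grab (S2, S3) for each material.

```python
def grab_all_materials():
    # Step S1: hand-eye calibration, once per mounting of camera and gripper.
    M_tc = calibrate_hand_eye()                           # hypothetical
    while material_remaining():                           # hypothetical
        # Step S2: photograph and compute the grabbing pose.
        move_to_photo_position()                          # hypothetical
        M_bt = pose_to_matrix(read_encoder_pose())
        M_co = pose_to_matrix(estimate_eye_object_pose())
        M_bo = M_bt @ M_tc @ M_co
        # Step S3: move to the material and grasp it.
        move_to(matrix_to_pose(M_bo))                     # hypothetical
        close_gripper()                                   # hypothetical
```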
In summary, in the moving device loaded with a 3D detection unit and the material grabbing method thereof provided by the embodiment of the present invention, the pose relationship between the 3D detection unit and the end execution unit is established and the complete 3D information of any material is obtained, so that its accurate grabbing pose is obtained; the robot arm unit thus moves to the position of the material under the guidance of the 3D detection unit, and the end execution unit grabs the material, improving the precision and reliability of the operation of the robot arm unit.
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.