
CN110962121B - Motion device for loading 3D detection unit and its material grabbing method - Google Patents


Info

Publication number
CN110962121B
Authority
CN
China
Prior art keywords
pose
eye
matrix
grabbing
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811163269.6A
Other languages
Chinese (zh)
Other versions
CN110962121A (en)
Inventor
周帅骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xinshang Microelectronics Technology Co ltd
Original Assignee
Shanghai Micro Electronics Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Micro Electronics Equipment Co Ltd filed Critical Shanghai Micro Electronics Equipment Co Ltd
Priority to CN201811163269.6A
Publication of CN110962121A
Application granted
Publication of CN110962121B
Legal status: Active


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a motion device carrying a 3D detection unit and a material grabbing method thereof. Under a selected specific working condition, the grabbing pose, the photographing pose, and the eye-object pose, together with their corresponding pose matrices, are collected and calculated to obtain the hand-eye pose between the 3D detection unit and the end execution unit. During actual grabbing, a new photographing pose and eye-object pose are then acquired, and the control unit computes the complete 3D information of the material, from which its accurate grabbing pose is obtained. Guided by the 3D detection unit, the robot arm unit moves to the grabbing pose so that the end execution unit grasps the material, which improves the positioning accuracy and reliability of the robot arm unit with respect to the material.

Description

Motion device carrying a 3D detection unit and material grabbing method thereof
Technical Field
The invention relates to the field of machine vision, and in particular to a motion device carrying a 3D detection unit and a material grabbing method thereof.
Background
In modern automated production, with the development of industrial robots, machine vision systems are widely used in material transportation, working-condition monitoring, finished-product inspection, quality control, and similar fields. The purpose of machine vision is to provide the robot with information about the object being operated on. Research in machine vision covers object recognition (detecting an object in an image), pose estimation (calculating the position and orientation of the object in the camera coordinate system), and camera calibration (determining the position and orientation of the camera relative to the robot). In this way, an object pose can be converted into a robot pose.
In a robot system, camera installations fall into two major categories: in one, the camera is mounted outside the robot arm, is fixed relative to the robot base (the world coordinate system), and does not move with the arm; in the other, the camera is mounted on the arm and moves with it.
In the prior art, a vision sensor is used to measure an object and obtain its pose information. Mounting the vision sensor at the end of the robot arm, so that it moves with the arm, gives it a wider application range and suits more working occasions. However, due to technical limitations, the end of a robot arm is extremely difficult to move at a constant speed, and its error is far larger than that of a workpiece stage. The accuracy achievable with a common vision sensor is therefore low, the algorithms of existing automatic handling machines cannot obtain complete 3D information, and the robot cannot grasp accurately, which affects its positioning accuracy and reliability.
Disclosure of Invention
The invention aims to provide a motion device carrying a 3D detection unit and a material grabbing method thereof, which can obtain complete 3D information and thereby improve the positioning accuracy and reliability of the robot arm with respect to the material.
To achieve the above object, the present invention provides a motion device carrying a 3D detection unit, comprising a robot arm unit, an end execution unit, a control unit, and a 3D detection unit. The end execution unit is disposed at the end of the robot arm unit, the 3D detection unit is disposed on the end execution unit to obtain 3D information of a material, and the control unit guides the robot arm unit to move to the position of the material according to that 3D information and causes the end execution unit to grasp the material.
Optionally, the 3D detection unit includes a dot matrix camera.
Optionally, a flange is disposed at the end of the robot arm unit, so that the robot arm unit and the end execution unit are connected through the flange.
Optionally, the robot arm unit includes a robot arm with n degrees of freedom, where n is greater than or equal to 3.
The invention also provides a material grabbing method, which comprises the following steps:
S1: under a specific working condition, acquire the grabbing pose P_bo between the robot arm unit and the material, the photographing pose P_bt between the robot arm unit and the end execution unit, and the eye-object pose P_co between the 3D detection unit and the material; obtain the hand-eye pose P_tc between the 3D detection unit and the end execution unit from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co; and perform step S2;
S2: during actual grabbing, acquire the photographing pose P_bt and the eye-object pose P_co, obtain the grabbing pose P_bo from the hand-eye pose P_tc, and perform step S3;
S3: the control unit guides the robot arm unit to move to the position of the material according to the grabbing pose P_bo and causes the end execution unit to grasp the material, and step S2 is performed again.
Optionally, in step S3, when the installation position of the 3D detection unit and/or the end execution unit is changed, step S1 is performed again.
Optionally, in step S1, obtaining the hand-eye pose P_tc from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co comprises the following steps:
convert the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co into a grabbing pose matrix M_bo, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the hand-eye pose matrix M_tc from the grabbing pose matrix M_bo, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
Optionally, the hand-eye pose matrix M_tc is obtained according to the formula M_tc = M_bt^-1 · M_bo · M_co^-1.
Optionally, in step S2, obtaining the grabbing pose P_bo from the hand-eye pose P_tc comprises the following steps:
convert the hand-eye pose P_tc, the photographing pose P_bt, and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the grabbing pose matrix M_bo from the hand-eye pose matrix M_tc, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the grabbing pose matrix M_bo into the grabbing pose P_bo.
Optionally, the grabbing pose matrix M_bo is obtained according to the formula M_bo = M_bt · M_tc · M_co.
Optionally, the robot arm unit has an encoder, and the photographing pose P_bt is acquired by reading the encoder.
Optionally, the 3D detection unit acquires 3D information of the material, and the control unit obtains the eye-object pose P_co from the 3D information.
According to the invention, the 3D detection unit acquires 3D information of a material, and the control unit guides the robot arm unit to move to the position of the material according to that 3D information and causes the end execution unit to grasp the material, which improves the working accuracy and reliability of the robot arm unit.
Drawings
Fig. 1 is a schematic view of a motion device for loading a 3D detection unit according to the present invention;
FIG. 2 is a schematic diagram of a dot matrix camera according to the present invention;
In the figures: 1 - robot arm unit; 2 - 3D detection unit; 3 - end execution unit; 4 - material.
Detailed Description
The following describes embodiments of the present invention in more detail with reference to the schematic drawings. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and are not to precise scale; they serve only to explain the embodiments of the present invention conveniently and clearly.
Referring to Fig. 1, the motion device carrying a 3D detection unit provided by the present invention includes a robot arm unit 1, an end execution unit 3, a control unit, and a 3D detection unit 2. The end execution unit 3 is disposed at the end of the robot arm unit 1, and the 3D detection unit 2 is disposed on the end execution unit 3 to obtain 3D information of a material 4. The control unit guides the robot arm unit 1 to move to the position of the material 4 according to the 3D information of the material 4 and causes the end execution unit 3 to grasp the material 4.
It should be noted that there are a plurality of materials 4 in the present invention, and the plurality of materials 4 are identical in shape.
Further, a flange is provided at the end of the robot arm unit 1, so that the robot arm unit 1 and the end effector unit 3 are connected by the flange.
Further, the robot arm unit 1 includes a robot arm with n degrees of freedom, where n is 3 or more.
Taking a robot arm with 6 degrees of freedom as an example, the invention discloses the material grabbing method of the motion device carrying a 3D detection unit, which specifically comprises the following steps:
S1: under a specific working condition, acquire the grabbing pose P_bo between the robot arm unit 1 and the material 4, the photographing pose P_bt between the robot arm unit 1 and the end execution unit 3, and the eye-object pose P_co between the 3D detection unit 2 and the material 4; obtain the hand-eye pose P_tc between the 3D detection unit 2 and the end execution unit 3 from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co; and perform step S2;
S2: during actual grabbing, acquire the photographing pose P_bt and the eye-object pose P_co, obtain the grabbing pose P_bo from the hand-eye pose P_tc, and perform step S3;
S3: the control unit guides the robot arm unit 1 to move to the position of the material 4 according to the grabbing pose P_bo and causes the end execution unit 3 to grasp the material 4, and step S2 is performed again.
Further, in step S3, when the installation position of the 3D detection unit 2 and/or the end execution unit 3 is changed, step S1 must be executed again to reacquire the hand-eye pose P_tc.
Here, the grabbing pose P_bo is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4; the photographing pose P_bt is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the end execution unit 3; the eye-object pose P_co is the relative pose from the coordinate system of the 3D detection unit 2 to the coordinate system of the material 4; and the hand-eye pose P_tc is the relative pose from the coordinate system of the end execution unit 3 to the coordinate system of the 3D detection unit 2. All coordinate systems in this example follow the right-hand rule.
Specifically, the robot arm unit 1 is moved to a photographing position by manual teaching, and the 3D detection unit 2 is triggered to photograph. Since the 3D detection unit 2 is mounted on the robot arm unit 1 and the motion pose of the robot arm unit 1 follows the Euler transformation, the photographing pose P_bt from the base coordinate system of the robot arm unit 1 to the coordinate system of the end execution unit 3 can be described as (x_bt, y_bt, z_bt, Rx_bt, Ry_bt, Rz_bt). When the 3D detection unit 2 operates, the photographing pose P_bt can be read from the encoder of the robot arm unit 1 and uploaded to the control unit.
In the embodiment of the present invention, the end execution unit 3 is mounted on the end flange of the robot arm unit 1, so the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4, i.e., the grabbing pose P_bo, can be expressed as (x_bo, y_bo, z_bo, Rx_bo, Ry_bo, Rz_bo); in this embodiment it is obtained by manually teaching the robot arm unit 1 to grasp the material 4. In operation, a path is specified manually through the teach pendant attached to the robot arm unit 1: after the 3D detection unit 2 finishes photographing at the photographing position, the robot arm unit 1 is driven to the grasping position of the material 4, at which point the grabbing pose P_bo can be obtained and uploaded to the control unit.
Further, the 3D detection unit 2 includes a dot matrix camera. Specifically, the 3D detection unit 2 in this example may be a dot matrix camera comprising an infrared camera and an infrared dot matrix light source. Its working principle is shown in Fig. 2:
In the figure, A is the projection plane of the infrared dot matrix light source O, and A' is the imaging plane of the infrared camera O'. The relative position of the infrared dot matrix light source and the infrared camera is fixed. The ray Op is a beam emitted by the infrared dot matrix light source; given the point p on the plane A, the position at which the beam strikes the object is known to lie on the ray Op. Objects at different depths project to different coordinates on the imaging plane of the infrared camera: for example, the points P, P1, and P2 at different depths on the ray Op in Fig. 2 project onto the imaging plane as p', p1', and p2', respectively. The depth of the point P2 in Fig. 2 can be expressed as:
z_P2 = (f · c) / d_P2
where c is the distance between the center of the infrared dot matrix light source and the center of the infrared camera, f is the focal length of the infrared camera, and d_P2 is the disparity of the image point p2' relative to the reference projection of the ray Op on the imaging plane. The coordinates of the point P2 in the infrared camera coordinate system can then be obtained from these geometric relations.
In practice, the infrared dot matrix light source emits many rays, and the points where they strike the material 4 lie at different depths. By encoding the dot pattern projected by the infrared dot matrix light source, the points of different depths on the projection rays of the infrared dot matrix light source are placed in one-to-one correspondence with points on the imaging plane of the infrared camera, so that point cloud data of the photographed material 4 can be obtained.
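
As an illustration of this triangulation and encoding step, the following sketch back-projects matched dots into 3D points in the infrared camera frame. It is a minimal sketch under stated assumptions: the matched pixel coordinates, per-dot disparities, and focal length f_px are assumed inputs, and a pinhole model with the principal point at the image origin is assumed, since the patent gives no camera parameters.

    import numpy as np

    def dots_to_point_cloud(uv, disparity_px, f_px, c):
        """Back-project matched dots into the infrared camera frame.

        uv           -- (N, 2) dot pixel coordinates, principal point at (0, 0)
        disparity_px -- (N,) disparity of each dot against the reference pattern
        f_px         -- focal length in pixels (assumed known from calibration)
        c            -- baseline between the dot matrix light source and the camera
        """
        uv = np.asarray(uv, dtype=float)
        d = np.asarray(disparity_px, dtype=float)
        valid = d > 0                      # unmatched dots carry no depth
        z = f_px * c / d[valid]            # triangulated depth, z = f * c / d
        x = uv[valid, 0] * z / f_px        # pinhole back-projection
        y = uv[valid, 1] * z / f_px
        return np.column_stack([x, y, z])  # (M, 3) point cloud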
The point cloud data is matched against the surface information of the material 4, and the control unit can then calculate the relative pose from the coordinate system of the dot-matrix structured-light camera to the coordinate system of the material 4, i.e., the eye-object pose P_co, which can be expressed as (x_co, y_co, z_co, Rx_co, Ry_co, Rz_co). The eye-object pose P_co is thus obtained by the control unit from the image data photographed by the 3D detection unit 2.
Further, in step S1, obtaining the hand-eye pose P_tc from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co comprises the following steps:
convert the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co into a grabbing pose matrix M_bo, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the hand-eye pose matrix M_tc from the grabbing pose matrix M_bo, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
In a specific implementation, when the robot arm unit 1 moves to the photographing position and the 3D detection unit 2 photographs, the control unit obtains the photographing pose P_bt and the eye-object pose P_co in sequence; when the robot arm unit 1 is taught to move to the grasping position of the material 4, the control unit obtains the grabbing pose P_bo. Once the control unit has acquired the photographing pose P_bt, the eye-object pose P_co, and the grabbing pose P_bo, it converts them into the corresponding three-dimensional transformation matrices, namely the grabbing pose matrix M_bo, the photographing pose matrix M_bt, and the eye-object pose matrix M_co.
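
The conversion between a pose (x, y, z, Rx, Ry, Rz) and its 4x4 homogeneous transformation matrix can be sketched as follows. This sketch is illustrative rather than part of the patent: it assumes an 'xyz' Euler convention with angles in degrees, which the embodiment does not specify, and uses numpy and scipy.

    # Minimal sketch of the pose <-> matrix conversion (not from the patent).
    # Assumption: 'xyz' Euler convention, angles in degrees.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def pose_to_matrix(pose):
        """(x, y, z, Rx, Ry, Rz) -> 4x4 homogeneous transformation matrix."""
        x, y, z, rx, ry, rz = pose
        M = np.eye(4)
        M[:3, :3] = Rotation.from_euler('xyz', [rx, ry, rz], degrees=True).as_matrix()
        M[:3, 3] = [x, y, z]
        return M

    def matrix_to_pose(M):
        """4x4 homogeneous transformation matrix -> (x, y, z, Rx, Ry, Rz)."""
        rx, ry, rz = Rotation.from_matrix(M[:3, :3]).as_euler('xyz', degrees=True)
        return (M[0, 3], M[1, 3], M[2, 3], rx, ry, rz)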
Since all the information obtained by the 3D detection unit 2 is described in the coordinate system of the 3D detection unit 2, before the robot arm unit 1 can use the information obtained by the vision system, the relative relationship between the coordinate system of the 3D detection unit 2 and the base coordinate system of the robot arm unit 1 must first be determined. This is the calibration of the 3D detection unit 2, and step S1 is that calibration process.
In the present embodiment, the motion of the robot arm unit 1 is based on the base coordinate system of the robot arm unit 1, while the eye-object pose P_co acquired by the 3D detection unit 2 is based on the coordinate system of the 3D detection unit 2. To ensure that the end execution unit 3 can accurately grasp the material 4, the pose relationship between the 3D detection unit 2 and the robot arm unit 1, or between the 3D detection unit 2 and the end execution unit 3, must be established.
From the grabbing pose matrix M_bo, the photographing pose matrix M_bt, and the eye-object pose matrix M_co, the control unit calculates the hand-eye pose matrix M_tc between the coordinate system of the end execution unit 3 and the coordinate system of the 3D detection unit 2, where the calculation formula is:
M_tc = M_bt^-1 · M_bo · M_co^-1
then, the control unit makes the hand-eye pose matrix MtcConvert into hand-eye pose PtcTherefore, the pose relation between the 3D detection unit 2 and the tail end execution unit 3 is obtained, and the calibration work of the 3D detection unit 2 is completed.
Here, converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc allows the operator to refer to its actual numerical values.
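
Using the pose-matrix helpers sketched above, the calibration of step S1 reduces to one line of matrix algebra. The function name calibrate_hand_eye is hypothetical; the body simply evaluates the formula above.

    def calibrate_hand_eye(P_bo, P_bt, P_co):
        """Step S1: hand-eye pose from one taught grasp and one photograph."""
        M_bo = pose_to_matrix(P_bo)   # base -> material (taught grabbing pose)
        M_bt = pose_to_matrix(P_bt)   # base -> end execution unit (photo pose)
        M_co = pose_to_matrix(P_co)   # 3D detection unit -> material
        # M_tc = M_bt^-1 . M_bo . M_co^-1
        M_tc = np.linalg.inv(M_bt) @ M_bo @ np.linalg.inv(M_co)
        return matrix_to_pose(M_tc)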
The method then proceeds to step S2: during actual grabbing, the photographing pose P_bt and the eye-object pose P_co are acquired, the grabbing pose P_bo is obtained from the hand-eye pose P_tc, and step S3 is executed.
In step S2, obtaining the grabbing pose P_bo from the hand-eye pose P_tc comprises the following steps:
convert the hand-eye pose P_tc, the photographing pose P_bt, and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the grabbing pose matrix M_bo from the hand-eye pose matrix M_tc, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the grabbing pose matrix M_bo into the grabbing pose P_bo.
Specifically, the end execution unit 3 is driven by the robot arm unit 1 to any photographing position; the photographing position can be set manually, and the robot arm unit 1 drives the end execution unit 3 to it. The photographing positions in different executions of step S2 may be the same or different, provided that the 3D detection unit 2 can photograph the material 4 to be grasped. When the end execution unit 3 reaches a photographing position, the control unit reads the photographing pose P_bt from the encoder of the robot arm unit 1 and converts the photographing pose P_bt into the photographing pose matrix M_bt.
The 3D detection unit 2 photographs and uploads the image data to the control unit; the control unit processes the received 3D information of the material 4, calculates the eye-object pose P_co, and converts the eye-object pose P_co into the eye-object pose matrix M_co.
The control unit reads the hand-eye pose P_tc obtained in step S1 and converts it into the hand-eye pose matrix M_tc.
From the hand-eye pose matrix M_tc, the photographing pose matrix M_bt, and the eye-object pose matrix M_co, the control unit calculates the grabbing pose matrix M_bo according to the formula
M_bo = M_bt · M_tc · M_co
Then, the control unit converts the grabbing pose matrix M_bo into the grabbing pose P_bo, thereby obtaining the specific position information of the material 4.
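
In the same illustrative style, step S2 reduces to the single product M_bo = M_bt · M_tc · M_co; the helper name compute_grab_pose is hypothetical.

    def compute_grab_pose(P_tc, P_bt, P_co):
        """Step S2: grabbing pose from the calibrated hand-eye pose and a new photo."""
        M_tc = pose_to_matrix(P_tc)   # end execution unit -> 3D detection unit
        M_bt = pose_to_matrix(P_bt)   # base -> end execution unit (new photo pose)
        M_co = pose_to_matrix(P_co)   # 3D detection unit -> material
        M_bo = M_bt @ M_tc @ M_co     # base -> material
        return matrix_to_pose(M_bo)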
Step S3 is then executed: the control unit guides the robot arm unit 1 to move to the position of the material 4 according to the grabbing pose P_bo and causes the end execution unit 3 to grasp the material 4, after which step S2 is executed again.
Specifically, the control unit sends the grabbing pose P_bo to the robot arm unit 1, drives the robot arm unit 1 to the position of the material 4, and then drives the end execution unit 3 to grasp the material 4, completing the grabbing operation.
Specifically, the end execution unit 3 may be a pneumatic gripper or another component, and the material 4 is grasped by controlling the action of the end execution unit 3.
Since the hand-eye pose P_tc has already been obtained in step S1, and it remains constant as long as the relative mounting positions of the 3D detection unit 2 and the end execution unit 3 are unchanged, step S1 need only be executed once to acquire the hand-eye pose matrix M_tc; thereafter, steps S2 and S3 can be repeated many times. That is, steps S2 and S3 may be repeated for each material 4 until the robot arm unit 1 has grasped every material 4. If the relative mounting position of the 3D detection unit 2 and/or the end execution unit 3 changes, step S1 must be executed again to recalibrate.
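
Putting the steps together, the control flow of steps S1 to S3 can be sketched as follows; read_encoder_pose, detect_eye_object_pose, and move_and_grasp stand in for robot- and camera-specific interfaces and are hypothetical names, as are the taught poses from step S1.

    # Calibrate once (S1), then repeat S2/S3 for each material while the
    # mounting of the 3D detection unit and the end execution unit is unchanged.
    P_tc = calibrate_hand_eye(P_bo_taught, P_bt_taught, P_co_taught)

    for _ in range(number_of_materials):
        P_bt = read_encoder_pose()          # S2: photographing pose from the encoder
        P_co = detect_eye_object_pose()     # S2: eye-object pose from the 3D unit
        P_bo = compute_grab_pose(P_tc, P_bt, P_co)
        move_and_grasp(P_bo)                # S3: move to the material and grasp it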
In summary, in the motion device carrying a 3D detection unit and the material grabbing method thereof provided by the embodiment of the present invention, the pose relationship between the 3D detection unit and the end execution unit is established so that the complete 3D information of any material, and hence its accurate grabbing pose, can be obtained. The robot arm unit then moves to the position of the material under the guidance of the 3D detection unit, and the end execution unit grasps the material, which improves the accuracy and reliability of the operation of the robot arm unit.
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A material grabbing method is characterized by comprising the following steps:
S1: acquire the grabbing pose P_bo between a robot arm unit and a material, the photographing pose P_bt between the robot arm unit and an end execution unit, and the eye-object pose P_co between a 3D detection unit and the material; obtain the hand-eye pose P_tc between the 3D detection unit and the end execution unit from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co; and perform step S2;
S2: during actual grabbing, acquire the photographing pose P_bt and the eye-object pose P_co, obtain the grabbing pose P_bo from the hand-eye pose P_tc, and perform step S3;
S3: a control unit guides the robot arm unit to move to the position of the material according to the grabbing pose P_bo and causes the end execution unit to grasp the material, and step S2 is performed.
2. The material grabbing method according to claim 1, wherein in step S3, when the installation position of the 3D detection unit and/or the end execution unit is changed, step S1 is performed.
3. The material grabbing method according to claim 1, wherein in step S1, obtaining the hand-eye pose P_tc from the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co comprises the following steps:
convert the grabbing pose P_bo, the photographing pose P_bt, and the eye-object pose P_co into a grabbing pose matrix M_bo, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the hand-eye pose matrix M_tc from the grabbing pose matrix M_bo, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
4. The material grabbing method according to claim 3, wherein the hand-eye pose matrix M_tc is obtained according to the formula M_tc = M_bt^-1 · M_bo · M_co^-1.
5. The material grabbing method according to claim 1, wherein in step S2, obtaining the grabbing pose P_bo from the hand-eye pose P_tc comprises the following steps:
convert the hand-eye pose P_tc, the photographing pose P_bt, and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt, and an eye-object pose matrix M_co, respectively;
obtain the grabbing pose matrix M_bo from the hand-eye pose matrix M_tc, the photographing pose matrix M_bt, and the eye-object pose matrix M_co;
convert the grabbing pose matrix M_bo into the grabbing pose P_bo.
6. The material grabbing method according to claim 5, wherein the grabbing pose matrix M_bo is obtained according to the formula M_bo = M_bt · M_tc · M_co.
7. The material grabbing method according to claim 1, wherein the robot arm unit has an encoder, and the photographing pose P_bt is acquired by reading the encoder.
8. The material grabbing method according to claim 1, wherein the 3D detection unit acquires 3D information of the material, and the control unit obtains the eye-object pose P_co from the 3D information.
CN201811163269.6A 2018-09-30 2018-09-30 Motion device for loading 3D detection unit and its material grabbing method Active CN110962121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811163269.6A CN110962121B (en) 2018-09-30 2018-09-30 Motion device for loading 3D detection unit and its material grabbing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811163269.6A CN110962121B (en) 2018-09-30 2018-09-30 Motion device for loading 3D detection unit and its material grabbing method

Publications (2)

Publication Number Publication Date
CN110962121A CN110962121A (en) 2020-04-07
CN110962121B (en) 2021-05-07

Family

ID=70029545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811163269.6A Active CN110962121B (en) 2018-09-30 2018-09-30 Motion device for loading 3D detection unit and its material grabbing method

Country Status (1)

Country Link
CN (1) CN110962121B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103759716A (en) * 2014-01-14 2014-04-30 清华大学 Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
CN103895042A (en) * 2014-02-28 2014-07-02 华南理工大学 Industrial robot workpiece positioning grabbing method and system based on visual guidance
CN106272424A (en) * 2016-09-07 2017-01-04 华中科技大学 A kind of industrial robot grasping means based on monocular camera and three-dimensional force sensor
CN107053173A (en) * 2016-12-29 2017-08-18 芜湖哈特机器人产业技术研究院有限公司 The method of robot grasping system and grabbing workpiece
CN107443377A (en) * 2017-08-10 2017-12-08 埃夫特智能装备股份有限公司 Sensor robot coordinate system conversion method and Robotic Hand-Eye Calibration method

Also Published As

Publication number Publication date
CN110962121A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN108109174B (en) Robot monocular guidance method and system for randomly sorting scattered parts
CN112010024B (en) Automatic container grabbing method and system based on laser and vision fusion detection
US9279661B2 (en) Information processing apparatus and information processing method
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
CN107561082B (en) Inspection system
JP3946711B2 (en) Robot system
CN107009358B (en) Single-camera-based robot disordered grabbing device and method
JP2020011339A5 (en) Robot system control method, control program, recording medium, control device, robot system, article manufacturing method
CN112958960B (en) Robot hand-eye calibration device based on optical target
CN204585232U (en) Capture robot pose and the movement locus navigation system of online workpiece
JP2010152550A (en) Work apparatus and method for calibrating the same
CN108098762A (en) A kind of robotic positioning device and method based on novel visual guiding
CN104786226A (en) Posture and moving track positioning system and method of robot grabbing online workpiece
CN112577423B (en) Method for machine vision position location in motion and application thereof
KR20110095700A (en) Industrial Robot Control Method for Work Object Pickup
US12128571B2 (en) 3D computer-vision system with variable spatial resolution
JP2019049467A (en) Distance measurement system and distance measurement method
CN107685329A (en) A kind of robot workpiece positioning control system and method
CN110962121B (en) Motion device for loading 3D detection unit and its material grabbing method
JP2015132523A (en) Position/attitude measurement apparatus, position/attitude measurement method, and program
CN118023797B (en) An instruction-guided automated welding system and method
CN110977950B (en) Robot grabbing and positioning method
CN117260682B (en) A high-precision automatic teaching device and teaching method for the entire workspace of a robot
JP7660686B2 (en) ROBOT CONTROL DEVICE, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
CN114147849B (en) Method for recognizing side forms based on 2D plane vision and form removal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250714

Address after: 3 / F, building 19, building 8, No. 498, GuoShouJing Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee after: Shanghai Xinshang Microelectronics Technology Co.,Ltd.

Country or region after: China

Address before: 201203 Pudong New Area East Road, No. 1525, Shanghai

Patentee before: SHANGHAI MICRO ELECTRONICS EQUIPMENT (GROUP) Co.,Ltd.

Country or region before: China
