TY - GEN
T1 - A low-cost stereo vision system for eye-to-hand calibration
AU - Rojas Urzulo, Jesus Abraham
AU - Gonzalez-Barbosa, Jose Joel
AU - Sandoval Castro, Xochitl Yamile
AU - Ruiz Torres, Maximiano Francisco
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In automated systems, there are tasks such as object pick and place. In this task, a vision system detects the coordinates of the work object. The robot uses these coordinates, its link lengths, and its joint values to implement inverse kinematics and move to a point. The vision system obtains the work object coordinates in its own reference system; however, the robot needs to move to a point in the robot's reference system. It is therefore necessary to transform the camera coordinate system into the robot coordinate system, and computing this rigid transformation is a crucial task. This document proposes a method in which a stereo vision system obtains the 3D coordinates of the center of a sphere mounted on the robot end-effector. In this way, we obtain two sets of points in different reference systems. We can find the transformation between the robot and vision system references by finding the rigid transformation that minimizes the Euclidean distance between the two sets of points. Because the real link lengths contain errors derived from the manufacturing and assembly processes, it is also necessary to perform a robot calibration. The error obtained from aligning both sets of points in the first experiment was 3.3136 mm, and after geometrical parameter compensation, this error was reduced to 1.9927 mm, a reduction of 39.86%.
AB - In automated systems, there are tasks such as object pick and place. In this task, a vision system detects the coordinates of the work object. The robot uses these coordinates, its link lengths, and its joint values to implement inverse kinematics and move to a point. The vision system obtains the work object coordinates in its own reference system; however, the robot needs to move to a point in the robot's reference system. It is therefore necessary to transform the camera coordinate system into the robot coordinate system, and computing this rigid transformation is a crucial task. This document proposes a method in which a stereo vision system obtains the 3D coordinates of the center of a sphere mounted on the robot end-effector. In this way, we obtain two sets of points in different reference systems. We can find the transformation between the robot and vision system references by finding the rigid transformation that minimizes the Euclidean distance between the two sets of points. Because the real link lengths contain errors derived from the manufacturing and assembly processes, it is also necessary to perform a robot calibration. The error obtained from aligning both sets of points in the first experiment was 3.3136 mm, and after geometrical parameter compensation, this error was reduced to 1.9927 mm, a reduction of 39.86%.
UR - http://www.scopus.com/inward/record.url?scp=85147550102&partnerID=8YFLogxK
U2 - 10.1109/ROPEC55836.2022.10018743
DO - 10.1109/ROPEC55836.2022.10018743
M3 - Conference contribution
AN - SCOPUS:85147550102
T3 - 2022 IEEE International Autumn Meeting on Power, Electronics and Computing, ROPEC 2022
BT - 2022 IEEE International Autumn Meeting on Power, Electronics and Computing, ROPEC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Autumn Meeting on Power, Electronics and Computing, ROPEC 2022
Y2 - 9 November 2022 through 11 November 2022
ER -