Robot-Held Camera Calibration in IRC5
Hello everyone,
I was wondering if anyone could help me with best practices for calibrating a Cognex camera held by an IRB1600. I've read through the 3HAC044251-001 Integrated Vision application manual and feel pretty comfortable on the grid-calibration side of things; however, most of the examples it discusses refer to a fixed-camera setup, not a robot-held one. I assume there is a best practice for properly calibrating the camera's TCP. I was thinking I could use my normal pointer to do a regular 4-point TCP tool frame definition and take advantage of the auto-focus meter to make sure my Z distance is the same at each orientation. If there is a better way, I would greatly appreciate hearing it.
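For what it's worth, the math behind a 4-point TCP definition can be sketched as a small least-squares problem: every flange pose points the (unknown) tool tip at the same fixed point in space, and differencing pairs of poses eliminates that point. This is just a sketch of the idea in Python/NumPy (function names are mine, not anything from the controller); the IRC5 does the equivalent for you during the tool definition routine.

```python
import numpy as np

def tcp_from_poses(rotations, translations):
    """Least-squares TCP from N flange poses (rotation matrix R_i, position t_i
    in base frame) that all place the unknown tool tip at the same fixed point q:
        R_i @ tcp + t_i = q   for every pose i.
    Subtracting pairs of poses eliminates q:
        (R_i - R_j) @ tcp = t_j - t_i
    and the stacked system is solved in a least-squares sense.
    """
    A, b = [], []
    n = len(rotations)
    for i in range(n):
        for j in range(i + 1, n):
            A.append(rotations[i] - rotations[j])
            b.append(translations[j] - translations[i])
    A = np.vstack(A)
    b = np.concatenate(b)
    tcp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return tcp
```

The practical takeaway is that the quality of the result depends on how different the four orientations are (well-spread rotations make the stacked matrix well-conditioned), which is why the pendant warns about poses being too similar.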
In regard to the grid calibration, how important is it for the checkerboard to be precisely located in space relative to the robot? Just to be safe, I 3D printed a holder to locate it against a datum on our fixture table. I was thinking that, once I have a good TCP, I could pick up a wobj on the checkerboard using the camera (I guess a regular pointer would work, too) and perform the grid calibration from there. Unfortunately, my calibration checkerboard does not have a fiducial, so I'm not sure whether that throws a wrench into things.
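Once the checkerboard wobj is defined, carrying a vision result from that wobj into base frame is a single homogeneous transform. A minimal sketch (my own helper names; the quaternion ordering assumed here is RAPID's `orient` convention with the scalar component first, q1..q4 = w, x, y, z):

```python
import numpy as np

def pose_to_matrix(xyz, quat):
    """4x4 homogeneous transform from a position and a unit quaternion
    (w, x, y, z), matching RAPID's orient component order q1..q4."""
    w, x, y, z = quat
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = xyz
    return T

def wobj_point_to_base(T_base_wobj, p_wobj):
    """Map a point expressed in the checkerboard wobj into base frame."""
    return (T_base_wobj @ np.append(p_wobj, 1.0))[:3]
```

This is also why the wobj only needs to be as accurate as your downstream tolerances: any error in the wobj definition propagates rigidly into every vision result expressed through it.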
Does this sound like I'm on the right track, or am I missing something? Any help would be greatly appreciated. Thanks!
Answers
So I was reading the Integrated Vision manual some more and found this sentence on page 76:
"In case the camera is mounted on the robot it has to move to the calibration posebefore taking a photo during production."
Is that common? That would be very difficult for me to work with, as I have potentially hundreds of different poses I would need to image from. I did some research on the Cognex side and saw something called N-point camera calibration that may be more in line with what I am looking for.
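To make the manual's restriction concrete: the grid calibration bakes in the camera's pose at calibration time, so a result taken from some other pose is only meaningful if you carry it through the relative flange motion yourself. A sketch of that correction, under the assumptions that the camera is rigidly mounted to the flange and the target still sits at the calibrated standoff/plane (function name is mine; this illustrates the geometry, it is not a substitute for a proper moving-camera or N-point calibration):

```python
import numpy as np

def remap_vision_result(T_base_flange_now, T_base_flange_calib, p_calibframe):
    """Re-express a grid-calibrated vision result when the photo was taken
    from a pose other than the calibration pose.

    p_calibframe is the base-frame position the calibration math would report
    as if the photo had been taken at the calibration pose. Carrying it
    through the relative flange transform gives the true base-frame position:
        p_base = T_now @ inv(T_calib) @ p_calibframe
    Only valid for a rigid camera mount and a target in the calibrated plane.
    """
    p = np.append(p_calibframe, 1.0)
    return (T_base_flange_now @ np.linalg.inv(T_base_flange_calib) @ p)[:3]
```

Intuitively: if the flange has simply translated 5 mm in X since calibration, every reported point translates 5 mm in X with it.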
Any advice or pointers would be greatly appreciated. Thanks!