Forum Migration Notice
Update (2026-01-12): The user forums will be put into read-only mode on the 21st of January, 00:00 CET, to prepare for the data migration.
We're transitioning to a more modern community platform by the beginning of next year. Learn about the upcoming changes and what to expect.
Vision coordinate system
Hi all,
I've been trying to develop a robust application using vision-guided motion.
We have the following camera available: http://www.theimagingsource.com/en_US/products/zoom-cameras/gige-cmos-color/dfkz12gp031/
A quick sketch of the situation:
- A vacuum gripper is attached to the robot (ABB IRB 4600) to pick randomly placed bodies
- Those bodies are moved underneath a glue gun, which applies the glue to the body.
- The body is placed in a randomly placed housing.
The goal is to detect the bodies and housings on the carrier using a robot-mounted camera.
We therefore use the following approach:
- Take a picture of the complete carrier and send the position of all bodies and housings to the robot using socket messaging.
- Take a close-up picture of the body by moving the robot lower and to the correct x and y position.
The problem is getting the correct x and y position from the second vision step.
In the first step we can use the outline of the carrier as a reference position; in the second step, however, that reference is no longer visible.
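One approach we're considering is to use the robot's own reported pose at capture time as the reference instead: with a hand-eye-calibrated, downward-looking camera, a pixel offset from the image centre can be back-projected into a base-frame offset using the pinhole model. A minimal sketch (all names and numbers are illustrative; it assumes known focal lengths in pixels and a known working distance to the carrier surface):

```python
import math

def pixel_to_base(u, v, img_w, img_h, fx, fy, work_dist, cam_xy, cam_rot_z):
    """Convert a pixel position in the close-up image to robot base-frame XY.

    Assumes the camera looks straight down and the controller reported the
    camera's base-frame pose (x, y, rotation about Z in radians) at the
    moment the image was taken.
    fx, fy: focal lengths in pixels (from intrinsic calibration).
    work_dist: camera-to-carrier distance in mm.
    """
    # Pixel offset from the image centre (optical axis)
    du = u - img_w / 2.0
    dv = v - img_h / 2.0
    # Back-project to metric offsets in the camera frame (pinhole model)
    dx_cam = du * work_dist / fx
    dy_cam = dv * work_dist / fy
    # Rotate the camera-frame offset into the base frame and add the
    # camera position reported by the robot controller
    c, s = math.cos(cam_rot_z), math.sin(cam_rot_z)
    x = cam_xy[0] + c * dx_cam - s * dy_cam
    y = cam_xy[1] + s * dx_cam + c * dy_cam
    return x, y
```

For example, a detection 100 px right of centre with fx = 2000 px and a 400 mm working distance maps to a 20 mm offset in the base frame. The accuracy of this hinges on the hand-eye calibration and on knowing the working distance, so the carrier height has to be repeatable.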
How are you handling those situations?
Thanks in advance,
Jeroen