Using vision to modify the object frame of a robot-held work object.

I am setting up an arm to pick up a part, move it to a camera to find the center point based on where it was gripped, and then move to a stationary tool for a deburring process that has to be done at a precise location on the part, with an accuracy of about .002 inches.

I am thinking I would just modify the object frame of the robot-held work object using the data from the camera.

Does anyone have experience doing this? Most of what I have found covers a fixed camera finding a part's location for the robot to pick it up, not modifying the position of an already-picked part.

I set up a test calibrating the robot to the camera grid, though I don't know how that calibration will apply to the process I am considering. This is my first robot project, so I am still trying to piece everything together.

Comments

  • Aurelien (France)
    Hello Afletch.
    It is not easy to answer you in a few words; your problem is not complex, but it needs to be illustrated.
    First of all, if you want to use positioning from the camera values, you need to calibrate the camera, so use the grid to do that.
    You have certainly already worked with picking a part detected by a camera, so you know you can use the work object's oframe or PDispSet (I prefer the second method).
    But in your case, you must modify a wobjdata held by the robot.
    Here is how I would do it:
    - Create a special tool on the robot carrying the calibration grid. Calibrate the camera with the on-board grid and teach the robot the position of the grid mark (at the same distance as the part to be detected). I would create my integrated work object on the camera frame from the grid. With this, I get a good value to put into the camera wobjdata.
    - For the deburring path, take a photo of the part and apply the offset with PDispSet in the camera wobjdata, then teach the deburring path. If you have already taught the deburring path in RobotStudio, you can correct its position with the oframe of the camera wobjdata.
    Normally, after that, all future deburring should be placed correctly.
    If someone has another way ...
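    To make the two options above concrete, here is a minimal RAPID sketch; the variable and work object names are illustrative, not from an actual program, and tx, ty, rz are assumed to come from the camera result (wobj_camera is assumed to be a PERS wobjdata):

        VAR pose dispFrame;
        ! Build a displacement from the camera's X/Y translation and Z rotation
        dispFrame := [[tx, ty, 0], OrientZYX(rz, 0, 0)];
        ! Option 1: write the offset into the work object's object frame
        wobj_camera.oframe := dispFrame;
        ! Option 2 (preferred above): apply a program displacement instead
        PDispSet dispFrame;
        ! ... deburring moves here ...
        PDispOff;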
  • afletch7 (Utah, US)
    Hello Aurelien,

    Thank you for the information. I didn't know about PDispSet, but it seems that is a simpler way to do it. So I would get my position data from the camera, activate the displacement, do my deburr process, and then end the PDisp, correct?

    And to follow up on creating the tool, can you explain that in more detail, please? Are you saying to create a part for the robot to hold that has the grid on it? And what do you mean by the "grid mark"?

    From what I understand: attach the grid to the arm, move it to the location where the camera would take the picture so the grid center is at the camera center, and then, to create the wobjdata, move the arm in the X and Y directions the camera sees?
  • Aurelien (France)
    Hello Afletch,
    afletch7 said:
    And also to follow up on creating the tool. Can you explain more detail of that please? Are you saying to create a part for the robot to hold that has the grid on it? And what do you mean by the "grid mark"?
    Yes, in my mind you can create a tool to mount on the robot with the calibration grid; I am sending a PDF with the calibration grid. This grid is used for the Integrated Vision camera (or other Cognex cameras). The grid mark is the frame defined by the X and Y mark directions.

    afletch7 said:
    Thank you for the information, I didn't know about the pdispset but seems that is a simpler way to do it. So I would get my position data defined from the camera and then activate that, do my deburr process and the end the pdisp correct?
    Look at PDispSet and PDispOff in the documentation. Your process should be repositioned correctly if the camera calibration is correct. It is very important to place the calibration grid at the same distance as your part so that the offset value corresponds to the part.
    afletch7 said:
    From what I am understanding, attach the grid to the arm, move it to the location where the camera would take the picture so the grid center is at camera center. And then to create the wobjdata move the arm in the x&y direction the camera sees?
    For sure, you should have the center of the camera frame (defined by the grid) near the center of the image, but it is not necessary to have it exactly in the center. The most important thing is to have a wobjdata placed exactly at the position of the calibration grid's frame. This gives the same reference between the camera's calculation and reality.
    To be sure of having a good, correctly positioned path, I always detect my part, apply the offset in the oframe (TX, TY, RZ), and teach the path with the offset wobjdata. After that, I clean up the oframe and use PDispSet.

    FYI: if needed, you can get the pose value from the camera data like this...
    VAR pose poseOffset;
    ! TX, TY = translation and RZ = rotation about Z from the camera result
    poseOffset := [[TX,TY,0],OrientZYX(RZ,0,0)];
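    That pose can then drive the deburring moves via PDispSet. A hedged sketch follows; the point, speed, and work object names are assumptions, not from the thread:

        ! Shift the taught path by the measured part offset
        PDispSet poseOffset;
        MoveL pDeburrStart, v50, fine, tool_gripper \WObj:=wobj_camera;
        ! ... remaining deburring moves, all displaced by poseOffset ...
        PDispOff;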
  • afletch7 (Utah, US)
    Hello,

    I think I am getting this now.
    Aurelien said:

    For sure, you should have the center of the camera frame (defined by the grid) near the center of the image. But it is not necessary to have it exactly in the center. The most important thing is to have a wobjdata placed exactly at the position of the frame of the calibration grid. This allows to have the same reference between the calculation of the camera and reality.

    So the reason to have the grid on the robot would be to ensure that the center of the grid and the center of my gripper's wobj are in the same spot, giving me an accurate offset for my part.

    Would this be the same as if I had a fixed grid and positioned the gripper center on the grid mark? (I ask because I already had parts fabricated for a fixed position, thinking this was what I needed, and would hate to have to create new parts.) It also seems I will not need to perform the robot-camera calibration, because I am only using the offset distance from the camera center to move the robot, as opposed to a picking operation where the robot needs to know the coordinates the camera gives (per the example in the Integrated Vision application manual).

    And just to make sure I understand the whole process conceptually:

    - Approach the picture position with tool_gripper and wobj_camera_location
    - Take the picture and use the data to displace the path (PDispSet) relative to the robot-mounted wobj
    - Move to the stationary tool and machine the part
    - Drop the part, referencing tool_gripper and wobj_work_station
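    The four steps above might look like this in RAPID, assuming an Integrated Vision camera named cam1 and illustrative point names (none of these identifiers are from the thread; this is a sketch, not a tested program):

        VAR cameratarget camTarget;

        PROC VisionDeburrCycle()
            ! 1. Approach the picture position with the gripped part
            MoveL pPicture, v200, fine, tool_gripper \WObj:=wobj_camera_location;
            ! 2. Take a picture and turn the result into a displacement
            CamReqImage cam1;
            CamGetResult cam1, camTarget;
            PDispSet camTarget.cframe;
            ! 3. Move to the stationary tool and run the deburring path
            MoveL pDeburrStart, v50, fine, tool_gripper \WObj:=wobj_camera_location;
            ! ... deburring moves ...
            PDispOff;
            ! 4. Drop the part at the work station
            MoveL pDrop, v200, fine, tool_gripper \WObj:=wobj_work_station;
        ENDPROC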

  • nomad5t5 (ABB, Canada)
    The camera/robot should be calibrated with the grid at the correct working distance from the items you are detecting/measuring; otherwise you will not receive accurate position information from the camera. If the feature you need to detect is at a height of 30 mm above the surface, then the calibration plate should be placed at 30 mm above the surface as well.