## Estimating Camera-to-World

Vision Visp also comes with another tracker called the Dot2 tracker, which looks for elliptical blobs in the camera feed. The strategy for this kind of tracking is to print a known pattern of circles on a piece of paper, fix the paper to the flattest object you can find, detect the dots, and work out the transformation between the camera’s pixel space and the 2D coordinate system of the paper. In this experiment we want to determine the accuracy of this dot tracking approach.

With the moving edge tracker we were able to find a transform between image space and the real world (three translation and three rotation degrees of freedom). With the dot tracker, because the dots all lie on a plane, this is not possible. We can only determine the “homography matrix”, which is a linear transform between two “homogeneous” coordinate spaces (2D coordinate systems in this case). Simply put, we can’t tell the difference between the camera having a wide field of view vs. the piece of paper being far away from the camera. However, this is not an issue in this experiment.

If you want to brush up on the details of homogeneous coordinates and homography matrices you can find everything you need to know here and here. Essentially, we change screen coordinates from (i,j) to (i,j,1) and model coordinates from (x,y) to (x,y,1) to make them homogeneous. Estimating the homography matrix then amounts to computing a 3×3 matrix that converts between them. Once you have that matrix you can convert between the two coordinate systems easy peasy using matrix multiplication (remembering to divide by the third coordinate to get the result back to (i,j,1) format). Actually calculating the homography matrix is a little fiddly, but luckily it’s all in the OpenCV library (thanks!). You only need a set of 4 screen coordinates and 4 model coordinates to uniquely determine the homography matrix.
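To make the multiply-then-divide step concrete, here is a minimal pure-Python sketch of applying a 3×3 homography to a model point. The matrix values are made up for illustration; they are not from the experiment.

```python
# Minimal sketch (pure Python): applying a 3x3 homography to a 2D point.
# The example matrix H below is illustrative, not a real calibration result.

def apply_homography(H, x, y):
    """Map model point (x, y) through H and de-homogenize the result."""
    # Lift (x, y) to homogeneous form (x, y, 1) and multiply by H.
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    # Divide by the third coordinate to get back to (i, j, 1) form.
    return u / w, v / w

# A simple example homography: scale by 2 and translate by (5, 10).
H = [[2, 0, 5],
     [0, 2, 10],
     [0, 0, 1]]

print(apply_homography(H, 3, 4))  # (11.0, 18.0)
```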
However, if you have more than 4 pairs you can “average” the results to get a homography matrix that fits them all as best it can. This is also in OpenCV, but in fact the library goes further. OpenCV also provides a robust method of estimating the homography matrix: `void cvFindHomography(src_points, dst_points, H_passback, method=0, mask_passback)`

If the method parameter is set to CV_LMEDS, any screen-model coordinate pairs that do not fit the (estimated) homography transformation very well are ignored and have no effect on the homography estimation (the rejected outliers are reported in mask_passback). This is exceedingly useful if, for example, you are getting screen coordinates from a tracker that has gone haywire (maybe the object it was tracking got occluded). Bad points will not affect the estimation results, and you can work out which points are misbehaving.
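The core idea behind the LMedS outlier rejection can be sketched in a few lines: points whose transfer error under the estimated homography is far worse than the median error get flagged. This is a deliberate simplification of OpenCV’s actual estimator (which also re-fits H from random minimal subsets); the function names here are made up for illustration.

```python
# Simplified sketch of the outlier-flagging idea behind CV_LMEDS: pairs whose
# transfer error under H is far from the median error are marked as outliers.
# This is NOT OpenCV's implementation, just the intuition behind it.

def transfer_error(H, src, dst):
    """Reprojection error of one (model -> screen) correspondence under H."""
    x, y = src
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    du, dv = u / w - dst[0], v / w - dst[1]
    return (du * du + dv * dv) ** 0.5

def flag_outliers(H, src_pts, dst_pts, factor=3.0):
    """Return a mask (like mask_passback): True for inliers, False for outliers."""
    errs = [transfer_error(H, s, d) for s, d in zip(src_pts, dst_pts)]
    median_err = sorted(errs)[len(errs) // 2]
    # Anything much worse than the median residual is treated as an outlier.
    return [e <= factor * (median_err + 1e-9) for e in errs]

# Identity homography, four well-behaved pairs and one tracker gone haywire.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
dst = [(0, 0), (1, 0), (0, 1), (1, 1), (50, 50)]
print(flag_outliers(H, src, dst))  # [True, True, True, True, False]
```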
So for our experiments we wrote a piece of software revolving around Vision Visp’s Dot2 trackers and OpenCV’s robust homography matrix estimation. We fixed a 6×6 grid pattern of dots to a piece of wood, tracked the dots in real time in the camera feed, and estimated the homography matrix from the detected dots in the image and the corresponding dots in the (known) grid pattern.
Our software was initialized with the position of the 36 dots in the image (using Vision Visp’s inbuilt calibration GUI). From then on, our algorithm proceeded as follows:
- Estimate the homography matrix robustly, making note of any outliers.
- Use this homography matrix to predict the expected on-screen position of the outliers from their known position on the paper.
- Reinitialize the trackers on the screen using their expected position.
- Repeat with next frame.
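The per-frame loop above can be sketched as follows. The tracker handling and `estimate_homography` function are hypothetical stand-ins (the real code uses Vision Visp’s dot trackers and OpenCV’s robust estimator); here a fake estimator with a fixed H stands in so the control flow can be shown end to end.

```python
# Hypothetical sketch of the per-frame loop: estimate H robustly, predict
# where outlier trackers should be, and reinitialize them there.

def project(H, pt):
    """Map a model point through H and de-homogenize."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def process_frame(model_pts, tracked_pts, estimate_homography):
    # 1. Robustly estimate H, noting outliers via the returned mask.
    H, inlier_mask = estimate_homography(model_pts, tracked_pts)
    # 2-3. Predict where each outlier *should* be and reinitialize it there.
    for i, ok in enumerate(inlier_mask):
        if not ok:
            tracked_pts[i] = project(H, model_pts[i])
    return H, tracked_pts

# Stand-in estimator: pretends H is the identity and flags any tracker that
# has drifted far from its model position.
def fake_estimator(model_pts, tracked_pts):
    H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    mask = [abs(t[0] - m[0]) + abs(t[1] - m[1]) < 1.0
            for m, t in zip(model_pts, tracked_pts)]
    return H, mask

model = [(0, 0), (1, 0), (0, 1), (9, 9)]
tracked = [(0, 0), (1, 0), (0, 1), (40, 40)]  # the last tracker went haywire
H, fixed = process_frame(model, list(tracked), fake_estimator)
print(fixed)  # [(0, 0), (1, 0), (0, 1), (9.0, 9.0)]
```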

Hi,

Thanks for the excellent article and sharing those amazing results. It was very interesting. I was wondering whether you’d integrated this into ROS already? I’d be interested in testing that kind of visual method for calibrating our robot automatically (and I can think of lots of other places where I’d be happy to use this!).

Cheers,

Ugo

Hi Ugo,

I just updated the article at the bottom to reflect our integration efforts. Yes, I am trying to integrate this software with our lab CNC machine. Progress is a little slower than I had hoped, but it should be done sometime this year. The code for these experiments is in that repository too. It is a little rough, but post feedback in the forum and I will get things running for you.

Tom

Hi Tom,

Thanks a lot for that. I hope I can find an intern to look at your code and play with it soon-ish.

Cheers,

Ugo

I’m wondering if this technique might be good for a pick and place machine.

Yeah. A long-term aim is to try this on a CNC. Commercial systems already do this to some extent. If you have a machine we could give it a try.
