Friday, June 11, 2010

Vision for Robots 5 of n: Hough Transform

If you are just joining us, it may be helpful to start at part 1. Otherwise, now that we have used morphological operations to remove the noise from the image, we can apply the Hough transform to find the circles formed by the outlines of the objects.

The Hough transform searches the parameter space of the image for features that match. In the case of circles, it generates every possible circle within a given range and attempts to match each generated circle against the circles in the image. Wherever a pixel of the generated circle coincides with a pixel in the image, a vote is added. If a circle accumulates enough votes, it is considered found in the image.

The picture below provides a rough approximation of how this works. The blue circle represents the circle in the image, the orange circle represents the test circle and the overlap is shown in green. Each overlapping pixel adds a vote.

X    Y    R    Votes
...  ...  ...  ...
121  32   13   2
122  32   13   0
123  32   13   2
124  32   13   4
125  32   13   16
...  ...  ...  ...



OpenCV provides a built-in function for circle detection, which we demonstrated before using Harpia and also using OpenCV directly. The current implementation first looks for the highest number of votes in the XY plane, then looks for the radius that matches best. This is why the detector has trouble matching concentric circles.

The code is fairly straightforward, but it is important to note that the circle detector seems to work better with a white background than with a black one. The detection parameters listed here and in the official example provide a good starting point, but the optimal settings will depend on your needs and lighting conditions. Trial and error seems to find good values relatively quickly.

    // strel_size is the size of the structuring element
    cv::Size strel_size;
    strel_size.width = 3;
    strel_size.height = 3;
    // Create an elliptical structuring element
    cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE,strel_size);
    // Apply an opening morphological operation twice using the structuring element
    cv::morphologyEx(img_bin_,img_bin_,cv::MORPH_OPEN,strel,cv::Point(-1, -1),2);

    // Convert White on Black to Black on White by inverting the image
    cv::bitwise_not(img_bin_,img_bin_);
    // Blur the image to improve detection
    cv::GaussianBlur(img_bin_, img_bin_, cv::Size(7, 7), 2, 2 );

    // See http://opencv.willowgarage.com/documentation/cpp/feature_detection.html?highlight=hough#HoughCircles
    // The vector circles will hold the position and radius of the detected circles
    std::vector<cv::Vec3f> circles;
    // Detect circles with radii between 20 and 400 whose centers are at least 70 pixels apart.
    // Remaining parameters: accumulator resolution dp = 1, Canny high threshold = 140,
    // accumulator vote threshold = 15.
    cv::HoughCircles(img_bin_, circles, CV_HOUGH_GRADIENT, 1, 70, 140, 15, 20, 400 );

    for( size_t i = 0; i < circles.size(); i++ )
    {
         // round the floats to an int
         cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
         int radius = cvRound(circles[i][2]);
         // draw the circle center
         cv::circle( img_out_, center, 3, cv::Scalar(0,255,0), -1, 8, 0 );
         // draw the circle outline
         cv::circle( img_out_, center, radius+1, cv::Scalar(0,0,255), 2, 8, 0 );
         // Debugging Output
         ROS_INFO("x: %d y: %d r: %d",center.x,center.y, radius);
    }
You can get the full version of the newest code by running roscd ihr_opencv && git pull origin master, or by starting at the beginning if you skipped that step. Once you have updated, the code can be built by running roscd ihr_opencv && rosmake.

The prerecorded demo can be launched using roslaunch.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/demo_hough.launch

For those of you playing along at home, if you set up your webcam and get an orange you can try the live version this way.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/live_hough.launch



Now that we have found the coordinates of the objects, we can pass this information back as a ROS message for other nodes to use. This will be covered in the next episode.

3 comments:

Anonymous said...

Pretty cool, dude!

One observation is that a lot of systems like this will use an oddly colored light (for example the PS3's magenta globe) so the vision system can perform a filter by color.

This drastically reduces the search space and is a cheap trick to improve accuracy.

beyondorange said...

Awww man, where is the next episode??? Its the most important!!

I Heart Robotics said...

As soon as we can allocate some resources to it. Hopefully in the next week or two.