Wednesday, June 30, 2010

CfP: IROS 2010 Workshop: Defining and Solving Realistic Perception Problems in Personal Robotics

Willow Garage has a call for papers out for an IROS 2010 workshop on robot perception. The deadline for submissions is July 23rd.

Tuesday, June 29, 2010

deepoo: a cleaning robot



"Antigérmenes: con technologia Ag+, muy efectiva para la eliminación"

Monday, June 28, 2010

Cool Tool: Metric Wire Strippers


I finally managed to find a nice combination wire stripper and crimper that doesn't feel like it is instantly going to fall apart.

The Ratio brand metric wire stripper/crimper/screw cutter feels like a high-quality tool, on par with the one produced by Klein Tools in the US.

Update: We now recommend and sell this tool, which is the best we have found.

Sunday, June 27, 2010

Waiting for Kinect

If you are looking for some reading material while you wait for November 4th, perhaps these papers will give you some ideas about the future of robotics.

Kinect can be pre-ordered here at Amazon.

Also, I think "Structured Light" would make a great band name. Once again, if you have not seen it already, Hizook!

PIXHAWK quadrotor updates

Here.

Thursday, June 24, 2010

Robots on vacation



Posting will be a little light for the next week, which makes a great excuse to post this video from BigDog's vacation last year.

Wednesday, June 23, 2010

Microsoft Kinect

Things are pretty busy here at I Heart Robotics but it looks like they are about to get a lot busier.

Hizook has more information about the Microsoft Kinect. More details should be coming out soon.

Friday, June 18, 2010

CHARLI & DARwIn



Charli is a humanoid robot being developed by the Robotics & Mechanisms Laboratory (RoMeLa) at Virginia Tech. The team of robots and humans from RoMeLa will be participating next week at the RoboCup 2010 soccer games in Singapore.

Here is the team dressed up and ready for the competition. Charli is looking good with his all-new black exterior. More photos can be found on Charli's Facebook page.


Also at RoboCup 2010 is DARwIn. This robot is a more conventional RoboCup soccer player built by RoMeLa and the UPenn GRASP Lab.

If anyone has any ideas as to what the exterior of a humanoid robot should be called, please post a comment. Armor sounds too heavy and shell sounds like seafood.

Also as an editorial note, moving forward all blog posts based on press releases emailed to us will be tagged with the label PR.

Thursday, June 17, 2010

Another day, another great Arduino based robot



The robot's website seems to be down at the moment, but it should hopefully be back up soon. While we are waiting, you can check out this page, which has information about the robot it was inspired by.

Tuesday, June 15, 2010

PR2 plays pool



Now that the PR2 has finished folding the laundry, it's time for it to make some money by hustling the humans down at the pool hall. With enough practice and coding the PR2 might soon be ready to take on Deep Green.

Monday, June 14, 2010

Friday, June 11, 2010

Playback: Friday Night at the Movies



Mini Grand Challenge 2010




Test footage from the Cornell AUV Team




2D SLAM with Mobile Robot Programming Toolkit






It looks like a research group has started working with the Parrot AR.Drone.




GOOOOOOOAL!!

Vision for Robots 5 of n: Hough Transform

If you are just joining us it may be helpful to start at part 1. Otherwise, now that we have used morphological operations to remove the noise from the image, we can apply the Hough transform to find the circles formed by the outlines of the objects.

The Hough transform searches through the parametric space of the image to find features that match. In the case of circles, it generates all possible circles within a given range of radii and attempts to match each generated circle against the image. Wherever a pixel of a generated circle matches a pixel in the image, a vote is added. If a candidate circle acquires enough votes, it is considered found in the image.

The picture below provides a rough approximation of how this works. The blue circle represents the circle in the image, the orange circle represents the test circle, and the overlap is shown in green. Each overlapping pixel adds a vote.

X    Y    R    Votes
...  ...  ...  ...
121  32   13   2
122  32   13   0
123  32   13   2
124  32   13   4
125  32   13   16
...  ...  ...  ...
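
To make the voting process concrete, here is a minimal sketch of a voting pass for circles of a single fixed radius. This is illustrative only: it is not part of the demo code or the actual OpenCV implementation, the function name is made up, it assumes a binary edge image as input, and the header path may vary with your OpenCV version.

#include <opencv/cv.h>
#include <cmath>

// Accumulate votes for circle centers at one fixed radius.
// Every white pixel in the edge image votes for all centers that
// lie exactly 'radius' pixels away from it.
cv::Mat voteForCircles (const cv::Mat &edges, int radius)
{
    // One vote counter per candidate center (x, y)
    cv::Mat votes = cv::Mat::zeros(edges.size(), CV_32S);
    for (int y = 0; y < edges.rows; y++)
    {
        for (int x = 0; x < edges.cols; x++)
        {
            if (edges.at<uchar>(y, x) == 0)
                continue;   // not an edge pixel, no vote
            for (int a = 0; a < 360; a += 2)
            {
                double t = a * CV_PI / 180.0;
                int cx = cvRound(x - radius * std::cos(t));
                int cy = cvRound(y - radius * std::sin(t));
                if (cx >= 0 && cx < edges.cols && cy >= 0 && cy < edges.rows)
                    votes.at<int>(cy, cx) += 1;
            }
        }
    }
    // Peaks in this accumulator correspond to likely circle centers
    return votes;
}

The full transform repeats this vote for every radius in the search range, which is why the table above lists votes per (X, Y, R) combination.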



OpenCV provides a built-in function for circle detection, which we demonstrated before using Harpia and also using OpenCV directly. The current implementation first looks for the highest numbers of votes in the XY plane and then, for each candidate center, looks for the radius that matches best. This is why the detector has problems matching concentric circles.
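
A rough sketch of that second stage is shown below, mirroring the voting loop above. It assumes a detected center and a binary edge image; bestRadius is a hypothetical helper for illustration, not the actual OpenCV code.

// Given one candidate center, count supporting edge pixels at each
// radius in [r_min, r_max] and keep only the best one.
int bestRadius (const cv::Mat &edges, cv::Point center, int r_min, int r_max)
{
    int best_r = r_min;
    int best_votes = 0;
    for (int r = r_min; r <= r_max; r++)
    {
        int votes = 0;
        for (int a = 0; a < 360; a += 2)
        {
            double t = a * CV_PI / 180.0;
            int x = cvRound(center.x + r * std::cos(t));
            int y = cvRound(center.y + r * std::sin(t));
            if (x >= 0 && x < edges.cols && y >= 0 && y < edges.rows &&
                edges.at<uchar>(y, x) > 0)
                votes++;
        }
        if (votes > best_votes)
        {
            best_votes = votes;
            best_r = r;
        }
    }
    // Only one radius survives per center, so concentric circles
    // sharing this center cannot both be reported.
    return best_r;
}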

The code is fairly straightforward, but it is important to note that the circle detector seems to work better with a white background than with a black one. The detection parameters listed here and in the official example provide a good starting point, but the optimal settings will depend on your needs and lighting conditions. Trial and error seems to find good values relatively quickly.

// strel_size is the size of the structuring element
    cv::Size strel_size;
    strel_size.width = 3;
    strel_size.height = 3;
    // Create an elliptical structuring element
    cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE,strel_size);
    // Apply an opening morphological operation using the structuring element to the image twice
    cv::morphologyEx(img_bin_,img_bin_,cv::MORPH_OPEN,strel,cv::Point(-1, -1),2);

    // Convert White on Black to Black on White by inverting the image
    cv::bitwise_not(img_bin_,img_bin_);
    // Blur the image to improve detection
    cv::GaussianBlur(img_bin_, img_bin_, cv::Size(7, 7), 2, 2 );

    // See http://opencv.willowgarage.com/documentation/cpp/feature_detection.html?highlight=hough#HoughCircles
    // The vector circles will hold the position and radius of the detected circles
    std::vector<cv::Vec3f> circles;
    // Detect circles that have a radius between 20 and 400 and whose centers are at least 70 pixels apart
    cv::HoughCircles(img_bin_, circles, CV_HOUGH_GRADIENT, 1, 70, 140, 15, 20, 400 );

    for( size_t i = 0; i < circles.size(); i++ )
    {
         // round the floats to an int
         cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
         int radius = cvRound(circles[i][2]);
         // draw the circle center
         cv::circle( img_out_, center, 3, cv::Scalar(0,255,0), -1, 8, 0 );
         // draw the circle outline
         cv::circle( img_out_, center, radius+1, cv::Scalar(0,0,255), 2, 8, 0 );
         // Debugging Output
         ROS_INFO("x: %d y: %d r: %d",center.x,center.y, radius);
    }
You can get the full version of the newest code by running roscd ihr_opencv && git pull origin master, or by starting at the beginning if you skipped that step. Once you have updated, the code can be built by running roscd ihr_opencv && rosmake.

The prerecorded demo can be launched using roslaunch.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/demo_hough.launch

If you have an orange and a webcam, you can try this at home with the live version; set up your webcam and start the live demo this way.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/live_hough.launch



Now that we have found the coordinates of the objects, we can pass this information back as a ROS message for other nodes to use. This will be covered in the next episode.

Thursday, June 10, 2010

Vision for Robots 4 of n: Morphological Operators

Continuing the previous work on building a computer vision system for robotics, today we will look at morphological operators. If you have already installed the I Heart Robotics code repository, the command roscd ihr_opencv && git pull origin master should download the latest examples. If you haven't installed the example code already you may want to start at the beginning.

Morphological operations can be performed on images to improve the recognition of desirable features and to remove undesired features such as noise. In this case we want to remove noise from the image, so we will perform a morphological open, which is an erode operation followed by a dilate operation.

To perform these operations we first need to construct a structuring element. The structuring element is an MxN matrix containing binary values that will be used to modify each element in the original image. OpenCV has functions to automatically generate rectangular, elliptical, and cross-shaped structuring elements; in general a structuring element can have any shape, but these are the most common and are useful for most applications. The image below shows, from left to right: a 5x3 rectangular element, a 5x5 elliptical element, and a 5x5 cross element.
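
If you want to inspect these shapes yourself, the standalone sketch below (not part of the demo code, and the header path may vary with your OpenCV version) generates the three elements with getStructuringElement and prints them as 0s and 1s.

#include <opencv/cv.h>
#include <iostream>

// Print a structuring element as a grid of 0s and 1s
void printElement (const cv::Mat &strel)
{
    for (int i = 0; i < strel.rows; i++)
    {
        for (int j = 0; j < strel.cols; j++)
            std::cout << (int) strel.at<uchar>(i, j) << " ";
        std::cout << std::endl;
    }
    std::cout << std::endl;
}

int main ()
{
    // 5x3 rectangular, 5x5 elliptical, and 5x5 cross-shaped elements
    printElement(cv::getStructuringElement(cv::MORPH_RECT,    cv::Size(5, 3)));
    printElement(cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));
    printElement(cv::getStructuringElement(cv::MORPH_CROSS,   cv::Size(5, 5)));
    return 0;
}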



The structuring element is then used to compare the value of each pixel with those of its neighbors. An erode operation with a 3x3 cross element will set the output pixel to 1 only if the input pixel is 1 and the pixels above, below, and to the left and right are all also 1.


The dilate operation shown below is the inverse of the erode operation. In the case of a 3x3 cross element the output pixel and the pixels above, below and to the left and right will all be set to 1 if the input is a 1.


The open operation is an erode followed by a dilate, whereas the close operation is a dilate followed by an erode. There are many more morphological operations that can be performed to improve the quality of the image; however, in this program we will only be using the open operation.
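
For reference, the two forms in the short sketch below should produce the same result. This assumes img_bin_ is the 8-bit binary image produced in the previous step; it is only meant to show the equivalence, not to replace the demo code.

    cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));

    // Open in one call ...
    cv::Mat opened_direct;
    cv::morphologyEx(img_bin_, opened_direct, cv::MORPH_OPEN, strel);

    // ... or as an explicit erode followed by a dilate
    cv::Mat opened_manual;
    cv::erode(img_bin_, opened_manual, strel);        // shrink white regions, removing small specks
    cv::dilate(opened_manual, opened_manual, strel);  // grow the surviving regions back to size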


After segmenting the image by hue and converting it into a binary image, we are able to remove noise using an open operation applied three times in a row. This is effectively the same as setting output = dilate(dilate(dilate(erode(erode(erode(input)))))).

for(int i = 0; i < img_out_.rows; i++)
    {
      for(int j = 0; j < img_out_.cols; j++)
      {
        // The output pixel is white if the input pixel
        // hue is orange and saturation is reasonable
        if(img_hue_.at<uchar>(i,j) > 4 &&
           img_hue_.at<uchar>(i,j) < 28 &&
           img_sat_.at<uchar>(i,j) > 96) {
          img_bin_.at<uchar>(i,j) = 255;
        } else {
          img_bin_.at<uchar>(i,j) = 0;
        }
      }
    }

    // strel_size is the size of the structuring element
    cv::Size strel_size;
    strel_size.width = 3;
    strel_size.height = 3;
    // Create an elliptical structuring element
    cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE,strel_size);
    // Apply an opening morphological operation to the image three times
    cv::morphologyEx(img_bin_,img_out_,cv::MORPH_OPEN,strel,cv::Point(-1, -1),3);

The code can be built by running

roscd ihr_opencv && rosmake

This demo can be run using the prerecorded .bag file

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/demo_morph.launch

If you have an orange and a webcam you can try this at home with the live version.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/live_morph.launch



In the next episode, we will cover the use of the Hough transform to detect the circular outlines of the test oranges, giving us the location of the center and the radius of each.

Wednesday, June 9, 2010

FAA testing ScanEagle UAVs

While regulatory progress has been painfully slow, it looks like the FAA has decided to take off the brakes and start testing ScanEagle UAVs in controlled airspace. This will help them determine what will be needed to support a regulatory framework for civilian UAV operations.

[From: DIYDrones]

Sale at RadioShack

While it may be a surprise to some, RadioShack is still in business. They are also profitable, probably due to the high margins on overpriced cell phone contracts. Interestingly, according to the chart shown below from page 24 of their annual report, products like electronic components and speaker cables, which make up the technical category, account for only 4.6% of revenue. It is clear why they are rebranding themselves "The Shack".


Recently, one of the local RadioShacks near here sent their entire stock of protoboards back to headquarters, probably so that their online store didn't stay out of stock. They also appear to be trying to increase their short-term profits by lowering their parts inventory so they can convince someone to buy them.

Now they have dropped their insane parts prices down to merely expensive, with a poorly advertised 25% discount if you buy 5 or more items from the drawers. I was also able to buy a Parallax RFID reader for ten bucks. Fortunately, the one I purchased came with two tags. It looks like they have no plans to restock what is left of their already limited selection of electronic components.

Friday, June 4, 2010

Vision for Robots 3 of n: Hue Segmentation

Previous parts are here.

The goal for this step is to segment the image now that it has been split into hue, saturation, and value. One thing to note is that CvBridge imports the image in BGR, not RGB. This may cause some confusion, but the previous information has been clarified.

The first thing is to separate the multi-channel HSV matrix into separate hue and saturation matrices for easy manipulation. Then we will iterate over each pixel to see if its hue is close to orange and if the pixel is colorful. Pixels that match our filter will be set to white and those that do not will be set to black. This information can also be used to mask the background from the output image.

void imageCallback (const sensor_msgs::ImageConstPtr & msg_ptr)
  {
    // Convert ROS Input Image Message to IplImage
    try
    {
      cv_input_ = bridge_.imgMsgToCv (msg_ptr, "bgr8");
    }
    catch (sensor_msgs::CvBridgeException error)
    {
      ROS_ERROR ("CvBridge Input Error");
    }

    // Convert IplImage to cv::Mat
    img_in_ = cv::Mat (cv_input_).clone ();
    // output = input
    img_out_ = img_in_.clone ();
    // Convert Input image from BGR to HSV
    cv::cvtColor (img_in_, img_hsv_, CV_BGR2HSV);
    // Zero Matrices
    img_hue_ = cv::Mat::zeros(img_hsv_.rows, img_hsv_.cols, CV_8U);
    img_sat_ = cv::Mat::zeros(img_hsv_.rows, img_hsv_.cols, CV_8U);
    img_bin_ = cv::Mat::zeros(img_hsv_.rows, img_hsv_.cols, CV_8U);
    // HSV Channel 0 -> img_hue_ & HSV Channel 1 -> img_sat_
    int from_to[] = { 0,0, 1,1};
    cv::Mat img_split[] = { img_hue_, img_sat_};
    cv::mixChannels(&img_hsv_, 1, img_split, 2, from_to, 2);

    for(int i = 0; i < img_out_.rows; i++)
    {
      for(int j = 0; j < img_out_.cols; j++)
      {
        // The output pixel is white if the input pixel
        // hue is orange and saturation is reasonable
        if(img_hue_.at<uchar>(i,j) > 4 &&
           img_hue_.at<uchar>(i,j) < 28 &&
           img_sat_.at<uchar>(i,j) > 128) {
          img_bin_.at<uchar>(i,j) = 255;
        } else {
          img_bin_.at<uchar>(i,j) = 0;
          // Clear pixel blue output channel
          img_out_.at<uchar>(i,j*3+0) = 0;
          // Clear pixel green output channel
          img_out_.at<uchar>(i,j*3+1) = 0;
          // Clear pixel red output channel
          img_out_.at<uchar>(i,j*3+2) = 0;
        }
      }
    }

    // Display Input image
    cv::imshow ("input", img_in_);
    // Display Binary Image
    cv::imshow ("binary image", img_bin_);
    // Display segmented image
    cv::imshow ("segmented output", img_out_);

    // Needed to  keep the HighGUI window open
    cv::waitKey (3);
  }

To launch the image segmentation demo using the prerecorded .bag file:

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/demo_segment.launch


If your camera is working and you have an orange, you can try the live version.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/live_segment.launch



Video of Step 3 of n.

In the next step we will use morphological operations to remove noise and smooth the edge of the circles. Then we will perform circle detection to locate each object in the image.

Tuesday, June 1, 2010

Vision for Robots 2 of n: Colorspace Conversion

If you have not completed step 1 of n, go here first.

Now that we have ROS set up and working, let's look at our problem. We have a video of two oranges being viewed by a moving camera. We want our robot to be able to interact with these oranges, so first we are going to separate them from the background and use circle detection to identify their size and location within the image. The process of separating the oranges from the background will take advantage of their distinct color. However, since the brightness of the oranges is not constant, we will first need to convert from blue, green, red (BGR) to hue, saturation, value (HSV) and segment the image by hue.

ROS is a robotics framework that provides a way of building software for robots that makes your life easier. It provides two primary means of communication between different programs, called nodes. The first is a client/server interface (services), which is useful for intermittent and asynchronous communication; the second is a messaging interface called topics, which is ideal for information sent from one publisher to many listeners. Using topics, our program will be able to either listen for or publish messages, and in this demo we will be both sending and receiving image messages.
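
As a minimal illustration of the topic mechanism, here is a hypothetical standalone node that publishes on one string topic and subscribes to another. It is not part of the demo code; the node and topic names are made up for this example.

#include <ros/ros.h>
#include <std_msgs/String.h>

// Called every time a message arrives on the subscribed topic
void chatterCallback (const std_msgs::String::ConstPtr &msg)
{
  ROS_INFO("I heard: %s", msg->data.c_str());
}

int main (int argc, char **argv)
{
  ros::init(argc, argv, "topic_sketch");
  ros::NodeHandle nh;
  // Publish string messages on one topic ...
  ros::Publisher pub = nh.advertise<std_msgs::String>("chatter_out", 10);
  // ... and listen for string messages on another
  ros::Subscriber sub = nh.subscribe("chatter_in", 10, chatterCallback);
  // Send a single message, then hand control to ROS so callbacks get delivered
  std_msgs::String hello;
  hello.data = "hello";
  pub.publish(hello);
  ros::spin();
  return 0;
}

Any number of nodes can subscribe to the same topic without the publisher knowing about them, which is what makes topics a good fit for streaming image messages in this demo.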

This code is in C++ but there are many other options available for both ROS and OpenCV. This code is somewhat based on this tutorial.

Now, let's get started with the code.

Assuming you completed step 1 of n, you should already have everything downloaded and you can follow along by looking at the source in ihr_demo_hsv.cpp.

cd ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/src


Here the main function handles initializing our objects.
int
main (int argc, char **argv)
{
  // Initialize ROS Node
  ros::init (argc, argv, "ihr_demo1");
  // Start node and create a Node Handle
  ros::NodeHandle nh;
  // Instantiate Demo
  Demo d (nh);
  // Spin ...
  ros::spin ();
  // ... until done
  return 0;
}

All of the code we care about is stored in the Demo object. The protected portion of Demo has all of the variables and objects we will need to process images. CvBridge uses the older IplImage format for connecting ROS to OpenCV. However, the demo will be using the newer cv::Mat format internally to manipulate images. This will make things easier in the future. Looking at the ROS C++ Style Guide will explain why things are named the way they are.

class Demo
{

protected:
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  sensor_msgs::CvBridge bridge_;
  image_transport::Publisher image_pub_;
  cv::Mat img_in_;
  cv::Mat img_hsv_;
  IplImage *cv_input_;
  IplImage cv_output_;

...

};

The public part of the Demo object has the constructor and an image callback that is called each time a new image arrives.
public:

    Demo (ros::NodeHandle & nh):nh_ (nh), it_ (nh_)
  {
    // Advertise image messages to a topic
    image_pub_ = it_.advertise ("/demo/output_image", 1);
    // Listen for image messages on a topic and setup callback
    image_sub_ = it_.subscribe ("/usb_cam/image_raw", 1, &Demo::imageCallback, this);
    // Open HighGUI Window
    cv::namedWindow ("hsv", 1);
  }

  void imageCallback (const sensor_msgs::ImageConstPtr & msg_ptr)
  {
    // Convert ROS Input Image Message to IplImage
    try
    {
      cv_input_ = bridge_.imgMsgToCv (msg_ptr, "bgr8");
    }
    catch (sensor_msgs::CvBridgeException error)
    {
      ROS_ERROR ("CvBridge Input Error");
    }

    // Convert IplImage to cv::Mat
    img_in_ = cv::Mat (cv_input_).clone ();
    // Convert Input image from BGR to HSV
    cv::cvtColor (img_in_, img_hsv_, CV_BGR2HSV);
    // Display HSV Image in HighGUI window
    cv::imshow ("hsv", img_hsv_);

    // Needed to  keep the HighGUI window open
    cv::waitKey (3);

    // Convert cv::Mat to IplImage
    cv_output_ = img_hsv_;
    // Convert IplImage to ROS Output Image Message and Publish
    try
    {
      image_pub_.publish (bridge_.cvToImgMsg (&cv_output_, "bgr8"));
    }
    catch (sensor_msgs::CvBridgeException error)
    {
      ROS_ERROR ("CvBridge Output error");
    }
  }

Building from source is easy. First install the dependencies.

cd ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv
rosdep install ihr_opencv


After all of the dependencies like OpenCV are installed, use rosmake to compile the package.

rosmake ihr_opencv


To launch the HSV conversion demo using the prerecorded .bag file:

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/demo_hsv.launch


If your camera is working, you can try that as an input.

roslaunch ~/ros/stacks/iheart-ros-pkg/ihr_demos/ihr_opencv/launch/live_hsv.launch



Video of Step 2 of n.

Now that the image has been converted, the next step will be to segment the image.