Here is an initial idea for mounting a camera onto the TurtleBot. The button user interface has a cross piece to which it's quite easy to mount things, and I've used a couple of pieces of right-angle plastic, with a Minoru stereo webcam positioned on a piece of hardboard at the top.
The camera is just fixed in place, and I could add some extra packing behind it if it needs to be angled downwards. My thinking here is that although the obvious thing to do is to put it on a servo tilt mechanism, I'll start with the simplest possible system and then see what the issues are. Tilting would probably have advantages in terms of being able to look down at the floor or to view bottles being carried (there could be some additional computer vision to count the bottles and determine whether they have lids).
The height of the camera is intended to be suitable for viewing desk surfaces or window ledges. It's not ideally suited for interacting with people - unless they're sitting down - but that too could be changed. It would be fairly easy to add a couple of extra right-angle plastic pieces with some pre-drilled holes so that the height of the camera could be adjusted up or down.
Why not just mount the Kinect sensor higher? That would also be a possibility, but the Kinect is a large and quite heavy sensor, and mounting it in a higher position could introduce stability issues which would also have consequences for the accuracy of mapping and localisation. The webcam is small and lightweight, so it doesn't need a very industrial stand to support it and barely alters the robot's centre of gravity.
Stereo vision with webcams has the well known problem of lack of camera synchronization, but for tasks such as inspecting the surface of a desktop this isn't necessarily a major impediment. In the sorts of scenario I'm thinking of, you could schedule the robot to visit certain places; the webcam would then turn on, grab some images and turn off again (saving electrical and processing power), on the assumption that not much in the field of view is likely to change during the few milliseconds when frames are being grabbed and the robot isn't in motion. Even if the onboard computer were quite slow, sitting and processing some stereo images for a few seconds would be no big deal.
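As a rough sketch of what that offline processing could look like, here is naive SAD block matching in pure numpy on an already-rectified image pair. Everything here (window size, disparity range, the synthetic test pair) is illustrative; a real system would use calibrated, rectified camera images and a proper stereo matcher such as OpenCV's StereoBM rather than this slow double loop.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=5):
    """Naive block matching: for each pixel in the left image, find the
    horizontal shift d in [0, max_disp] that minimises the sum of
    absolute differences (SAD) over a win x win window in the right
    image. Assumes rectified greyscale images of the same size."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right view is the left view shifted by a
# known disparity of 4 pixels, using random texture so matches are unambiguous.
np.random.seed(0)
h, w, d_true = 30, 60, 4
base = np.random.randint(0, 256, (h, w + d_true), dtype=np.uint8)
left, right = base[:, :w], base[:, d_true:]
disp = sad_disparity(left, right)
print(disp[h // 2, w // 2])  # recovers the true disparity, 4
```

Since the robot is stationary while frames are grabbed, a few seconds of this kind of brute-force matching per visit would be acceptable even on a slow onboard computer.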
Friday, July 20, 2012
2 comments:
I played with the Minoru camera a few years back.
2 major issues that limit its suitability on mobile robots:
1. Images are not synchronised, so you cannot get reliable depth information on the move.
2. Images are still MJPEG rather than raw data, so each image is filled with speckle that affects any edge detection while the robot is moving.
Lack of camera synchronisation is a problem if the camera is moving. What happens is that the whole point cloud appears to wobble.
For looking at mostly stationary scenes though - like a table surface - it's quite good at close range. The depth resolution isn't as good as the Kinect, but it's still better than monocular vision.
The image format is only a problem if your correspondence algorithm depends upon edge detection. In practice the camera lens being used makes a bigger difference than the image format.