The Kinect is a great sensor and a game changer for mobile robotics, but it won't make every other sensor obsolete. Though maybe I can use it to convince someone to let me take apart a Swiss Ranger 4000.
Field of View

The field of view is an important consideration because if a sensor can't see enough features, you can't use scan matching/ICP to estimate your change in position. Imagine a giant white room with only one wall, perfectly flat. If you walk along the wall, there is no way to determine your movement except by counting your steps. A robot has wheel slip and encoder errors that build up over time unless there are landmarks that can be used to bound the error.
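The flat-wall degeneracy is easy to demonstrate numerically. The sketch below (my own illustrative setup, not from any particular ICP library) simulates a sensor with a Kinect-like 57.8° field of view looking at one infinite flat wall, slides the sensor half a meter along the wall, and shows that the nearest-neighbour residual ICP would minimise stays tiny anyway:

```python
import numpy as np

def wall_scan(sensor_x, fov_deg=57.8, n_rays=64, wall_y=2.0):
    # Hit points of evenly spaced rays on an infinite wall at y = wall_y,
    # for a sensor at (sensor_x, 0) facing the wall.
    half = np.radians(fov_deg) / 2
    angles = np.linspace(-half, half, n_rays)
    xs = sensor_x + wall_y * np.tan(angles)
    return np.stack([xs, np.full(n_rays, wall_y)], axis=1)

a = wall_scan(0.0)   # scan before moving
b = wall_scan(0.5)   # scan after sliding 0.5 m along the wall

# Only compare points visible in both scans, as ICP's correspondence
# step effectively would.
overlap = b[b[:, 0] <= a[:, 0].max()]
residual = np.mean([np.linalg.norm(p - a, axis=1).min() for p in overlap])
# residual stays a fraction of the ray spacing: the scans match "perfectly"
# even though the sensor moved, so the along-wall translation is unobservable.
```

Every point in the shifted scan lands (nearly) on top of the old scan, so the match cost gives the optimiser no reason to recover the half-meter of motion.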
The depth image on the Kinect has a field of view of 57.8°, whereas the Hokuyo lasers have between 240° and 270°, and the Neato XV-11's LIDAR has a full 360° view.
Besides seeing more features at once, a wider field of view lets the robot efficiently build a map without holes. A robot with a narrower field of view constantly needs to maneuver to fill in the missing pieces of the map.
One solution would be to add more Kinects to increase the field of view. The main problem is the sheer volume of data: 640 x 480 x 30fps x (3 bytes of color + 2 bytes of depth) puts us close to the maximum speed of the USB bus, at least to the point where you are only going to get good performance with one Kinect per bus. My laptop has two USB buses with accessible ports, and you might get four separate buses on a desktop. Even assuming you downsample until the computation is tractable, you still have to deal with the power requirements, unless your robot is powered by a reactor, and with possible interference from reflections.
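A quick back-of-the-envelope check of those numbers (using the byte sizes quoted above, not an exact protocol-level accounting; the ~35 MB/s figure for usable USB 2.0 bandwidth is a rough rule of thumb):

```python
# Raw data rate implied by the stream dimensions in the text.
width, height, fps = 640, 480, 30
bytes_per_pixel = 3 + 2          # 3 bytes color + 2 bytes depth
throughput = width * height * fps * bytes_per_pixel   # bytes per second

usb2_practical = 35e6            # ~35 MB/s of usable USB 2.0 bandwidth (rough)
print(f"{throughput / 1e6:.1f} MB/s needed vs ~{usb2_practical / 1e6:.0f} MB/s usable")
```

At roughly 46 MB/s the naive encoding already exceeds what one USB 2.0 bus realistically delivers, which is why the hardware packs the streams more tightly and why one Kinect per bus is the practical limit.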
Another, simpler approach is to mount the Kinect on a servo and pan it horizontally; however, this reduces the robot to intermittent motion, constantly stopping to scan. Depending on your robot's mechanical design, you may be better off rotating the whole robot.
Range

The minimum range of the Kinect is about 0.6m, and the maximum range is somewhere between 4-5m depending on how much error your application can handle. The Hokuyo URG-04LX-UG01 works from 0.06m to 4m with 1% error, and the UTM-30LX works from 0.1m to 60m. The XV-11 LIDAR does 0.2m to 6m. So the Kinect will have the same problem as the more expensive laser range finders in terms of seeing the end of a long hallway, but the bigger issue is the Kinect's close-range blind spot. It isn't hard to imagine the dangers of a soft, squishy, dynamic obstacle approaching a robot from behind, then standing within 0.6m (about 2ft) while the robot turns and drives forward over the now no-longer-dynamic obstacle. The blind spot can also make maneuvering in confined spaces difficult without a priori knowledge of the environment.
One solution to this would be to add an additional laser projector to the Kinect so that the baseline could be adjustable and the minimum range could be closer. Another approach would be to place the sensor on the robot looking downward at a point high enough to ensure that the dynamic obstacles and their parents were detectable at all times.
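The downward-looking mount works out with a little trigonometry. The numbers below are illustrative choices of mine (1m mast, 30° down-tilt, ~43° vertical field of view), not figures from a real installation:

```python
import math

def ground_coverage(h, tilt_deg, vfov_deg=43.0):
    # Sensor at height h (m), pitched down by tilt_deg, with the given
    # vertical field of view. Returns where its view strip hits the ground.
    near_ang = math.radians(tilt_deg + vfov_deg / 2)   # steepest downward ray
    far_ang = math.radians(tilt_deg - vfov_deg / 2)    # shallowest downward ray
    d_near = h / math.tan(near_ang)                    # closest visible ground point
    slant_near = h / math.sin(near_ang)                # its range from the sensor
    d_far = h / math.tan(far_ang) if far_ang > 0 else float("inf")
    return d_near, slant_near, d_far

d_near, slant, d_far = ground_coverage(h=1.0, tilt_deg=30.0)
# The closest visible ground point is ~0.8 m out and ~1.3 m from the sensor,
# comfortably past the 0.6 m minimum range, so small obstacles at the
# robot's feet stay in view.
```

Raising the sensor or steepening the tilt pulls the near edge of the ground strip closer to the robot while keeping the slant range beyond the 0.6m blind spot.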
The maximum range is also limited by the need to be eye-safe: the Kinect's laser output is spread out as a set of points projected over a large surface area, while more traditional 2D laser scanners direct their entire power output at a single point. Given accurate time-of-flight measurements, the 2D laser scanner will generally be capable of longer range. The other major limit on the Kinect's maximum range is that increasing it requires a larger baseline between the laser projector and the IR imager, which would make the Kinect wider.
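The baseline-range relationship follows from the standard triangulation error model: depth uncertainty grows with the square of the range and shrinks with a longer baseline. The Kinect-ish constants below (~580 px focal length, ~7.5cm projector-camera baseline, 1/8 px disparity resolution) are rough calibration figures circulated by the reverse-engineering community, not official specs:

```python
def depth_error(z, f_px=580.0, baseline_m=0.075, disparity_res_px=0.125):
    # Approximate depth uncertainty of a triangulating sensor at range z (m):
    # error ~ z^2 * (disparity resolution) / (focal length * baseline).
    return (z ** 2) * disparity_res_px / (f_px * baseline_m)

for z in (1.0, 2.0, 4.0):
    print(f"z = {z:.0f} m -> ~{100 * depth_error(z):.1f} cm uncertainty")
# Doubling the baseline halves the error at any given range -- which is
# exactly why a longer-range Kinect would have to be physically wider.
```

With these assumed constants the error is a few millimeters at 1m but several centimeters at 4m, consistent with the 4-5m practical limit mentioned above.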
Environmental

The environmental challenges of using laser scanners outdoors have been fairly well studied, with rain and dust being known problems. Changing lighting conditions, such as clouds passing overhead, can also wreak havoc with some sensors. I would be interested in seeing experimental results using the Kinect outdoors, during the day, in adverse weather. It should, however, work well at night in good weather.
One important question is, can the Kinect see snow?
Computation and Thermodynamics
(updated: 22:28 EST 12/17)

Having designed and built several mobile robots, I can safely say that once all the software is debugged, all the electronics are rewired and actually labeled, and all the mechanisms are lubricated, the biggest remaining problem is thermodynamics. Battery technologies are a set of trade-offs that, in the end, give you some amount of potential energy stored in a fixed volume with a fixed mass.
The robot's operating time is limited by how fast you convert that potential energy into kinetic energy or heat. On the electromechanical side you can recover some energy by using regenerative braking to convert kinetic energy back into potential energy and heat. While researchers have made some progress on reversible computing, there is currently no regenerative computing, so all of the energy spent computing ends up as heat. As we add computational power to the robot, we are effectively decreasing its operating time.
As compute power increases, run time decreases, so to make up for it you can add more batteries. As you add more batteries, the mass of the robot increases, so the motors need more power to accelerate it. So to make up for it you can add more batteries.
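That feedback loop can be captured in a toy model. All the numbers below are made up for illustration; each battery pack adds energy but also mass, and a heavier robot draws more locomotion power:

```python
def runtime_hours(n_batteries, compute_w=50.0):
    # Toy energy budget: run time = stored energy / total power draw.
    wh_per_battery = 100.0   # energy per pack (Wh) -- invented
    kg_per_battery = 1.0     # mass per pack (kg) -- invented
    base_mass_kg = 20.0      # robot without batteries -- invented
    w_per_kg = 5.0           # locomotion power per kg on flat ground -- invented
    mass = base_mass_kg + n_batteries * kg_per_battery
    power = compute_w + mass * w_per_kg
    return n_batteries * wh_per_battery / power

for n in (2, 4, 8):
    print(f"{n} packs -> {runtime_hours(n):.2f} h")
# Run time grows sub-linearly: each extra pack buys less time than the
# last, and a hungrier computer (larger compute_w) shrinks all of them.
```

Doubling the battery count never doubles the run time in this model, and raising the compute budget drags every configuration down, which is the whole trade-off in two lines of arithmetic.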
In terms of testing algorithms and getting research done, offloading computation to a ground station is a valid solution. However, it may not be a practical solution for regular operation since wireless networks may not have complete coverage or reliability.
As you may imagine these problems are worse if your robot is flying. On the upside, cooling the robot is easier.
Results

With a little bit of work the Kinect can provide more than enough sensing capability to put your robot into the big leagues, but traditional laser range finders still have their uses, and at the very least they make some of these problems easier to solve.
References

This paper has important details on the XV-11 LIDAR.
Kinect calibration technical details can be found here.
Submit your ideas or corrections in the comments. Clarifications available upon request.