Friday, September 30, 2011
More Quadruped Abuse
Another Boston Dynamics demonstration. The adaptation to sideways perturbations is pretty impressive. Also, this one sounds a lot quieter than previous demos, although that may be because the compressor is located off-board and connected via the harness.
Wednesday, September 28, 2011
Tuesday, September 27, 2011
Sunday, September 25, 2011
IROS 2011 Tutorial: Motion Planning for Real Robots
For those of you who are interested in OMPL and motion planning in ROS, you can play along at home with the home version of the game [ No real PR2s :(, only simulations ]
Also, Motoman works with ROS!?!?
Labels:
arm,
manipulator,
motion planning,
ompl
Reporting in from IROS
This week we will be reporting in from IROS.
OMPL seems to have gotten to the point where it is useful for real robots. More importantly it looks like it is ready for integration on your robot. More after the workshop.
Also, Interactive Markers are pretty awesome.
Labels:
arm,
manipulator,
motion planning,
ompl
Wednesday, September 21, 2011
Creating a bootable USB drive for use with ROS
And now for something altogether more practical. Creating a bootable USB drive is likely to be a common task for anyone using ROS on a mobile robot. Hard drive installations will also work, but they contain moving parts which may be vulnerable to failure on a platform subject to vibrations and jolts. USB drives are cheap and readily available, difficult to destroy, consume little energy, are easy to back up and take up minimal physical space inside the robot. Once you have your system set up on a USB drive you can easily create a copy on a second drive, so that you have a "stable" version for quick demonstration purposes and a separate development version for all other occasions.
These instructions assume that you're installing some version of Ubuntu, and in this case I was using 11.04. For ROS installation, Ubuntu seems to be the most supported distro. Other distros may work, but sometimes have issues with dependencies when using the --rosdep-install option. It's also a good idea not to use the very latest version, since it typically takes a while before new versions are fully supported by ROS.
One thing to be aware of from the outset is that not all USB flash drives are created equal. There are a variety of storage sizes, but more importantly there are also different access speeds. Try to use a drive with the highest access speed possible in order to avoid excessive latencies. I used a 16GB SanDisk Cruzer Blade, which seems quite adequate for the task.
1. Download the latest version of Unetbootin.
2. Format a USB drive as fat32. You can do this easily with Gparted.
3. Using Unetbootin, create a USB install, specifying a non-zero amount of persistent storage space.
4. Using Gparted unmount the USB drive, then resize the partition so that it's close in size to the amount of data installed. Create a new partition, formatted as ext4, called casper-rw.
Why create this extra partition? Without it the maximum amount of persistent storage is only a couple of gigabytes. ROS is quite a large installation and without the additional partition it won't fit onto the drive.
5. Boot from the USB drive.
6. Click on the wireless network icon, select "wireless", pick the current connection then click "Edit". Make sure that "connect automatically" and "available to all users" are selected.
This assumes that you're on an IPv4 network. Click on IPv4 settings, select "manual" as the method then click on "Add". Enter a fixed IP address for the robot, with a netmask of 255.255.255.0 and with the gateway being the IP address of the wireless router. Enter the DNS server IP addresses, which are usually visible from the router's administration page. "Search domains" can be left blank.
Click on "Save".
It makes sense to have the robot on a fixed local IP address, since you can then assign a name to it within the /etc/hosts file and therefore be able to connect to it in a reliable way. It makes even more sense if you have multiple robots.
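For example, if you gave the robot the fixed address 192.168.1.50 (a made-up example address; use whatever you entered above), a line like the following in /etc/hosts on your workstation lets you reach the robot by name:
192.168.1.50    mybot
After that an ssh login is simply:
ssh <username>@mybot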
7. Run synaptic and under "repositories" make sure that other software sources (community, multiverse) are enabled.
8. Open a terminal and enter the following:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential openssh-server
Install whatever command line text editor you prefer, such as:
sudo apt-get install emacs
This will be used later during ssh logins. You may also want to install version control software.
sudo apt-get install bzr subversion git mercurial
9. Ensure that you're not prompted to press Enter when shutting down or restarting.
sudo emacs /cdrom/syslinux.cfg
Then add the options noeject and noprompt to the append entries. For example:
append initrd=/ubninit file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash noeject noprompt -- persistent
10. Create a user that will be used for ssh logins.
sudo adduser <username>
11. Add this user to the sudoers file.
sudo visudo -f /etc/sudoers
Under the line:
root ALL=(ALL:ALL) ALL
add the line, substituting the username you created earlier:
<username> ALL=(ALL:ALL) ALL
Then save with CTRL-X then Y
12. Under "Users and Groups" select the user you made earlier, then click on "Advanced" and make sure that they can use video and audio devices.
13. You may wish to enable remote desktop sharing so that you can use a VNC client to view the robot desktop. If so make sure that "confirm each access" and "enter password" are unticked.
14. Follow the instructions for installing ROS from
http://www.ros.org/wiki/electric/Installation/Ubuntu
This may take a while, since the full installation is quite a large download.
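For reference, on Ubuntu 11.04 the installation should boil down to adding the ROS package repository and installing the desktop-full variant, roughly as follows (the wiki page above is authoritative if these commands have changed):
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu natty main" > /etc/apt/sources.list.d/ros-latest.list'
wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get install ros-electric-desktop-full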
15. If you're using a Kinect sensor then install the drivers.
http://www.ros.org/wiki/openni_kinect
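On Electric the OpenNI drivers are available as a package, so installation should be as simple as something like:
sudo apt-get install ros-electric-openni-kinect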
16. You may wish to update the package path within:
/opt/ros/${ROS_VERSION_NAME}/setup.sh
in order to include any additional packages which you are using. It's also a good idea to keep a copy of the setup.sh file elsewhere, since subsequent updates can overwrite it.
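For example, assuming the Electric version name, you could keep a backup and then add a line like the following to the end of setup.sh or your own shell profile (~/ros_workspace here is only an example location; substitute your own package directory):
cp /opt/ros/electric/setup.sh ~/ros_setup.sh.backup
export ROS_PACKAGE_PATH=~/ros_workspace:$ROS_PACKAGE_PATH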
17. When you've tweaked and tested the installation to your satisfaction you can easily create a backup of it using the dd command line program.
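For example, assuming the source drive appears as /dev/sdb and a blank second drive as /dev/sdc (double-check both with sudo fdisk -l first, since a mistyped device name can wipe the wrong disk):
sudo dd if=/dev/sdb of=/dev/sdc bs=4M
sync
You can also dump the drive to an image file for safekeeping, and restore it later by swapping the if and of arguments:
sudo dd if=/dev/sdb of=robot-usb-backup.img bs=4M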
Monday, September 19, 2011
Robot Roundup
Here's an amusing video of cattle being rounded up using a radio controlled vehicle. It probably wouldn't take very much more development to turn this into a telerobot with a camera, GPS and speaker such that cattle could be herded with very little manual effort, and all supervised via the internet from the comfort of your farmstead.
Saturday, September 17, 2011
Hacker Space Program
I Heart Engineering is pleased to announce the launch of our Hacker Space Program. We are now offering special discounts and monthly giveaways. Also, if your Hacker Space is near New York City we may be able to visit and give a demo of the TurtleBot and the ROS robot operating system. If you are interested in joining, send an email to the space program manager.
Labels:
hacking,
I Heart Engineering,
PR
Thursday, September 15, 2011
TurtleBot Projects at Make: Projects
Over at Make: Projects, some TurtleBot-related projects have been added recently. These might be fun weekend projects for you and your local hackerspace, or a starting point for developing your own ideas.
One of the projects we contributed shows how to make your own 2" standoffs with simple hand tools. If there are other tutorials you would like to see, or parts you are having trouble making, let us know.
Also, we should have a kit of parts for the arm available in the store next week after the Maker Faire.
Labels:
I Heart Engineering,
making,
PR,
TurtleBot,
tutorials
Wednesday, September 14, 2011
Printing a Future Economy
The RepRap project has apparently now reached a level of 3D printing quality which is comparable to, or better than, far more expensive commercially available rapid prototypers.
These devices are really still in their infancy, and the rate of printing is slow, but it's the beginning of a new type of manufacturing process which doesn't depend upon large amounts of capital investment in equipment. Things will start to get even more interesting once it's also possible to print out circuit boards and solar panels.
With new 3D scanning processes such as that which exists within RoboEarth it should also be possible to rapidly produce models which can be printed, and this opens up a new front in the ongoing controversies over copyright. Probably the solution is for original 3D models to be distributed under something like a Creative Commons license, such that there is a growing public domain of printable objects.
Beyond Digital Artisanship
Douglas Rushkoff has a recent article which seems to be doing the rounds. This is very much the standard narrative about automation and its ultimate direction - namely that we all become digital artisans exchanging informational artifacts with each other.
In a way this narrative has a lot of truth to it, because we have already witnessed the emergence of a post-scarcity economy in the world of information and software. Information used to be costly to produce and distribute, but the advent of the World Wide Web changed all of that. In the domain of information, a large fraction of the world's population is now in ownership of the means of production, and it's reasonable to assume that in the near future everyone will possess equipment which means that they can become an information producer or a digital artisan if they so desire.
My main criticism of the Rushkoff narrative is that it doesn't go far enough. There is another revolution waiting to happen beyond the world of bits within the realm of atoms and molecules. This is the democratization of production of physical goods and services using advanced automation, similar to that written about in Kevin Carson's Low-Overhead Manifesto. It's dependent upon the prior information revolution, but is likely to be highly disruptive to the old and highly centralized factory model of production with its reliance upon cheap human labor, economies of scale and the problematic relations of political power which were propped up by that sort of economic arrangement.
What I think we're about to enter is an era of robotics enabled Shanzhaiism, where the new entrepreneurs are people with a small factory in their loft or garden shed and who may be either subsistence producers or producing on a small scale for the local community or extended family group. This isn't a new idea, since Isaac Asimov imagined these possibilities in The Bicentennial Man. In that story the robot becomes a producer of furniture which is then traded to support the family. Unlike the subsistence farmers or peasant farmers of previous centuries the robotic Shanzhai will be able to produce consumable or trade-able artifacts of arbitrary technological sophistication. Indeed the automation of small scale gardening or farming could be a major turning point in history which leads to the elimination of food shortages, and which paves the way towards a sustainable life for human settlements beyond the Earth's biosphere.
Tuesday, September 13, 2011
ROS Documentation Contest: Round Three
We have a winner! Round three of the ROS Documentation contest goes to Srećko Jurić-Kavelj from the University of Zagreb.
The ROSARIA package allows you to connect an Adept MobileRobots (formerly ActivMedia) Pioneer to ROS using the ARIA C++ interface. They also put together a tutorial on connecting it to your iPhone and driving the robot using the phone's accelerometer.
Srećko has also developed and documented an anaglyph node for ROS that accepts a pair of stereo images and outputs a red/cyan image. While the above video was generated using ROS, I am pretty sure the dancers are not running ROS, yet.
Congrats on winning round three, and we are now accepting entries for round four.
Also, if you are interested in the future of ROS, today is the day to get involved by participating in a ROS special interest group (SIG) for planning the features of the next release.
Labels:
contest,
documentation,
ROS
Monday, September 12, 2011
TurtleBot Party
Over at Clearpath Robotics they have put together a great video showcasing their TurtleBot's ability to party.
In Stock: Zoom Lens for Kinect
Previously we have discussed some of the limitations of using the current Kinect sensor for robotics. Now, the Nyko Zoom lens for the Kinect provides a possible solution to two of those issues.
First, the lens increases the field of view, and the wider view should improve performance for tasks such as cloud/scan matching when building maps. Second, the zoom lens decreases the minimum distance required for the depth sensor to function: without the lens we were able to get depth information for a coffee mug about 52cm away, and the minimum distance decreased to about 36cm with the zoom lens attached.
The packaging claims that it is plug and play and that no software changes are necessary. This may be true for video games, but for robotics applications a manual calibration will need to be performed. We should have more information on calibration requirements in the near future.
The above image shows the point cloud data from the Kinect with the factory calibration.
This image shows the effect of the zoom lens without a calibration.
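We have not settled on a full procedure yet, but for the RGB camera a manual calibration would presumably start with the standard ROS checkerboard routine, along these lines (the topic names assume the openni_kinect drivers, and the checkerboard dimensions are only an example):
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/camera/rgb/image_color camera:=/camera/rgb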
If the calibration works as well as we hope, these should be ideal for improving performance of manipulation tasks. They should also provide a significant boost to 3D recognition rates and scan quality.
We will put up more information as it becomes available. In the meantime, if you want to try it they are now in stock.
Xtion in Action
Here are some examples of mounting an ASUS Xtion Pro Live on a TurtleBot.
We had some MDF left over from a previous project, so we used a coping saw to cut out a plate that roughly matched the distance between the Kinect mounting holes on the TurtleBot plates and had a cutout for the bottom of the Xtion. We happened to have some 3" spacers tapped #6-32 on both sides; alternatives include 3D printing parts or using wood screws and dowels.
Thin double stick tape seems to hold pretty well but you could also use something like hot glue.
Here is the view of it assembled from the back.
From the front it looks like we might need to paint both it and our workbench. CAD files for making this yourself should be uploaded in the next day or so.
If you want to mount the Xtion Pro Live on a tripod or servo, Michael Ferguson has the parts you need over on Thingiverse.
ASUS Xtion PRO LIVE Unboxing
We have been waiting for a while for the Xtion Pro Live to be released. ASUS previously released the Xtion Pro depth sensor without an RGB camera, and we have been hoping that they would release a version of the Xtion that integrates an RGB camera and hopefully provides hardware time synchronization and registration between the depth image and the color image. Other features on our wish list for robotics would be a wider field of view, a shorter minimum range for manipulation tasks, a longer maximum range for navigation tasks, or both, by using either multiple laser projectors or multiple IR cameras.
The ASUS Xtion Pro Live definitely seems to be designed for developers instead of consumers. Instead of sweeping curves and Torx security screws, the Xtion Pro Live is easy to disassemble with Phillips head screws and has several flat surfaces that can be used to mount the sensor. The price of $199 is a bit more than a Kinect, however the Xtion Pro Live has several advantages for robotics and computer vision applications.
The Xtion Pro Live is significantly smaller than the Kinect, and the placement of the lenses is more symmetric, which gives it a less lopsided appearance and makes it much more usable for humanoids. Robots on a diet will be pleased to know that the Xtion Pro Live weighs about 170 grams including the USB cable. Perhaps best of all, the sensor does not require an external power supply, making it much more portable and a great choice for mapping and art projects where power isn't always available.
If necessary, the base is easy to remove and replace.
The two-piece case opens easily and reveals a cast metal frame for cooling and precision alignment of the cameras and the projector. If you were careful I bet you could tap holes directly into the frame if necessary.
The USB cable is attached with a connector, for easy modification in case you need a Mini or Micro USB connector instead.
So far in our testing we have not been able to determine conclusively whether the Xtion Pro Live has hardware registration or synchronization. We have a request in with ASUS for a follow-up and will let you know as soon as we have more information.
Overall it seems well designed and may be worth the extra cost depending on your budget and application. The ASUS Xtion Pro Live is currently only available direct from ASUS. Tell them we sent you and let them know if you are using the sensor for robotics.
Labels:
3d sensor,
dissection,
Kinect,
Xtion
Guest Blogger: Bob Mottram
Give a warm welcome to our new guest blogger, Bob Mottram. Bob has worked on industrial automation, computer vision and artificial intelligence. He has built several great robots, including a mobile robot that runs ROS named GROK2.
In addition to building robots, he has also developed several interesting open source software packages such as Roadcv, a computer vision system for autonomous driving.
While he is here, he may have a chance to share his views on the Singularity and how it might not end in super intelligent computers.
Bob can be contacted directly for consulting in the United Kingdom.
Labels:
meta
Sunday, September 11, 2011
OpenCog
Here's a video in which Ben Goertzel talks about the OpenCog system. This is a project which I've been keeping an eye on for the last few years, although so far I haven't gone into the details of its use at anything other than an overview level.
I should say at the outset that I'm not entirely on board with the Singularity idea - a more detailed excoriation of which could be a topic for another time - but thankfully this presentation doesn't get bogged down with such meanderings.
The thing I like about Goertzel is that he seems to be a practitioner. His ideas are somewhat interesting and laid out in some detail in his various writings, such as The Hidden Pattern. Roughly halfway through this talk he gives what is probably the clearest and most amusing explanation of the MOSES algorithm which I've yet heard.
There are quite a few points on which I'd agree with him: that not all behavior is goal-oriented (often systems are merely interacting, or "dancing", under dynamics whose attractors may not reside within the agents themselves), that continuous self-modeling will be very important for intelligent systems (for example, detecting damage and devising compensatory strategies, or maintaining an ongoing narrative about the self), and that the economic attention allocation (or attention as a commodity) idea does make some sense. All complex machines, be they biological or otherwise, have finite energy and material resources, and the problem of intelligence can be framed as one of juggling these so as to continue functioning and thriving in whatever environment you happen to occupy.
Although Ben Goertzel is not primarily a roboticist, he has done some robotics related work with OpenCog in recent times, using the Nao robot. Towards the end of the talk he laments how difficult and contrived this often is, and I think that this is because a gap still remains between the sorts of abstractions which can be produced from sensor data using systems such as OpenCV or PCL and the higher level probabilistic reasoning which systems like OpenCog can facilitate. It's still difficult to seamlessly ground high level concepts, or to have such concepts emerge ultimately from sensor data.
However, I think there's good cause to believe that what is sometimes referred to as "cognitive robotics" may really become possible in the years ahead. If a robot can drive around and construct a model of its environment using some variety of 3D SLAM, then segment the model to extract objects such as tables, chairs and doorways, then this could become the basis for intelligent goal-oriented executive planning, and it's at that level where I think systems such as OpenCog could be genuinely useful and lead to new innovations.
As far as I'm aware there is currently no ROS node to interface with the OpenCog system, so that's a project which someone might wish to take on in future.
Saturday, September 10, 2011
Kinect 3D Scanner
The 3D scan 2.0 project from the Technical University Bergakademie Freiberg looks like an easy way to scan larger objects.
Imagine the results you could get if only the Kinect could be used at a closer range.
Labels:
3D scanner,
3d sensor,
Kinect,
software
RoboEarth ROS Stack
There is plenty of exciting news recently in the field of reality virtualization. It is now clear that the future is not virtual reality but an augmented reality where humans can see the world with data overlays and robots can see a virtualized view of the world.
RoboEarth has released a ROS stack that provides the ability to recognize objects downloaded from RoboEarth and an object recorder to re-upload new objects to the cloud.
This is very exciting because it starts building the infrastructure necessary for robots to recognize objects they have never seen before. This could be thought of as the beginnings of a shared experience and memory for mobile robots.
However, based on our previous thoughts, we unfortunately feel that lawsuits are inevitable in the United States.
Labels:
augmented reality,
RoboEarth,
ROS,
Shared memories
Friday, September 9, 2011
RViz and ROS running on OSX
Some good news for all of you out there struggling with one mouse button: William Woodall, who previously won the ROS Doc contest, has managed to get ROS and RViz working under OSX using the Homebrew package manager.
Speaking of the ROS Documentation Contest, entries for round 3 are due tomorrow, September 10th, at 6pm EST. This is your last chance for Diamondback T-Shirts.
Labels:
documentation,
OSX,
ROS,
RViz
Co-Op Manufacturing Safety
One of the bigger concerns about the National Robotics Initiative, and co-operative manufacturing in the US, is not technological. It's the problem of building the legal, safety and insurance infrastructure necessary to free manufacturing robots from their cages.
The insurance problems can probably be solved with enough money, but the legal problems will mean some miserable growing pains before the industry can deploy whatever technological products the NRI produces.
In a bit of recent good news, however, the RIA has announced that there is now a new ISO standard for industrial robot safety. It also reintroduces 'man-in-the-loop' into the manufacturing process. This, along with better risk assessment, will help ensure that the technology developed can actually be used, and should make the inevitable lawsuits slightly less painful.
I wish everyone pursuing an NRI grant the best of luck, and I hope you can take the time to learn more about these important safety standards.
Labels:
co-op mode,
industrial,
manufacturing,
NRI,
safety