I should say at the outset that I'm not entirely on board with the Singularity idea - a more detailed excoriation of which could be a topic for another time - but thankfully this presentation doesn't get bogged down with such meanderings.
The thing I like about Goertzel is that he seems to be a practitioner. His ideas are somewhat interesting and are laid out in some detail in his various writings, such as The Hidden Pattern. Roughly halfway through this talk he gives what is probably the clearest and most amusing explanation of the MOSES algorithm that I've yet heard.
There are quite a few points on which I'd agree with him: that not all behavior is goal-oriented (often systems are merely interacting, or "dancing", under dynamics whose attractors may not reside within the agents themselves); that continuous self-modeling will be very important for intelligent systems (for example, detecting damage and devising compensatory strategies, or maintaining an ongoing narrative about the self); and that the economic attention allocation (or attention as a commodity) idea makes some sense. All complex machines, be they biological or otherwise, have finite energy and material resources, and the problem of intelligence can be framed as one of juggling these so as to continue functioning and thriving in whatever environment you happen to occupy.
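To make the attention-as-commodity idea concrete, here's a toy sketch, mine rather than OpenCog's actual ECAN code, in which atoms pay rent to stay in the attentional focus and earn wages when they prove useful:

```python
# Toy sketch of "attention as a commodity": not OpenCog's actual ECAN
# implementation, just an illustration of rent-and-wages attention economics.
import random

class Atom:
    def __init__(self, name, sti=0.0):
        self.name = name
        self.sti = sti  # short-term importance, the "currency"

def allocate_attention(atoms, stimulus, fund, rent=1.0, wage=10.0):
    """Charge every atom rent; pay wages to atoms that proved useful."""
    for a in atoms:
        a.sti -= rent                      # everyone pays to stay in focus
        fund += rent
    for a in stimulus:                     # atoms involved in recent activity
        pay = min(wage, fund)
        a.sti += pay
        fund -= pay
    # the attentional focus is whatever the economy can currently afford
    focus = sorted(atoms, key=lambda a: a.sti, reverse=True)[:3]
    return focus, fund

atoms = [Atom(n) for n in ("cup", "table", "door", "wall", "window")]
fund = 100.0
for step in range(5):
    useful = random.sample(atoms, 2)       # stand-in for "recently useful"
    focus, fund = allocate_attention(atoms, useful, fund)
print([a.name for a in focus])
```

Starve the fund and the focus shrinks; that's the resource-juggling framing in miniature.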
Although Ben Goertzel is not primarily a roboticist, he has done some robotics-related work with OpenCog in recent times, using the Nao robot. Towards the end of the talk he laments how difficult and contrived this often is, and I think that this is because a gap still remains between the sorts of abstractions which can be produced from sensor data using systems such as OpenCV or PCL and the higher-level probabilistic reasoning which systems like OpenCog can facilitate. It's still difficult to seamlessly ground high-level concepts, or to have such concepts emerge ultimately from sensor data.
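To illustrate the gap, here's a minimal sketch of turning geometry into the sort of symbolic predicates a reasoner could consume. The segment data is mocked up to stand in for PCL output, and the predicate and thresholds are invented for illustration:

```python
# A minimal sketch of the grounding problem: turning geometric segments
# (the kind of output PCL might give you) into symbolic assertions that a
# reasoner could consume. The segment data here is mocked; names are mine.

# (label, (x, y, z) centroid, bounding-box height) - pretend PCL output
segments = [
    ("table", (1.0, 0.5, 0.4), 0.05),
    ("cup",   (1.0, 0.5, 0.5), 0.10),
    ("chair", (2.0, 0.0, 0.3), 0.50),
]

def near(p, q, tol=0.3):
    """Crude horizontal proximity test on the xy plane."""
    return sum((a - b) ** 2 for a, b in zip(p[:2], q[:2])) ** 0.5 < tol

facts = []
for name_a, pos_a, _ in segments:
    for name_b, pos_b, _ in segments:
        if name_a != name_b and near(pos_a, pos_b) and pos_a[2] > pos_b[2]:
            facts.append(("on", name_a, name_b))  # crude spatial predicate

print(facts)  # [('on', 'cup', 'table')] - symbols a reasoner could chew on
```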
However, I think there's good cause to believe that what is sometimes referred to as "cognitive robotics" may really become possible in the years ahead. If a robot can drive around and construct a model of its environment using some variety of 3D SLAM, then segment that model to extract objects such as tables, chairs and doorways, this could become the basis for intelligent goal-oriented executive planning, and it's at that level that I think systems such as OpenCog could be genuinely useful and lead to new innovations.
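As a sketch of what that executive level might look like: once SLAM and segmentation have produced a semantic map, even a trivial graph search yields goal-oriented plans. The map below is invented:

```python
# A sketch of the "executive planning" level: once SLAM + segmentation has
# produced a semantic map, a simple graph search gives goal-oriented
# behaviour. The map below is invented for illustration.
from collections import deque

# adjacency between recognised places/objects, as segmentation might yield
semantic_map = {
    "start":   ["hallway"],
    "hallway": ["start", "doorway", "chair"],
    "doorway": ["hallway", "office"],
    "office":  ["doorway", "table"],
    "chair":   [], "table": [],
}

def plan(graph, start, goal):
    """Breadth-first search: shortest sequence of places to traverse."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan(semantic_map, "start", "table"))
# ['start', 'hallway', 'doorway', 'office', 'table']
```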
As far as I'm aware there is currently no ROS node to interface with the OpenCog system, so that's a project which someone might wish to take on in future.
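For anyone tempted, a hypothetical sketch of such a bridge node follows. rospy is the real ROS Python API, but the CogServer address, port and the Atomese string being sent are my assumptions about how one might wire this up, not an existing interface:

```python
#!/usr/bin/env python
# Hypothetical sketch of a ROS-to-OpenCog bridge node. rospy is real ROS
# API; the CogServer port and the Atomese sent over the socket are my
# assumptions, not an existing interface.
import socket
import rospy
from std_msgs.msg import String

def on_percept(msg, sock):
    # forward each perceived object name to the CogServer's command shell
    atomese = '(ConceptNode "%s")\n' % msg.data
    sock.sendall(atomese.encode())

if __name__ == "__main__":
    rospy.init_node("opencog_bridge")
    sock = socket.create_connection(("localhost", 17001))  # assumed port
    rospy.Subscriber("/perceived_objects", String,
                     lambda m: on_percept(m, sock))
    rospy.spin()
```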
5 comments:
I'm always happy to see grassroots efforts... but why not join forces with the likes of KnowRob and/or CRAM?
http://ias.in.tum.de/research/cram
All I can say is that this OpenCog system is really hard for me and I really hate it. But anyway, thanks for the video you've shared here; at least I can say that I gained some knowledge from this blog of yours.
Motters... some students in Xiamen got partway through making a ROS interface for OpenCog but didn't finish... I agree it's an important project...
I would be interested in seeing how one could make OpenCog control low-level motor torque commands, and how it could figure out a time series of motor torques such that robot locomotion is attained while avoiding problems such as excessive foot wrench forces (which could result in feet slipping, ZMP problems - tipping, twisting, etc.).
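For concreteness, the ZMP mentioned above can be computed from the standard point-mass formula; the masses and accelerations below are invented:

```python
# Zero-moment point along x for a set of point masses - the quantity the
# "ZMP problems" above refer to. Standard formula; numbers are made up.
G = 9.81

def zmp_x(bodies):
    """bodies: list of (mass, x, z, x_accel, z_accel); returns ZMP x."""
    num = sum(m * ((zdd + G) * x - xdd * z) for m, x, z, xdd, zdd in bodies)
    den = sum(m * (zdd + G) for m, x, z, xdd, zdd in bodies)
    return num / den

# torso, swing leg, arm - if the ZMP leaves the support polygon, you tip
bodies = [(40.0,  0.00, 0.90,  0.2, 0.0),
          ( 8.0,  0.15, 0.40,  1.5, 0.1),
          ( 4.0, -0.05, 1.20, -0.8, 0.0)]
print("ZMP x = %.3f m" % zmp_x(bodies))
```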
My guess is that it's impractical to control low-level torque commands directly with OpenCog, especially given the 1 msec or lower sample times that are required. Servo position commands parameterized by trajectory polynomials are more practical, I think. Yet sometimes it would be beneficial to have a high-level reasoning system modify low-level motion or torque commands, because the motion of each robotic limb is not completely independent of the other limbs. Knowledge of those relationships can be used to improve position tracking of the limbs or to increase the foot stability of a biped. For example, moving the swing leg of a biped affects the ZMP position at the support leg and thus the stability of the robot. Swinging the arms affects the yaw torque at the support foot, and thus arm motion can affect robot stability or performance.
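As a sketch of the trajectory-polynomial idea: the high-level system need only emit a few polynomial parameters, while a fast low-level loop evaluates them at the 1 ms tick. A minimal quintic profile with zero boundary velocity and acceleration might look like this:

```python
# "Servo position commands parameterized by trajectory polynomials": a
# minimal quintic that hits given start/end positions with zero velocity
# and acceleration at both ends, evaluated by a fast low-level loop.
def quintic(q0, q1, T):
    """Return q(t) interpolating q0 -> q1 over [0, T] with zero boundary
    velocity/acceleration (the usual minimum-jerk-style profile)."""
    d = q1 - q0
    def q(t):
        s = t / T
        return q0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return q

knee = quintic(0.0, 0.8, T=0.5)                    # radians, half a second
setpoints = [knee(i * 0.001) for i in range(501)]  # 1 ms servo ticks
print(setpoints[0], setpoints[250], setpoints[500])  # 0.0, 0.4, 0.8
```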
I would be interested to know how OpenCog could be made to recognize and learn to control such relationships in order to improve robot mobility.
It seems their current work on DeSTIN is trying to learn motion sub-tasks to achieve larger motion goals. It seems promising. I look forward to learning more and perhaps even contributing to the project.
Once you get into the problems of multi-scale adaptation for a biped, you arrive at an architecture similar to the viable system model.
https://secure.wikimedia.org/wikipedia/en/wiki/Viable_System_Model
In that situation it would be helpful to have the system build a self-model, which it can then use to deal with optimization problems, unforeseen events, or alterations/damage to its design. Adaptation facilitated by a self-model is an interesting area of research, and I think it's something which a system like OpenCog could help with.
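As a sketch of one way a self-model helps with damage: keep a learned forward model of each joint and flag a limb whose behavior drifts from prediction. The linear model and thresholds below are illustrative assumptions:

```python
# One way a self-model helps with damage detection: maintain a learned
# forward model per joint and flag behaviour that drifts from prediction.
# The linear model and thresholds here are illustrative assumptions.
class JointSelfModel:
    def __init__(self, gain=1.0, lr=0.05, alarm=0.2):
        self.gain, self.lr, self.alarm = gain, lr, alarm

    def update(self, command, observed_velocity):
        predicted = self.gain * command
        error = observed_velocity - predicted
        self.gain += self.lr * error * command   # keep the model current
        return abs(error) > self.alarm           # True = possible damage

model = JointSelfModel()
for cmd, obs in [(1.0, 0.98), (1.0, 1.01), (1.0, 0.45)]:  # last: a fault
    print(model.update(cmd, obs))   # False, False, True
```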
A possible project would be to have OpenCog continuously learn a motion model, based upon various sensor data, such that this can then be used for things like MCL (Monte Carlo localization) and gait or manipulation/grasp planning.
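A minimal sketch of what that might look like, with the state layout and noise model assumed for clarity: estimate odometry noise online from residuals and feed it into MCL's prediction step:

```python
# Sketch of a continuously-learned motion model inside MCL's predict step:
# noise parameters are estimated online from odometry residuals. The 1-D
# state and Gaussian noise model are assumptions made for clarity.
import random

class LearnedMotionModel:
    def __init__(self):
        self.residuals = []

    def observe(self, commanded_dx, measured_dx):
        self.residuals.append(measured_dx - commanded_dx)

    def noise_sigma(self):
        if len(self.residuals) < 2:
            return 0.05                      # prior before data arrives
        mean = sum(self.residuals) / len(self.residuals)
        var = sum((r - mean) ** 2 for r in self.residuals) / len(self.residuals)
        return max(var ** 0.5, 1e-3)

def mcl_predict(particles, dx, model):
    """Propagate each particle using the learned noise estimate."""
    sigma = model.noise_sigma()
    return [p + dx + random.gauss(0.0, sigma) for p in particles]

model = LearnedMotionModel()
model.observe(0.10, 0.12)                    # robot overshoots slightly
model.observe(0.10, 0.09)
particles = mcl_predict([0.0] * 5, dx=0.10, model=model)
print(particles)
```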