George Rebane
New advances in walking and evolving intelligence.
Today we’re a lot closer to robots walking naturally, and maybe tomorrow we’ll add the concurrent ability to chew gum. A team of systems engineers heavy in six-degree-of-freedom kinematics and control theory has developed a totally new paradigm for robot locomotion that gets closer to how, say, we humans walk and move things using our hands. They have demonstrated their new control algorithms in virtual robots and other body-type critters (here).
The basic approach is the opposite of how robot controllers have been designed to date. Instead of figuring out, and even optimizing, how each limb segment is controlled in concert with other body parts during something like walking or stair climbing, the Toronto team designed the ‘system’ by specifying what kinds of constrained motions and actions are possible for each body component relative to the position/motion of neighboring components. They then developed a higher-level controller that ‘managed’ the location and motion of the end components – say, the feet in walking – while letting the other body parts do what they must to support the desired placement of those end components in such a way that the entire body stays upright and does not topple over.
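To make the two-level idea concrete, here is a minimal sketch in Python: a low-level layer that knows only a two-link leg’s joint constraints and solves them with closed-form inverse kinematics, and a high-level layer that does nothing but specify where the foot should land. The link lengths, joint limits, and target point are all illustrative assumptions, not anything from the Toronto team’s actual system.

```python
import math

# Illustrative two-link planar 'leg': thigh and shank of unit length.
L1, L2 = 1.0, 1.0
HIP_LIMITS = (-math.pi / 2, math.pi / 2)   # assumed joint constraints
KNEE_LIMITS = (0.0, 2.8)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def low_level_ik(x, y):
    """Low level: find joint angles for a foot target, respecting limits."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c2 = clamp(c2, -1.0, 1.0)              # unreachable targets saturate
    knee = math.acos(c2)
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return (clamp(hip, *HIP_LIMITS), clamp(knee, *KNEE_LIMITS))

def forward(hip, knee):
    """Forward kinematics: where the foot actually ends up."""
    return (L1 * math.cos(hip) + L2 * math.cos(hip + knee),
            L1 * math.sin(hip) + L2 * math.sin(hip + knee))

# High level: only says where the foot goes; the joints sort themselves out.
target = (1.0, 0.5)
hip, knee = low_level_ik(*target)
foot = forward(hip, knee)
```

The point of the split is that the high-level controller never mentions a hip or a knee; it asks for a foot placement, and the constrained lower layer delivers it.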
What emerged (recall that complex systems have emergent behaviors) for a humanoid robot was a natural walking gait that even included the natural cross-extensor reflex of oppositely swinging arms. The result, as you can see in the video, is a totally cool and naturally moving virtual robot. BTW, you can compare this with the earlier and still ongoing work on the BigDog and LittleDog walking robots here.
The near-term applications of this are manifold – better-looking computer games, realistic animated movies, and, of course, real-world 3D robots that will be better able to do the jobs formerly reserved for humans. This includes all kinds of dangerous work like maintenance spacewalks, mining, disaster rescue, and maybe even combat. My own favorite is the reprise of long-dead actors starring in new movies and TV shows. For you older folks, consider seeing a forever-young Elizabeth Taylor reprising her classic character roles in new screenplays. Or seeing the development of new and realistic virtual stars who will perform in new movies for as long as audiences accept them. No more having to dress someone up in a skin-tight suit with lights and record them so that they can supply the natural-motion ‘body’ for a computer-generated film.
Artificial life forms are evolving basic intelligence. Some significant steps have been demonstrated in the evolution of artificial life, or more specifically, artificial intelligence. RR readers may recall that I am a strong proponent of the notion that human peer intelligence will evolve as opposed to being programmed into some final functional form.
Such evolutionary programs divide into two broad classes – evolutionary algorithms (EAs) and evolutionary programs (EPs) – also called genetic algorithms and genetic programming. In the EA paradigm the program implements a fixed algorithm that ‘powers’ the virtual critter, and the object is to find better and better control parameters that dictate how that algorithm operates in competition with critters having the same algorithm but differently evolved control parameters. Obviously, the intellectual evolution of such critters is top-end limited by the design of the algorithm (and, of course, by the ‘environment’ they find themselves in).
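In code, the EA picture is simple: one fixed ‘drive’ function whose behavior depends on a parameter vector, and a loop that keeps the fitter parameter vectors each generation. A minimal sketch follows; the fitness function, target parameters, and GA settings are made up purely for illustration.

```python
import random

random.seed(0)

TARGET = [0.3, -1.2, 2.5]   # hypothetical 'ideal' control parameters

def fitness(params):
    """The fixed algorithm: only these parameters evolve, never the code."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, sigma=0.2):
    """Offspring are copies with small random tweaks to the parameters."""
    return [p + random.gauss(0.0, sigma) for p in params]

# Initial population of random parameter vectors.
pop = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(30)]
initial_best = max(fitness(p) for p in pop)

for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # elitism: the fittest survive
    pop = elite + [mutate(random.choice(elite)) for _ in range(20)]

final_best = max(fitness(p) for p in pop)
```

Note how the ceiling the post describes is visible in the code: no matter how long this runs, the critter can never become anything more than `fitness` allows.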
The EP approach is more difficult to implement but can reach a much higher potential ‘IQ’ – one can argue that it is essentially limited only by the computational resources the critters have available to them in their virtual worlds. EP works by separately evolving each critter’s core program – the line-by-line code that powers it – rather than a set of control parameters. This means that from generation to generation EP critters can evolve new and emergent functions that were never in the original (simpler) code that gave ‘life’ to the first generation.
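The distinction shows up clearly side by side: where the EA evolves a parameter list, an EP critter evolves its instruction list itself, so operations that were never wired in at generation one can appear later. Here is a toy linear genetic-programming sketch; the register machine, op set, and target task are all illustrative assumptions, not the Michigan group’s platform.

```python
import random

random.seed(1)

OPS = {'add': lambda a, b: a + b,
       'sub': lambda a, b: a - b,
       'mul': lambda a, b: a * b}

def run(program, x):
    """Execute a critter's program: registers start as [x, 1.0, 0.0]."""
    regs = [x, 1.0, 0.0]
    for op, dst, s1, s2 in program:
        regs[dst] = OPS[op](regs[s1], regs[s2])
    return regs[0]

def random_instr():
    """One instruction: an op, a destination register, two sources."""
    return (random.choice(list(OPS)), random.randrange(3),
            random.randrange(3), random.randrange(3))

SAMPLES = [-2.0, -1.0, 0.0, 1.0, 2.0]

def error(program):
    """Toy task: evolve code that computes x*x + x (chosen arbitrarily)."""
    return sum((run(program, x) - (x * x + x)) ** 2 for x in SAMPLES)

def mutate(program):
    """Mutation rewrites an instruction -- the code itself changes."""
    child = list(program)
    child[random.randrange(len(child))] = random_instr()
    return child

# Generation one: short random programs; no 'multiply then add' routine
# exists yet -- if it emerges, evolution built it.
pop = [[random_instr() for _ in range(5)] for _ in range(40)]
initial_error = min(error(p) for p in pop)

for generation in range(150):
    pop.sort(key=error)
    elite = pop[:10]                      # elitism keeps the best programs
    pop = elite + [mutate(random.choice(elite)) for _ in range(30)]

final_error = min(error(p) for p in pop)
```

Because what mutates here is the instruction sequence rather than a parameter vector, the search space is the space of programs – which is why the potential ceiling is so much higher, and why the approach is so much harder to steer.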
And this kind of evolution is exactly what the Michigan team was able to achieve. Their critters started out with no memory capacity and evolved memory because they needed to remember what had happened to them in the past in order to beat out their memory-less neighbors. This should be a spine-tingling report for people who follow such things, and I can guarantee that it was a spine-tingler for the systems techies who witnessed it happen. (I had my own spine-tingler some years ago when an EA I had developed in my research evolved a better solution to a complex problem than the one I thought I had maxed out using other established means.)
As machines now start to demonstrate their ability to reconfigure their ‘thinking’ processes, we enter a new, accelerating phase of the approaching Singularity. Combining the ability to think better with the ability to move better (aka manipulate their environment) will yield even faster progress toward the momentary goal of peer intelligence with humans.
Is that a dem-bot?
Posted by: Todd Juvinall | 09 August 2010 at 07:33 AM