Wednesday, November 22, 2006

Self-aware robot deals with injuries

"Most robots have a fixed model laboriously designed by human engineers. We showed, for the first time, how the model can emerge within the robot. It makes robots adaptive at a new level, because they can be given a task without requiring a model. It opens the door to a new level of machine cognition and sheds light on the age-old question of machine consciousness, which is all about internal models," said Cornell University researcher Hod Lipson.

The robot that Lipson and his colleagues, Josh Bongard and Victor Zykov, have designed has sensors built into each joint. These allow it to sense its own state and then use this information to create models of how it could move. The various models compete with one another in the robot's "mind," and the winning strategy is then enacted in reality. (Image above, from left to right: Zykov, Bongard, Lipson.)
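
A rough way to picture that competition in code, as a minimal Python sketch (the function names, the model representation, and the toy numbers below are all illustrative assumptions, not anything from the paper): each candidate self-model predicts what the joint sensors should read for a given action, and the model that best explains the recorded data wins.

    # Illustrative sketch only, not the authors' code: a candidate self-model
    # is any function mapping a motor action to predicted sensor readings,
    # and models compete on how well they explain the recorded data.

    def prediction_error(model, actions, sensor_log):
        """Sum of squared errors between predicted and observed sensor vectors."""
        return sum(
            (p - o) ** 2
            for action, observed in zip(actions, sensor_log)
            for p, o in zip(model(action), observed)
        )

    def winning_model(candidates, actions, sensor_log):
        """The model most consistent with experience so far wins."""
        return min(candidates,
                   key=lambda m: prediction_error(m, actions, sensor_log))

    # Toy usage: two competing "self-models" of a one-joint robot.
    model_a = lambda action: [0.5 * action[0]]  # thinks the joint is sluggish
    model_b = lambda action: [2.0 * action[0]]  # thinks the joint is responsive
    actions = [[1.0], [2.0]]
    sensor_log = [[1.9], [4.1]]                 # the data favor model_b
    assert winning_model([model_a, model_b], actions, sensor_log) is model_b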

The robot's first attempts at locomotion look strangely like a fish trying to move on land. The scientists actually refer to the robot as "starfish" because of its shape (although a real starfish has five limbs, not four). What's amazing is that the researchers didn't program the locomotion behavior itself, only the robot's ability to generate locomotion behaviors.

The fact that the robot has an algorithm for generating behaviors, rather than a rigid built-in model describing them and the surrounding environment, makes the "starfish" more similar to a real animal. For example, when you twist an ankle, you start walking differently so as not to put too much pressure on the injured muscles. You can do this because your brain can invent new kinds of locomotion behaviors, rather than relying on specific built-in strategies for every imaginable injury.

"Higher animals use some form of an 'internal model' of themselves for planning complex actions and predicting their consequence, but it is not clear if and how these self-models are acquired or what form they take," the authors write in their paper published in the latest issue of the journal Science. "Analogously, most practical robotic systems use internal mathematical models, but these are laboriously constructed by engineers. While simple yet robust behaviors can be achieved without a model at all, here we show how low-level sensation and actuation synergies can give rise to an internal predictive self-model, which in turn can be used to develop new behaviors."

The algorithm programmed into the robot is as follows (a runnable toy sketch appears after the list):

Creating a model of its own structure

  • (a) Robot physically performs an action. Initially, this action is random; subsequently, it is the best action generated in step (c).
  • (b) Robot generates several self-models that match the sensor data collected while performing previous actions. It does not know which model is correct.
  • (c) Robot generates several possible actions that would disambiguate the competing self-models, then returns to step (a) to perform the best one.
Using the self-model to generate locomotion strategies
  • (d) After several (a)-(b)-(c) cycles, the current best self-model is used to generate a locomotion sequence through optimization: various locomotion strategies compete with each other virtually, and the robot uses the self-model and its model of the environment to predict which strategy works best.
  • (e) The best locomotion sequence is then executed by the physical robot.
  • (f) Depending on the results, the robot returns to step (b) to further refine the self-model, or to step (d) to create new behaviors.
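
Put together, the cycle can be made concrete with a small, self-contained toy. In the sketch below the "robot" is just a hidden linear map from four motor commands to one sensed displacement, and every search is naive random sampling; these are assumptions made so the example runs end to end, not the authors' method (they evolved full physical self-models and gaits).

    import random

    TRUE_WEIGHTS = [0.8, -0.3, 0.5, 0.1]  # the robot's hidden "body"

    def act(action):
        """(a) Physically perform an action; return the sensed displacement."""
        return sum(w * a for w, a in zip(TRUE_WEIGHTS, action))

    def model_error(weights, actions, sensed):
        """How badly a candidate self-model explains the recorded data."""
        return sum((sum(w * a for w, a in zip(weights, act_)) - s) ** 2
                   for act_, s in zip(actions, sensed))

    def random_action():
        return [random.uniform(-1, 1) for _ in range(4)]

    actions, sensed = [], []
    action = random_action()             # the first action is random

    for _ in range(16):                  # 16 brief self-directed interactions
        sensed.append(act(action))       # (a) act and record the sensor data
        actions.append(action)

        # (b) generate several competing self-models that match the data
        pool = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
        models = sorted(pool, key=lambda m: model_error(m, actions, sensed))[:5]

        # (c) the next action is the one the surviving models disagree on
        #     most, so performing it will best disambiguate them
        def disagreement(a):
            preds = [sum(w * x for w, x in zip(m, a)) for m in models]
            mean = sum(preds) / len(preds)
            return sum((p - mean) ** 2 for p in preds)

        action = max((random_action() for _ in range(50)), key=disagreement)

    # (d) use the best self-model to search for the "gait" (here reduced to
    #     a single action) with the greatest predicted displacement
    best = models[0]
    gait = max((random_action() for _ in range(500)),
               key=lambda a: sum(w * x for w, x in zip(best, a)))
    print("predicted:", sum(w * x for w, x in zip(best, gait)))
    print("actual:   ", act(gait))       # (e) execute on the "real" robot
    # (f) if actual falls well short of predicted, return to (b) or (d)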

"The legged robot learned how to move forward based on only 16 brief self-directed interactions with its environment. These interactions were unrelated to the task of locomotion, driven only by the objective of disambiguating competing internal models," the authors wrote.

In other words, the robot learned how to walk not by trying to learn how to walk, but simply by trying to build a coherent model of its own body. This result is quite spectacular, because it may also explain how animals learn to control their bodies from relatively scarce experience. It also turns a common idea on its head: one tends to think that animals instinctively know how to move, and then use that ability to learn about themselves. It might well be the other way around: animals generate models of themselves and, in doing so, learn how to move.

The algorithm also allows the robot to automatically redesign its behavior after injuries. When the researchers shortened one of its legs, the "starfish" developed a new style of moving.
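
In terms of the toy sketch above, an injury is nothing but a change to the hidden body, and recovery is just the same loop run again; a hypothetical continuation:

    # Continuing the toy sketch above (still an illustrative assumption):
    # an "injury" alters the hidden body; no injury-specific code exists.
    TRUE_WEIGHTS[2] = 0.0  # "shorten" one leg: that motor stops producing motion

    # Re-running the same (a)-(f) cycle now yields a self-model of the
    # damaged body, and the gait optimized against it avoids the dead motor.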

"We demonstrate, both computationally and experimentally, how a legged robot automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage," they write.

"These findings may help develop more robust robotics, as well as shed light on the relation between curiosity and cognition in animals and humans: Creating models through exploration, and using them to create new behaviors through introspection."
