When faced with an unfamiliar surface or obstacle, robots have a difficult time improvising, often stopping abruptly or falling hard. Now, however, researchers have created a new model of robotic locomotion that adapts to terrain in real time, adjusting the robot's gait as it encounters unexpected obstacles.
Robotic movement can be precise and versatile, and robots can “learn” to climb steps, cross damaged terrain, and so forth, but these behaviors are more like individual skills that the robot switches between. Even though robots like Spot can regain their balance when pushed or kicked, the system is simply trying to compensate for a physical anomaly, while following the same walking policy.
Some adaptation models already exist, but some (such as one based on real insect movements) are very specific, while others take so long to kick in that the robot has likely already fallen. The researchers, a team from UC Berkeley, Carnegie Mellon University, and Facebook AI, call their model Rapid Motor Adaptation (RMA). It is inspired by the fact that humans and animals are able to change their walking style quickly, effectively, and unconsciously based on their surroundings.
Jitendra Malik, a senior researcher at Facebook AI and UC Berkeley, remarks: “The first time you walk on a beach, your foot sinks in and you have to use more force to pull it out. It feels strange at first, but within a few steps you’re walking naturally. What’s the secret? You may never have walked on sand before, but even if you’ve walked on it many times in your life, you’re not entering some special ‘sand mode’ for soft surfaces. As you move, you change automatically, without consciously considering the external environment. Your body responds to different physical conditions by sensing the consequences of those conditions on the body itself. When we humans walk in new conditions, in a very short span of time, less than around half a second, we make enough measurements to estimate what the conditions are, and we modify our walking policy. RMA uses a similar system.”
The robot’s small brain (everything runs locally on its limited onboard computation unit) has been trained to maximize forward motion with minimum energy and avoid falling, responding immediately to information coming from its virtual joints, accelerometers, and other physical sensors.
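Those training objectives can be pictured as a simple reward function. The sketch below is purely illustrative: the function name, weights, and torque-based energy cost are assumptions for the sake of the example, not the paper's actual reward terms.

```python
def locomotion_reward(forward_velocity, joint_torques, has_fallen,
                      energy_weight=0.05, fall_penalty=10.0):
    """Toy reward reflecting the objectives described in the article:
    reward forward motion, penalize energy use, penalize falling.
    All names and weights here are illustrative placeholders."""
    # A common proxy for energy use is the squared joint torques.
    energy_cost = energy_weight * sum(t * t for t in joint_torques)
    reward = forward_velocity - energy_cost
    if has_fallen:
        reward -= fall_penalty
    return reward

# A step that moves forward without falling scores far higher
# than the same step ending in a fall.
good = locomotion_reward(0.8, [0.0] * 12, has_fallen=False)
bad = locomotion_reward(0.8, [0.0] * 12, has_fallen=True)
```

A reinforcement-learning agent trained in simulation against a reward of this general shape learns gaits that move forward cheaply and stay upright.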
Malik emphasizes that the system is entirely internal: it uses no visual input at all. People and animals can walk without vision, so why not a robot? Since it cannot observe “externalities” such as the friction coefficient of the rock or sand it walks on, it simply observes itself closely. As co-author Ashish Kumar of Berkeley put it, “We don’t learn about sand, we learn about feet sinking.”
Ultimately, the system consists of two parts: the main, always-running algorithm that controls the robot’s gait, and a parallel adaptive algorithm that monitors changes in the robot’s internal readings. When a significant change is detected, it is analyzed while the legs continue to act, and the main model adjusts to the new situation. As a result, the robot thinks only about how to move forward under the conditions at hand, improvising a unique gait.
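The two-part design can be sketched as a control loop in which an adaptation module summarizes recent internal readings and the gait policy conditions its commands on that summary. This is a loose sketch under assumed names: the classes, the averaging "estimator", and the scalar state are placeholders, not the paper's actual neural networks.

```python
from collections import deque


class AdaptationModule:
    """Parallel module: watches a short history of internal readings
    and summarizes them into a terrain estimate (e.g., feet sinking)."""

    def __init__(self, history_len=50):
        self.history = deque(maxlen=history_len)

    def update(self, sensor_reading):
        self.history.append(sensor_reading)
        # Placeholder: average recent readings as the terrain estimate.
        return sum(self.history) / len(self.history)


class BasePolicy:
    """Always-running gait controller; conditions its command on the
    current state plus the adaptation module's terrain estimate."""

    def act(self, state, extrinsics):
        # Placeholder gait command: push forward, compensating for
        # however much the terrain estimate says the ground gives way.
        return state + extrinsics


# Control loop: adaptation runs alongside the gait policy every step,
# so there is no pause while the robot re-estimates the terrain.
adapt, policy = AdaptationModule(), BasePolicy()
for reading in [0.0, 0.0, 0.5, 0.5]:  # internal readings over time
    extrinsics = adapt.update(reading)
    command = policy.act(state=1.0, extrinsics=extrinsics)
```

The key design point, as described above, is that the estimator never blocks the gait policy: the legs keep acting on the latest estimate while the estimate itself is continuously refreshed.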
According to the news release, after training in simulation the robot performed admirably in real life, despite never having seen unstable or sinking ground, obstructive vegetation, or stairs during training. It walked on sand, mud, hiking trails, tall grass, and a dirt pile without failing in any of its trials; it walked down a set of stairs in 70% of trials; and it navigated a cement pile and a pile of pebbles in 80% of trials. It also maintained its height with a high success rate while carrying a 12 kg load, which amounted to 100% of its body weight.
The gif below shows examples of many of these situations.
Malik drew attention to the work of NYU’s Karen Adolph, whose research has demonstrated how adaptable and free-form the human process of learning to walk actually is. Adaptability is something a robot needs to learn from scratch, rather than choosing among multiple pre-built modes. Just as one cannot build a better computer-vision system by exhaustively labeling and documenting every object and interaction (there will always be more), one cannot equip a robot for an unpredictable physical environment with 10, 100, even thousands of special parameters for stepping on gravel, mud, rubble, wet wood, and so on. In fact, one may not even want to specify anything beyond the idea of forward movement.
Nothing about the robot’s morphology or legs is pre-programmed. In other words, the basis of the system (not the fully trained one, which eventually developed its own quadrupedal gaits) could potentially be used not only for other legged robots but in entirely different fields of AI and robotics. According to Deepak Pathak of Carnegie Mellon University, the robot’s legs are similar to the fingers of a human hand: legs interact with environments the way fingers interact with objects, and this basic idea can be applied to almost any robot. Moreover, Malik pointed out, base and adaptive algorithms can be combined to augment other intelligent systems.
For now, the team is presenting its initial findings in a paper at the Robotics: Science and Systems conference, and acknowledges that much further research is needed: for instance, cognitive strategies such as assembling improvised gaits into an internal library as a kind of “medium-term” memory, or using vision to predict the need for a new movement style. Even so, RMA appears to be a promising new approach to a long-standing challenge in robotics.