Novel algorithms are deployed to allow robots to learn from mistakes and teach themselves to walk
July 20, 2022
In the animal kingdom, learning to walk within moments of birth is a survival skill, and it's one that's becoming increasingly prominent in the robotic world.
A team of researchers from the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, recently unveiled a robotic dog that taught itself to walk using a learning algorithm that showed results within an hour of deployment. The research set out to determine how animals use stumbling and tripping to learn to walk, with these incidents serving to tune motor control and neural networks.
Results from the research were published in the journal Nature Machine Intelligence.
“As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,” said Felix Ruppert, first author of the researchers’ study. “If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.”
The Labrador-sized robot, dubbed Morti, features an AI system designed to replicate the motor function network found in animals: the central pattern generator (CPG). Because the CPG typically resides in an animal's spinal cord, Morti was fitted with a small computer that controlled its movements. Placed on Morti's back, this virtual spinal cord received data from the robot's foot sensors, comparing received and expected sensor information to train itself to walk. If the robot stumbles, it adapts its motor control patterns accordingly.
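The loop described above — a rhythmic pattern generator whose parameters are nudged whenever predicted and sensed foot contact disagree — can be sketched in a few lines. This is a minimal illustration of the general idea, not the Stuttgart team's actual controller; all class names, gains, and numbers are assumptions.

```python
import math

class SimpleCPG:
    """Toy central pattern generator: one oscillator driving one leg.

    Illustrative only -- the real controller coordinates many joints.
    """

    def __init__(self, frequency=1.0, learning_rate=0.1):
        self.frequency = frequency        # oscillation rate (Hz), assumed
        self.learning_rate = learning_rate
        self.phase = 0.0

    def step(self, dt):
        """Advance the oscillator and return a leg-angle command."""
        self.phase = (self.phase + 2 * math.pi * self.frequency * dt) % (2 * math.pi)
        return math.sin(self.phase)

    def adapt(self, expected_contact_phase, measured_contact_phase):
        """Compare predicted vs. sensed foot contact and nudge the
        oscillator -- the 'learning from stumbles' feedback loop."""
        error = measured_contact_phase - expected_contact_phase
        self.frequency -= self.learning_rate * error
        return error

cpg = SimpleCPG()
for _ in range(100):
    cpg.step(dt=0.01)
# A stumble shows up as the foot touching down later than predicted:
err = cpg.adapt(expected_contact_phase=3.14, measured_contact_phase=3.3)
```

Repeating the `adapt` call over many steps drives the mismatch toward zero, which is the sense in which frequent stumbles "give a measure of how well the robot walks."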
“We can’t easily research the spinal cord of a living animal. But we can model one in the robot,” said Alexander Badri-Spröwitz, study co-author. “This is fundamental research at the intersection between robotics and biology. The robotic model gives us answers to questions that biology alone can’t answer.”
The Stuttgart team is not the only one announcing news of a robotic dog's fast learning capabilities. A University of California, Berkeley team recently unveiled results from an AI technique known as reinforcement learning, which enables robots to teach themselves to walk by rewarding them for desired actions.
The team’s algorithm, Dreamer, allows the robot to learn from past mistakes and conduct trial-and-error calculations to streamline the learning process, enabling it not only to learn to walk but also to adapt to unexpected situations.
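Dreamer itself is a sophisticated model-based agent, but the core reward-for-desired-actions loop it builds on can be shown with plain tabular Q-learning on a toy "walk forward" task. This sketch is a stand-in for illustration only; the states, actions, and rewards are all made-up assumptions, not from the Berkeley work.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, ["forward", "backward"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # illustrative hyperparameters

def env_step(state, action):
    """Toy environment: +1 reward for stepping toward the goal,
    -1 for stepping away -- the 'reward for desired actions' signal."""
    if action == "forward":
        return min(state + 1, N_STATES - 1), 1.0
    return max(state - 1, 0), -1.0

for _ in range(200):                     # trial-and-error episodes
    s = 0
    for _ in range(10):
        # Mostly exploit the best-known action, occasionally explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r = env_step(s, a)
        # Q-learning update: move value toward reward + future value
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
```

After training, the agent prefers "forward" from the start state — the same reward-shaped trial and error, in miniature, that lets a legged robot discover a gait.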
Such advancements could improve robots’ functionality and self-reliance, allowing businesses to deploy them with confidence that they will adapt rapidly to the circumstances in front of them.
Assistant Editor, IoT World Today
Scarlett Evans is the assistant editor for IoT World Today, with a particular focus on robotics and smart city technologies. Scarlett has previous experience in minerals and resources with Mine Australia, Mine Technology and Power Technology. She joined Informa in April 2022.