Robot Dog Learns to Walk Starting on its Back

New system lets robots learn by experience and automatically adapt

Scarlett Evans, Assistant Editor, IoT World Today

July 21, 2022

2 Min Read

In just an hour, a robotic dog taught itself to walk from a starting position on its back.

A research team from the University of California, Berkeley, developed a new algorithm based on reinforcement learning (RL) to train robots without simulations or demonstrations. 

The system gives robots a trial-and-error learning model that could be groundbreaking for efficient real-world robot training.

The Dreamer program uses reinforcement learning to "train" robots through continuous feedback, rewarding a bot once it successfully completes a task.

Dreamer was applied to four robots to test its reinforcement learning capabilities in practice.

A quadruped robot learned to stand up from its back and walk within an hour, then taught itself to withstand pushes and roll over.

The team also trained two robotic arms to pick and place objects using camera images, with results outperforming model-free approaches. When deployed on a wheeled robot, Dreamer helped it navigate to a destination using only camera images.

Reinforcement learning has typically been considered inefficient for training because of the vast amount of trial and error required before a robot adapts its behavior. Training in simulation avoids some of that cost, but simulations can be insufficient: they capture only a narrow slice of real-world situations and are prone to inaccuracies.

Dreamer differs from conventional deep reinforcement learning in that it learns a model of the world from a small amount of real-world interaction, then runs its trial-and-error inside that model to plan the robot's movements.
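The general idea can be sketched in a toy example. The snippet below is purely illustrative and is not the paper's algorithm (Dreamer learns a neural world model from camera images); it uses a made-up 1-D "walk to a target" task to show the pattern of collecting a little real interaction, fitting a model, and planning against the model instead of the real world.

```python
# Toy sketch of world-model RL: learn a model of the environment from
# a little real interaction, then plan using model predictions.
# The task, names, and numbers here are illustrative assumptions.

TARGET = 5.0  # position the agent should reach

def step(state, action):
    """Real environment: move by `action`; reward is closeness to target."""
    next_state = state + action
    return next_state, -abs(TARGET - next_state)

# 1) Collect a small amount of real interaction (scripted exploration).
transitions = []
state = 0.0
for i in range(20):
    action = (-1, 0, 1)[i % 3]
    next_state, _ = step(state, action)
    transitions.append((state, action, next_state))
    state = next_state

# 2) Fit a crude world model: average observed state change per action.
model = {}
for a in (-1, 0, 1):
    deltas = [s2 - s for s, act, s2 in transitions if act == a]
    model[a] = sum(deltas) / len(deltas)

# 3) Plan by "imagining" each action's outcome in the learned model
#    and greedily taking the one with the best predicted reward.
def plan_action(state):
    return max((-1, 0, 1), key=lambda a: -abs(TARGET - (state + model[a])))

state = 0.0
for _ in range(10):
    state, _ = step(state, plan_action(state))
# The planner walks the agent to the target using only model predictions.
```

The key point mirrored from the article: step 3 never queries the real environment to evaluate candidate actions, so most of the trial and error happens "in imagination," which is what makes the small amount of real-world interaction go a long way.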

While initial results are promising, further testing is needed to see how the algorithm will respond to different situations, and challenges remain in the time it takes to code each robot. 

The results of the study, "DayDreamer: World Models for Physical Robot Learning," were published on the preprint server arXiv.

About the Author(s)

Scarlett Evans

Assistant Editor, IoT World Today

Scarlett Evans is the assistant editor for IoT World Today, with a particular focus on robotics and smart city technologies. Scarlett has previous experience in minerals and resources with Mine Australia, Mine Technology and Power Technology. She joined Informa in April 2022.

