Stanford Team Creates Multi-Sensory Robot Training Platform

The new platform trains robots using both visual and audio cues

Scarlett Evans, Assistant Editor, IoT World Today

June 23, 2023

[Image: Example of the Sonicverse platform. Credit: Stanford University]

Stanford University researchers have created a new robot-training platform, Sonicverse, which uses audio and visual elements for navigation.

Designed as a simulation platform for robots that rely on both camera and microphone feeds, Sonicverse accounts for any sounds robots may cause or detect as they complete tasks, creating what the researchers say is a more “realistic” training environment.

“Developing embodied agents in simulation has been a key research topic in recent years … However, most of them assume deaf agents in silent environments, while we humans perceive the world with multiple senses,” said the team. “Sonicverse models realistic continuous audio rendering in 3D environments in real time.” 

In tests, the researchers used the platform to train a simulated TurtleBot, requiring it to navigate its environment and reach a set destination without colliding with obstacles.

The system was then applied to a real-world TurtleBot placed in an office environment.

The tests were successful, and Sonicverse is now available online for training AI agents and robotic systems.

About the Author(s)

Scarlett Evans

Assistant Editor, IoT World Today

Scarlett Evans is the assistant editor for IoT World Today, with a particular focus on robotics and smart city technologies. Scarlett has previous experience in minerals and resources with Mine Australia, Mine Technology and Power Technology. She joined Informa in April 2022.

