
Research Team Collects Data to Train Off-Road AVs

The data set is thought to be the largest ever gathered for off-road terrain

Scarlett Evans

June 20, 2022


A team of researchers from Carnegie Mellon University has created a data set to inform off-road autonomous vehicles (AVs), taking a Yamaha Viking All-Terrain Vehicle (ATV) off-road and compiling more than 200,000 interactions across five hours of driving data.

In the data collection drives, the ATV’s steering was controlled by a joystick while the brakes were controlled directly by a human driver; each trajectory ended when the driver applied the brakes. Proprioceptive and exteroceptive sensors collected real-time data on each trajectory.

“We were forcing the human to go through the same control interface as the robot would,” said Wenshan Wang, a project scientist at the Robotics Institute. “In that way, the actions the human takes can be used directly as input for how the robot should act.”
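The setup Wang describes amounts to logging the human’s commands in the same action space the robot will later use, so each recorded step doubles as an imitation-learning target. A minimal sketch of such a logger is below; all field names and values are illustrative assumptions, not the actual TartanDrive schema:

```python
# Minimal sketch of logging teleoperated driving data in the robot's own
# action space. Field names and values are hypothetical, for illustration.
from dataclasses import dataclass, field

@dataclass
class Step:
    steer: float          # joystick steering command, in [-1, 1]
    throttle: float       # commanded throttle, in [0, 1]
    wheel_speeds: list    # per-wheel speed readings (proprioceptive)
    shock_travel: list    # suspension shock readings from seven sensors

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)

    def log(self, step: Step) -> None:
        self.steps.append(step)

traj = Trajectory()
traj.log(Step(steer=0.1, throttle=0.6,
              wheel_speeds=[210, 212, 208, 211],
              shock_travel=[0.02] * 7))

# A trajectory ends when the human driver brakes; every logged
# (sensor readings, action) pair can then train the robot directly,
# since human and robot share one control interface.
print(len(traj.steps))
```

Because the human never touches controls the robot lacks, no post-hoc translation from human demonstration to robot command is needed.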

The data gathered included video footage of the vehicle’s journey, the speed of each wheel and suspension shock readings from seven separate sensors. The resulting data set, dubbed TartanDrive, is intended to provide a foundation for training off-road AVs.

According to a statement from the team, TartanDrive is the largest real-world, multimodal, off-road driving data set “both in terms of the number of interactions and types of sensors.” 

“Unlike autonomous street driving, off-road driving is more challenging because you have to understand the dynamics of the terrain in order to drive safely and to drive faster,” said Wang.

Previous attempts to automate off-road driving have relied on annotated maps that indicate to robots where certain landmarks and terrain features are, such as a lake, hill or rubble. These traditional models also fail to account for logistical factors such as the speed and angle at which a robot approaches an area. With the new research, the team has been able to create predictive models that more accurately determine whether a robot can or cannot drive in a given environment.
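The team’s actual models are learned from the multimodal sensor data, but the core idea of terrain-aware dynamics prediction can be shown with a toy forward model. Every coefficient and function name below is a made-up illustrative assumption, not a TartanDrive result:

```python
# Toy terrain-aware dynamics model: predicts the next vehicle state from the
# current state, the action and a terrain roughness estimate. Rougher terrain
# cuts effective acceleration and drains speed faster. All coefficients are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class State:
    position: float  # meters along the trajectory
    speed: float     # m/s

def predict_next_state(state: State, throttle: float, roughness: float,
                       dt: float = 0.1) -> State:
    """Step the toy model forward by dt seconds.

    throttle: commanded acceleration input in [0, 1]
    roughness: terrain roughness in [0, 1]; 0 = smooth, 1 = very rough
    """
    max_accel = 3.0  # assumed m/s^2 on smooth ground
    accel = throttle * max_accel * (1.0 - 0.8 * roughness)
    drag = 0.1 + 0.5 * roughness  # rough terrain slows the vehicle more
    new_speed = max(0.0, state.speed + (accel - drag * state.speed) * dt)
    new_position = state.position + state.speed * dt
    return State(new_position, new_speed)

# Rolling out the same throttle on smooth vs. rough terrain shows why a
# planner needs terrain-aware dynamics: reachable speeds differ sharply.
smooth = State(position=0.0, speed=5.0)
rough = State(position=0.0, speed=5.0)
for _ in range(50):
    smooth = predict_next_state(smooth, throttle=0.8, roughness=0.1)
    rough = predict_next_state(rough, throttle=0.8, roughness=0.9)
print(round(smooth.speed, 2), round(rough.speed, 2))
```

A map annotation alone cannot capture this: the same throttle command that speeds the vehicle up on smooth ground slows it down on rough ground, which is exactly the kind of dynamics a data-driven model can learn.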

The team presented its findings at the International Conference on Robotics and Automation in Philadelphia in May. Now, the group says it will continue collecting data to make its data set more robust and expand its potential use cases beyond dynamics prediction to imitation learning. 

