Lidar, computer vision and radar could play a pivotal role in the ultimate commercialization of autonomous vehicles.

Daniel O'Shea

October 15, 2019

It is still early in the evolution of autonomous transportation, though it is never too soon to think about safety and collision avoidance, and the issue is becoming more urgent with every report of a crash involving a driverless vehicle. 

The global autonomous vehicle market is expected to be worth more than $170 billion by 2024, according to a Research and Markets report, and dozens of automotive and technology giants are investing billions of dollars in autonomous vehicle projects. But in recent years, crashes involving Tesla vehicles with the Autopilot function engaged, a fatal self-driving Uber accident and other collisions involving self-driving cars and buses have occurred. Those incidents have put a bright spotlight on what is likely the biggest concern autonomous vehicles must overcome to succeed: safety.

The need to prove safety has been a major factor in how long it is taking many autonomous vehicle projects to exit their test tracks and merge onto public roadways with live traffic, according to Keith Kirkpatrick, principal analyst at research firm Tractica. 

“We’re still in that nascent period of the autonomous vehicle market where there are a lot of trials going on,” he said. “There are some commercial deployments, but mostly in limited, controlled environments — things like autonomous buses in a controlled situation where you don’t have a lot of cross traffic and interference.”

Kirkpatrick said there might be an “unrealistic expectation” that a driverless vehicle should never be involved in even a single collision. True, fatigue, distraction, willful violation of traffic laws and other factors lead human drivers to cause millions of accidents and related fatalities annually. But the news media are quick to amplify any report of an autonomous vehicle crash.

The biggest fear for companies working on autonomous vehicles is contributing to or causing a collision — especially one that involves a fatality. For this reason, the industry is paying a massive amount of attention to technologies that help autonomous vehicles detect other vehicles, humans, road signals and other objects within a given driving environment, and help them to avoid such incidents.

Lidar and Computer Vision

Of all self-driving vehicle technologies, none is more important than those that help the vehicle gain a complete and accurate visualization of its driving environment. That is especially true for transportation and logistics fleets, where the ultimate hope is that fully automated, driverless vehicles can improve fleet performance, efficiency and safety records, all while decreasing expenses. A 2018 Tractica report projected that about 188,000 autonomous trucks and buses will ship annually by 2022, and with no human intervention possible in some of these vehicles, their crash avoidance systems will need near-perfect execution.

Among the technologies in play for crash avoidance are light detection and ranging (lidar) technology and camera-based systems that leverage computer vision.

Lidar uses laser-enabled sensors deployed on a vehicle that fire millions of light pulses each second to create a 3D map of the environment around a self-driving vehicle, allowing autonomous systems to locate the vehicle within that environment, instantly recognize objects in its path and take appropriate action. Lidar does this by relying on the “time of flight” principle: measuring how long each pulse takes to bounce off an object and return.
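To make the arithmetic concrete, here is a minimal sketch (purely illustrative, not any vendor’s code) of the time-of-flight calculation: the one-way distance to an object is the speed of light multiplied by the measured round-trip time, divided by two.

```python
# Minimal sketch of the time-of-flight principle lidar relies on:
# distance = (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # meters per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Return the one-way distance to a reflecting object."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A pulse that returns after roughly 1.33 microseconds indicates an
# object about 200 meters away.
print(f"{tof_distance_m(1.334e-6):.1f} m")  # ~200.0 m
```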

Camera-based computer vision solutions, as the description implies, rely on a real-world view of the driving environment derived from high-resolution, sensor-based cameras placed on multiple locations around a vehicle.

These technologies are often positioned, in media reports and by some of the companies that work with them, as rivals to one another, especially lidar versus camera-based computer vision.

For example, proponents of lidar extol the technology’s ability to build a comprehensive view of the driving environment within 150 to 200 meters of a vehicle, as well as its ability to operate in a variety of ambient lighting conditions. They also point out that camera-based systems in most cases offer only a two-dimensional view, and may not be able to see low-lying obstacles if the cameras are mounted on a vehicle’s roof.

According to a white paper from lidar technology company Velodyne and research firm Frost & Sullivan, lidar provides “a 360-degree horizontal field of view, and up to 40-degree vertical field of view, allowing the vehicle to generate dense, high-resolution 3D maps of the environment up to 10 to 20 times per second.”

A Velodyne spokesman further noted in an email to IoT World Today that lidar “has inherent advantages over camera and radio wave-based radar, such as the ability to see the world in real-time 3D at range despite low ambient lighting.” He later added that “lidar delivers a core set of perception data that provides real-time free space detection in all light conditions. It represents a significant opportunity for vehicle manufacturers to improve roadway safety.”

Companies working with camera-based computer vision technology argue their systems have been effective in a variety of lighting conditions and have succeeded in weather conditions they say could limit the effectiveness of lidar.

For example, Vivian Sun, head of marketing and business development for TuSimple, a company creating autonomous systems and software for self-driving trucks, said a test of its camera-based system with the U.S. Postal Service involved “1,200 hours of night driving,” and phases in which vehicles were driven autonomously for as long as 22 hours in a single stretch, “so you go through all light conditions.”

She also said TuSimple conducted a test in Tucson, Arizona, that demonstrated the ability of its technology to outperform lidar on specific navigational functions, as well as when weather is a factor. “We did a test in Tucson in the rain where the truck was taking an unprotected left turn,” she said. “Lidar may be able to see 150 meters, but if traffic is coming at 45 miles per hour or more you need to be able to see more than 200 meters to know what’s coming. Our computer vision technology can see further and react faster, giving the vehicle more time to react and do better path planning.”
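Sun’s numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes (an assumption for illustration, not a figure from the article) that a loaded truck needs roughly 10 seconds to clear an unprotected left turn; at 45 mph, oncoming traffic covers about 200 meters in that time, which is why a 150-meter sensing range falls short.

```python
# Back-of-the-envelope check of the sight-distance claim.
# Assumption (not from the article): a loaded truck needs roughly
# 10 seconds to clear an unprotected left turn.
MPH_TO_M_PER_S = 0.44704

def required_sight_distance_m(traffic_speed_mph: float,
                              turn_duration_s: float) -> float:
    """Distance oncoming traffic covers while the truck completes its turn."""
    return traffic_speed_mph * MPH_TO_M_PER_S * turn_duration_s

# Traffic at 45 mph covers ~201 m in 10 s, just beyond a 150-200 m
# lidar range; that is the gap Sun says cameras can close.
print(f"{required_sight_distance_m(45, 10):.0f} m")  # ~201 m
```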

Companies using camera-based approaches also argue their systems are better at identifying the content of roadway signs and signals. In addition, they say some lidar systems are not automotive-grade and may not withstand the miles of driving that autonomous systems must survive just to be considered for commercial implementation.

The Importance of Redundancy

The apparent rivalry between lidar and camera-based technology has become heated enough that Elon Musk has described lidar as “a fool’s errand,” and anyone relying on the technology as “doomed.”

Tractica’s Kirkpatrick said such claims and debates about which technology is better ultimately overlook a more grounded reality: Most present-day autonomous vehicles use a mix of technologies for collision avoidance, and that is likely to remain the case for at least the near future.

This is made possible with the help of sensor data fusion, the popular IoT concept Kirkpatrick described as “the idea of taking data from all of these disparate sensors and systems, and trying to make sense of all that.” This is done by algorithms that weight the different data points, then analyze and process them to “ascertain, for example, if lidar is telling us that there is some sort of object that is some feet away on the road. We may need another technology to tell us what that object is, and how the vehicle should react,” he added.
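As a rough illustration of the weighting Kirkpatrick describes, the sketch below fuses hypothetical distance estimates from three sensors using an inverse-variance weighted average. The sensor names, readings and variances are invented for the example; production fusion pipelines are far more sophisticated, typically built on Kalman filters or learned models.

```python
# Illustrative sensor fusion: combine distance estimates from disparate
# sensors, weighting each by how much we trust it (inverse of variance).
# All numbers below are hypothetical.
readings = {
    # sensor: (estimated distance to object in meters, variance)
    "lidar":  (48.2, 0.1),   # precise ranging, but degraded in rain
    "camera": (47.5, 1.5),   # good at classifying, noisier at ranging
    "radar":  (49.0, 0.8),   # long range, coarse resolution
}

# Weight each sensor by the inverse of its variance, then average.
weights = {name: 1.0 / var for name, (_, var) in readings.items()}
fused = sum(w * readings[name][0] for name, w in weights.items()) / sum(weights.values())
print(f"fused distance estimate: {fused:.1f} m")  # ~48.2 m, dominated by lidar
```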

Whereas lidar paints a comprehensive 3D view of objects in the driving environment, cameras can identify those objects. Each may perform more or less accurately depending on weather or lighting conditions, or the guidance required for navigational functions. Other technologies may also contribute to the sensor data mix, such as traditional radar, which can sense objects at a greater distance than either lidar or computer vision, or GPS, which in some cases may provide the best fix on a vehicle’s location.

While companies like Velodyne and TuSimple are firm in arguing the advantages of their respective technologies and the disadvantages of others, they seem to buy into this notion.

As TuSimple’s Sun said, “We believe that lidar is important. We believe in sensor fusion. We have cameras all around the vehicle, and we also have lidar, radar and GPS on the vehicle, so all of the sensors give us really good input for understanding the environment around us. All kinds of sensors have limitations, but mastering the art of using the best of the sensors and the echoing of information will give us the best implementation.”

She added that while in TuSimple’s view lidar “has a limitation when it comes to rain, it also gives us a very good 3D view of the world. Some of our cameras are on the truck’s rooftop, but lidar gives us a good view from the ground up,” Sun said. “If a company can make good use of all the sensors that are available in the market, they have a good path to success.”

Velodyne had a similar take, as its spokesman said via email, “Sensor redundancy is an asset for autonomous vehicles. Building vehicle systems with sensor suites that include lidar along with other sensor technologies enables the strengths of each type of sensor to be optimized for the various use cases.” 

Kirkpatrick said this common belief in taking a multiple-technology approach to collision avoidance renders the whole technology debate moot, at least for now. “That’s why when you see Elon Musk talking about lidar and crapping all over it, it’s important to understand we are not going to see autonomous vehicle systems, whether you’re talking about cars or trucks or buses, rely on a single system for crash avoidance,” he said. “It just doesn’t make any sense because each technology has specific strengths and limitations.”

The need for multiple crash avoidance schemes and robust computing architectures to handle large volumes and varieties of sensor data may raise other challenges for the autonomous driving sector. It could increase the number of sensors that need to be deployed on a vehicle, or require sensors capable of leveraging multiple technologies, either of which could add cost. Collecting and processing more data from different sources could also tax in-vehicle computing power, meaning that some processing, perhaps pertaining to vehicle functions that don’t require zero latency, could be offloaded to a nearby edge computing location outside the vehicle.
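One simple way to picture that split is a latency-based routing rule: work with a hard real-time deadline stays on the vehicle’s computer, while everything else can be queued for a nearby edge node. The sketch below is a hypothetical illustration of the idea, with invented task names and thresholds, not a real automotive stack.

```python
# Hypothetical sketch of latency-based task partitioning between the
# vehicle's onboard computer and a nearby edge node.
from dataclasses import dataclass

@dataclass
class SensorTask:
    name: str
    max_latency_ms: float  # deadline the task must meet

def route(task: SensorTask, edge_round_trip_ms: float = 20.0) -> str:
    """Decide where a task should run; the threshold is illustrative."""
    if task.max_latency_ms <= edge_round_trip_ms:
        return "onboard"  # e.g., braking: cannot wait for the network
    return "edge"         # e.g., map refreshes: can tolerate delay

print(route(SensorTask("emergency_braking", 5.0)))  # onboard
print(route(SensorTask("hd_map_refresh", 500.0)))   # edge
```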

Given the critical importance of crash avoidance capabilities to the future success of autonomous driving, companies in the sector just may have to navigate their way around these obstacles.
