Autonomous Vehicle Hopes Hinge on Crash Avoidance Technology
Camera-based computer vision systems, as the name implies, rely on a real-world view of the driving environment derived from high-resolution cameras mounted at multiple points around a vehicle.
These technologies are often positioned, in media reports and by some of the companies that work with them, as rivals to one another, especially in the case of lidar vs. camera-based computer vision.
For example, proponents of lidar extol the technology’s ability to build a comprehensive view of the driving environment within 150 to 200 meters of a vehicle, as well as its ability to operate in a variety of ambient lighting conditions. They also point out that camera-based crash avoidance systems in most cases offer only a two-dimensional view, and may not be able to see low-lying obstacles if the cameras are mounted on a vehicle’s roof.
According to a white paper from lidar technology company Velodyne and research firm Frost & Sullivan, lidar provides “a 360-degree horizontal field of view, and up to 40-degree vertical field of view, allowing the vehicle to generate dense, high-resolution 3D maps of the environment up to 10 to 20 times per second.”
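To put those refresh rates in driving terms, the short sketch below converts them into the distance the scene shifts between successive scans. The 10 to 20 Hz figures come from the white paper; the 30 meter-per-second (roughly 67 mph) closing speed is an illustrative assumption, not a number from Velodyne or Frost & Sullivan.

```python
# How far the driving scene advances between successive lidar scans,
# using the 10-20 Hz refresh rates quoted in the white paper.
# The 30 m/s (~67 mph) closing speed is an illustrative assumption.

closing_speed_mps = 30.0  # assumed highway-speed closing rate

for scans_per_second in (10, 20):
    gap_m = closing_speed_mps / scans_per_second
    print(f"At {scans_per_second} Hz, the scene advances ~{gap_m:.1f} m per scan")

# At 10 Hz, the scene advances ~3.0 m per scan
# At 20 Hz, the scene advances ~1.5 m per scan
```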
A Velodyne spokesman further noted in an email to IoT World Today that lidar “has inherent advantages over camera and radio wave-based radar, such as the ability to see the world in real-time 3D at range despite low ambient lighting.” He later added that “lidar delivers a core set of perception data that provides real-time free space detection in all light conditions. It represents a significant opportunity for vehicle manufacturers to improve roadway safety.”
Companies working with camera-based computer vision technology argue their systems have been effective in a variety of lighting conditions and have succeeded in weather conditions they say could limit the effectiveness of lidar.
For example, Vivian Sun, head of marketing and business development for TuSimple, a company creating autonomous systems and software for self-driving trucks, said a test of its camera-based system with the U.S. Postal Service involved “1,200 hours of night driving,” and phases in which vehicles were driven autonomously for as long as 22 hours in a single stretch, “so you go through all light conditions.”
She also said TuSimple conducted a test in Tucson, Arizona, that demonstrated the ability of its technology to outperform lidar on specific navigational functions, as well as when weather is a factor. “We did a test in Tucson in the rain where the truck was taking an unprotected left turn,” she said. “Lidar may be able to see 150 meters, but if traffic is coming at least 45 miles per hour you need to be able to see more than 200 meters to know what’s coming. Our computer vision technology can see further and react faster, giving the vehicle more time to react and do better path planning.”
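Sun’s numbers amount to a time budget rather than a raw distance. The sketch below works out roughly how much warning each sensing range buys against 45-mph traffic; the speed and the 150- and 200-meter figures come from her quote, while the roughly 10-second turn duration is an illustrative assumption.

```python
# Rough check of the sight-distance arithmetic in the quote above.
# The 45 mph oncoming speed and 150 m / 200 m ranges come from the quote;
# the ~10 s needed for a truck to clear an unprotected left is assumed.

MPH_TO_MPS = 1609.344 / 3600.0  # meters per second per mile per hour

oncoming_mps = 45 * MPH_TO_MPS  # ~20.1 m/s

for sight_range_m in (150, 200):
    warning_s = sight_range_m / oncoming_mps
    print(f"{sight_range_m} m of range gives ~{warning_s:.1f} s of warning")

# 150 m of range gives ~7.5 s of warning
# 200 m of range gives ~9.9 s of warning
# If a loaded truck needs on the order of 10 s to clear the intersection
# (assumed), 150 m of sensing range leaves little or no margin.
```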
Companies using camera-based approaches also argue their systems are better at reading the content of roadway signs and signals. In addition, they say some lidar systems are not automotive-grade and, as a result, may not withstand the miles of driving autonomous systems must log before they can even be considered for commercial deployment.