SENSORS

Q&A
Arvind Srivastav, software engineer, Zoox

Are you seeing a convergence toward a single type of sensor dominating the AV stack, or will there be redundancy where you need to combine multiple sensors for better visibility/safety?

I'm continuing to see the use of a mix of sensors in AVs – cameras, lidars and radars – to cover each other's blind spots and ensure the highest safety. Cameras are great for seeing things like traffic lights and reading signs, but even with state-of-the-art AI models, cameras are not so good at measuring distances, especially in challenging weather. That's where lidars come in, providing precise 3D images and depth perception, though they too struggle in poor weather conditions. Radars fill in these gaps, offering reliable measurements of speed and distance no matter the weather. In addition, the emergence of cross-modality transformer-based AI models is enabling the AV industry to fully leverage the capabilities of all sensor data to achieve robust perception. So, to ensure the highest safety, a complement of sensors remains the recommended choice.

What challenges do you see in current sensors, and where has most progress been made?

In recent years I've seen significant progress in overcoming challenges related to sensor technology in AVs, particularly with radar and lidar sensors. The advancement from 3D to 4D radars has notably enhanced resolution and added elevation measurements, improving radar-based perception. Lidar sensors, traditionally expensive, have become more affordable through continuous cost-reduction efforts. Innovations such as FMCW (frequency-modulated continuous wave) lidars, which measure velocity directly, and the development of solid-state lidars, which use optical phased arrays instead of moving parts, have been key in making lidar sensors cheaper and more reliable. The shift toward solid-state lidars highlights the industry's move toward more robust, cost-effective sensor solutions that address previous concerns around durability, reliability and cost. As the technology evolves, the focus remains on refining sensor performance and integration to meet AV requirements.

Do you have any overarching predictions on how the AV sensor industry will evolve?

In my opinion, the AV sensor industry will evolve distinctly based on the level of autonomy targeted. For fully autonomous vehicles, a comprehensive array of sensors, including FMCW lidars, 4D radars and cameras, will continue to be used around the vehicle to ensure 360-degree environmental perception. Meanwhile, traditional car companies aiming for L2 to L3 autonomy will likely focus on integrating a suite of cameras, a few 4D radars and a solid-state lidar at the vehicle's front, so that they can support ADAS with some degree of automation while still requiring human oversight.

[Image: Zoox vehicles feature lidars, radars, visual cameras and longwave-infrared cameras]

You can read more from Arvind on radar at autonomousvehicleinternational.com (search 'Feature: Zoox – radars in autonomy').

"Our view is that with low-level sensor fusion, what you're really doing is using one sensor to compensate for the weakness of another," Galves says. This could lead to unexpected failures. For example, in a foggy environment the camera system's accuracy drops and the perception system relies on the radar, which is not affected by fog. But this means that in foggy situations you are relying solely on the radar, and if the radar fails, you have no backup.

"The idea is you need to create a camera system that works in that environment. And then you would also have a radar and lidar system that works in that environment. And if one of them fails, then you're still okay because the other one is not failing," Galves says. "When you mix the sensor data at a low level, it's not really redundant. It's more that you're using one sensor for a particular task and using another sensor for another task. So you never have that true redundancy in the system."
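To make the distinction concrete, here is a minimal Python sketch of the two architectures Galves contrasts. It is purely illustrative: the function names are hypothetical, and the simple range readings stand in for full perception pipelines (real low-level fusion operates on raw returns and learned features, not finished estimates).

```python
# Minimal sketch (hypothetical names, not any vendor's code) of the two
# architectures: low-level fusion produces ONE blended estimate, so a
# degraded sensor is silently compensated for and nothing independent
# remains as a backup; a redundant design keeps per-sensor pipelines
# separate and can report when its safety margin has been lost.
from statistics import median

def low_level_fusion(camera_m, radar_m, lidar_m):
    """Blend whatever raw readings survive into a single range estimate.
    In fog the camera drops out and the answer quietly becomes
    radar-only: if the radar then fails too, there is no backup."""
    readings = [r for r in (camera_m, radar_m, lidar_m) if r is not None]
    if not readings:
        raise RuntimeError("total perception failure")
    return sum(readings) / len(readings)  # one answer, one failure domain

def redundant_pipelines(camera_m, radar_m, lidar_m):
    """Treat each modality as an independent, fully working pipeline.
    Each produces its own estimate; an arbiter takes the median and,
    crucially, can tell the planner when redundancy has been lost."""
    alive = {name: r for name, r in
             [("camera", camera_m), ("radar", radar_m), ("lidar", lidar_m)]
             if r is not None}
    status = "redundant" if len(alive) >= 2 else "degraded: no backup left"
    estimate = median(alive.values()) if alive else None
    return estimate, status

# In fog, with the radar also failed:
est, status = redundant_pipelines(camera_m=None, radar_m=None, lidar_m=42.0)
# est == 42.0, status == "degraded: no backup left" -- the system still
# has an answer AND knows it must now behave more conservatively.
```

The point of the sketch is only the failure-domain difference: a blended estimate has no independent witness, whereas parallel pipelines can vouch for, and survive the loss of, one another.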
What's next for AV sensors?

There is much to look forward to in the coming months and years. One interesting development is solid-state lidars, which might replace the more traditional mechanical lidars. Because they have no moving parts, they eliminate the complex, high-frequency rotating mechanisms of classic lidars. This simplification can make lidars more durable against environmental conditions, reduce costs and make the technology more accessible to automotive companies.

Another direction of research is frequency-modulated continuous wave (FMCW) lidars, which could replace classic time-of-flight (TOF) technology. Advances in FMCW lidars can help improve the accuracy of object detection and reduce sensitivity to sunlight and other factors that disrupt classic lidars. (A rough worked illustration of the FMCW principle appears at the end of this feature.)

The development of sensor intelligence is also worth watching. Today, all AV perception stacks use machine learning to process sensor data for object detection and trajectory tracking. However, there are different opinions about where the algorithms should reside. Advances in machine learning and AI hardware are making it possible to perform AI functions directly on sensors as opposed to a centralized compute unit. Each approach has its advantages and trade-offs, and the industry might converge on a single path or mature in several directions.

"It will be interesting to see if intelligence is distributed across all the different sensors or just centralized in one box," Beiker comments.

But most importantly, it is worth watching what unexpected trends emerge, how hardware and software continue to evolve, and how the industry adapts.

"Some people say that software is very difficult. But I would say hardware is even harder – like, really hard," says Jeremy Cohen, founder and CEO at Think Autonomous. "When sensor manufacturers are building their lidar version 1, they start thinking about versions 3 and 4. They are actually designing the next versions that are going to be sold in 2026-2027. It's impressive to see entrepreneurs thinking at that scale."
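Postscript: the 'What's next' section above contrasts FMCW lidar with classic time-of-flight ranging. The sketch below is a rough illustration, under textbook assumptions (triangular chirp, single target, one assumed sign convention for Doppler), of why the FMCW approach yields a direct velocity measurement. All names and numbers in it are hypothetical, not drawn from any product mentioned in this article.

```python
# Rough, textbook-level illustration of why FMCW recovers range AND
# radial velocity in one shot, while classic time-of-flight (TOF)
# gives range only. Function names and numbers are hypothetical.

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s: float) -> float:
    """Classic TOF: range from the pulse's round-trip time, R = c*t/2."""
    return C * round_trip_s / 2.0

def fmcw_range_velocity(f_up_hz, f_down_hz, bandwidth_hz, sweep_s,
                        wavelength_m):
    """Triangular-chirp FMCW: Doppler shifts the up-sweep and down-sweep
    beat frequencies in opposite directions, so their average isolates
    range and half their difference isolates radial velocity."""
    f_range = (f_up_hz + f_down_hz) / 2.0    # range-proportional beat
    f_doppler = (f_down_hz - f_up_hz) / 2.0  # Doppler component
    rng = C * sweep_s * f_range / (2.0 * bandwidth_hz)  # R = c*T*fR/(2B)
    vel = wavelength_m * f_doppler / 2.0     # v = lambda*fD/2 (closing > 0)
    return rng, vel

# Hypothetical 1550nm unit, 1GHz chirp over 10us, observing beat
# frequencies of ~47.3MHz (up-sweep) and ~86.0MHz (down-sweep):
rng, vel = fmcw_range_velocity(47.3e6, 86.0e6, 1e9, 10e-6, 1550e-9)
# -> roughly 100m range and 15m/s closing speed from a single measurement
```

This up/down-sweep arithmetic is why FMCW sensors are described as measuring velocity directly: no differentiation of successive range measurements is needed, unlike with TOF.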