Deep Learning for Robust Vision in Realtime Autonomous Driving

Abstract

The concept of robust vision is explored as a means of improving the performance and safety of autonomous vehicles. This research is relevant both to the University of Toronto’s self-driving car team, aUToronto, and to manufacturers of autonomous road vehicles, who have faced criticism after vehicle failures resulted in injuries and fatalities. The requirements of a robust vision system are identified; chief among them is uncertainty quantification, so this field is introduced and its applications to vision are explored. With this foundation, the most commonly used computer vision algorithms are evaluated for robustness. Experiments are then performed on autonomous driving tasks using one of the most robust algorithms identified, Bayesian Neural Networks, to demonstrate the advantages of uncertainty quantification. Since computational cost is a major factor limiting the adoption of robust vision systems in autonomous driving, FPGAs are proposed as a means of eliminating the relative computational disadvantage of Bayesian Neural Networks compared with today's most popular models. If future tests validate this proposal, it may pave the way for autonomous vehicle manufacturers to adopt more robust vision systems.
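
To make the idea of uncertainty quantification concrete, the sketch below uses Monte Carlo dropout, a widely used approximation to Bayesian Neural Network inference, on a toy image classifier in PyTorch. It is an illustrative example only: the network architecture, dropout rate, and number of samples are hypothetical and are not taken from the models or experiments described in this work.

```python
# Illustrative sketch only: Monte Carlo dropout as a cheap approximation to
# Bayesian Neural Network inference on a toy image classifier. The layer
# sizes, dropout rate, and sample count are hypothetical, not from this work.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.25),        # dropout is kept active at test time
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, samples: int = 30):
    """Run several stochastic forward passes and return the mean class
    probabilities plus the predictive entropy as an uncertainty score."""
    model.train()  # keep dropout on so each pass samples a different subnetwork
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(samples)]
    )                                    # shape: (samples, batch, classes)
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    model = SmallClassifier()
    images = torch.randn(4, 3, 32, 32)   # stand-in for camera frames
    mean_probs, entropy = predict_with_uncertainty(model, images)
    print(mean_probs.argmax(dim=-1), entropy)
```

Averaging many stochastic forward passes yields both a class prediction and an entropy-based uncertainty score, the kind of signal a robust perception stack can use to flag low-confidence detections for more conservative behaviour.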

Stewart Jamieson