Promoting CAV Deployment by Enhancing the Perception Phase of the Autonomous Driving Using Explainable AI

Samuel Labi
Sikai Chen

Principal Investigator(s):

Samuel Labi, Professor of Civil Engineering – Purdue University
Director – NEXTRANS
Associate Director – Center for Connected and Automated Transportation (CCAT)
Sikai Chen, Post-Doctoral Researcher – Center for Connected and Automated Transportation
Post-Doctoral Researcher – NEXTRANS Center at Purdue University
Visiting Research Fellow – Robotics Institute, School of Computer Science, Carnegie Mellon University

Project Abstract:
The perception phase, the weak link in the driving task, has been identified as the key cause of most autonomous vehicle (AV) accidents. This has been attributed to the relative infancy of computer vision (CV), the key technology in perception. Deep learning (DL) approaches have been used widely in computer vision applications, from object detection to semantic understanding, but they are generally considered black boxes because of their lack of interpretability. This opacity exacerbates user distrust and hinders their deployment in autonomous driving. It has been argued that explainable AI (XAI), an emerging concept in contemporary computer science in which model outputs can be understood by humans, offers an opportunity to address this issue. This research project is therefore developing an explainable end-to-end autonomous driving system as an improvement on existing autonomous driving systems. To do this, the team is using a state-of-the-art self-attention-based model that generates driving actions with corresponding explanations, using visual features extracted from onboard camera images. The model imitates human peripheral vision by performing soft attention over the images’ global features.
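The abstract's core mechanism, soft attention over global image features that yields both a driving action and an inspectable explanation, can be illustrated with a minimal sketch. This is not the project's actual model: the function name, feature shapes, and weight matrices below are hypothetical placeholders, and a real system would learn the weights and use a full self-attention architecture rather than a single attention head.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention_policy(features, w_query, w_action):
    """Hypothetical soft-attention driving policy (illustrative only).

    features : (N, D) array of N global region features from one camera frame
    w_query  : (D,)   scoring weights (would be learned in practice)
    w_action : (D, A) action head mapping the context to A control outputs

    Returns the action vector and the attention weights; the weights sum
    to 1 and indicate which image regions drove the decision, which is
    what makes the output explainable to a human.
    """
    scores = features @ w_query    # (N,) relevance score per region
    alpha = softmax(scores)        # (N,) soft attention weights
    context = alpha @ features     # (D,) attention-weighted feature summary
    action = context @ w_action    # (A,) e.g. [steering, throttle]
    return action, alpha
```

Because `alpha` is a proper probability distribution over image regions, it can be rendered as a heat map over the camera frame alongside the chosen action, giving the human-understandable explanation the project targets.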

Institution(s): Purdue University

Award Year: 2022

Research Thrust(s): Control & Operations, Enabling Technology, Modeling & Implementation

Project Form(s):