The VI AMMCS International Conference
Waterloo, Ontario, Canada | August 14-18, 2023
AMMCS 2023 Plenary Talk
Explainable AI via Semantic Information Pursuit
René Vidal (Johns Hopkins University, Baltimore)
There is significant interest in developing ML algorithms whose final predictions can be explained in terms understandable to a human. Providing such an "explanation" of the reasoning process in domain-specific terms can be crucial for the adoption of ML algorithms in risk-sensitive domains such as healthcare. This has motivated a number of approaches that seek to provide explanations for existing ML algorithms in a post-hoc manner. However, many of these approaches have been widely criticized for a variety of reasons, and no clear methodology exists in the field for developing ML algorithms whose predictions are readily understandable by humans. To address this challenge, we develop a method for constructing high-performance ML algorithms that are "explainable by design". Namely, our method makes its prediction by asking a sequence of domain- and task-specific yes/no queries about the data (akin to the game "20 questions"), each having a clear interpretation to the end user. We then minimize the expected number of queries needed for accurate prediction on any given input. This yields a prediction process that is human-interpretable by construction, since the questions that form the basis for the prediction are specified by the user as interpretable concepts about the data. Experiments on vision and NLP tasks demonstrate the efficacy of our approach and its superiority over post-hoc explanations. Joint work with Aditya Chattopadhyay, Stewart Slocum, Benjamin Haeffele, and Donald Geman.
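As a rough illustration of the query-selection idea described in the abstract, the Python sketch below greedily asks the interpretable yes/no query ("concept") whose answer is expected to reduce uncertainty about the label the most, repeating until the prediction is confident. The concept names, the synthetic data, the 0.95 stopping threshold, and the count-based probability estimates are assumptions made for this sketch only; the talk's method instead learns the querying strategy, rather than estimating it from empirical counts as done here.

import numpy as np

# Toy "20 questions" classifier: greedily ask the interpretable yes/no query
# whose answer is expected to shrink uncertainty about the label the most.
# Concept names, the synthetic data, and the stopping threshold are
# illustrative assumptions, not taken from the talk.

rng = np.random.default_rng(0)
concepts = ["has_wings", "lays_eggs", "has_fur", "barks"]
X = rng.integers(0, 2, size=(500, len(concepts)))   # synthetic yes/no concept answers
y = (X[:, 0] & X[:, 1]).astype(int)                 # synthetic label, e.g. "is a bird"

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def posterior(answered):
    """Empirical p(label | answers collected so far)."""
    mask = np.ones(len(y), dtype=bool)
    for q, a in answered.items():
        mask &= X[:, q] == a
    sub = y[mask] if mask.any() else y               # fall back to the prior
    return np.bincount(sub, minlength=2) / len(sub)

def next_query(answered):
    """Greedy information-pursuit step: pick the unanswered query with the
    largest expected reduction in label entropy."""
    hist = np.ones(len(y), dtype=bool)
    for q, a in answered.items():
        hist &= X[:, q] == a
    h_now, best_q, best_gain = entropy(posterior(answered)), None, -1.0
    for q in range(len(concepts)):
        if q in answered:
            continue
        expected_h = 0.0
        for a in (0, 1):
            p_a = (X[hist, q] == a).mean() if hist.any() else 0.5
            expected_h += p_a * entropy(posterior({**answered, q: a}))
        if h_now - expected_h > best_gain:
            best_q, best_gain = q, h_now - expected_h
    return best_q

# Classify one instance by asking queries until the prediction is confident.
truth = {0: 1, 1: 1, 2: 0, 3: 0}                     # this instance's true answers
answered = {}
while posterior(answered).max() < 0.95 and len(answered) < len(concepts):
    q = next_query(answered)
    answered[q] = truth[q]
    print(f"asked {concepts[q]!r} -> {answered[q]}, p(label) = {posterior(answered)}")

Under the deterministic label rule used in this toy example, the loop stops after asking about has_wings and lays_eggs, producing exactly the kind of short, human-readable chain of evidence for the prediction that the abstract describes.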
René Vidal is the Rachleff Penn Integrates Knowledge University Professor in the Departments of Electrical and Systems Engineering and Radiology and the inaugural Director of the Institute for Data Engineering and Science (IDEAS) at the University of Pennsylvania. He is also an Amazon Scholar, a Chief Scientist at NORCE, Associate Editor-in-Chief of TPAMI, and the director of the NSF-Simons Collaboration on the Mathematical Foundations of Deep Learning and the NSF TRIPODS Institute on the Foundations of Graph and Deep Learning.
His current research focuses on the foundations of deep learning and its applications in computer vision and biomedical data science. He is an ACM Fellow, AIMBE Fellow, IEEE Fellow, IAPR Fellow, and Sloan Fellow, and has received numerous awards for his work, including the IEEE Edward J. McCluskey Technical Achievement Award, D'Alembert Faculty Award, J.K. Aggarwal Prize, ONR Young Investigator Award, and NSF CAREER Award, as well as best paper awards in machine learning, computer vision, controls, and medical robotics.