XAI: Can Artificial Intelligence be Made Intelligible?

IIEA, 11th December 2018
In his address to the IIEA, Mr. Gunning discussed his and DARPA's ongoing work to design 'explainable artificial intelligence' (XAI) systems, which can articulate the reasoning behind the decisions they make. The United States Department of Defense considers the design of XAI to be essential to the future successful management of artificially intelligent machines.

Recent success in the field of Artificial Intelligence (AI) has resulted in the growing use of systems capable of autonomously deciding on courses of action in areas such as transportation, finance and medicine. The effectiveness of these systems is limited, however, by their current inability to explain the reasoning behind their decisions to human users, often referred to as the 'black-box problem'. Through the Defense Advanced Research Projects Agency (DARPA), the United States Department of Defense (DoD) is attempting to address this by designing 'explainable AI' (XAI) systems, which it considers to be essential to the future success of artificial intelligence.
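To make the black-box problem concrete, one widely used family of XAI techniques builds a simple, interpretable approximation of an opaque model around a single prediction. The sketch below is illustrative only, not DARPA's approach: the model, feature names and numbers are all invented for the example, and the "explanation" is just the model's local sensitivity to each input, estimated by finite differences.

```python
# Minimal sketch of local, post-hoc explanation of a black-box model.
# Everything here (the model, the features, the values) is a made-up example.

def black_box(features):
    """Stand-in for an opaque model: maps three inputs to a risk score."""
    age, income, debt_ratio = features
    return 0.02 * age - 0.00001 * income + 0.5 * debt_ratio

def local_attribution(model, point, eps=1e-4):
    """Estimate each feature's local influence on the model's output
    by nudging one input at a time and measuring the change."""
    base = model(point)
    attributions = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        attributions.append((model(bumped) - base) / eps)
    return attributions

# "Why did the model score this applicant the way it did?"
scores = local_attribution(black_box, [40.0, 50000.0, 0.3])
```

Each returned score says how strongly the model's output responds to that feature near this specific input, which can be rendered to a user as a human-readable reason ("debt ratio pushed the score up most"). Real XAI systems go well beyond this, but the goal is the same: turn an opaque decision into an articulable one.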

About the Speaker:

David Gunning is a programme manager in the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA). He has over 30 years of experience in the development of artificial intelligence (AI) technology, and manages the Explainable AI (XAI) and Communicating with Computers (CwC) programmes. This is Mr. Gunning's third tour at DARPA. Previously, he managed the Personalized Assistant that Learns (PAL) project that produced Siri. He holds an M.S. in Computer Science from Stanford University, an M.S. in Cognitive Psychology from the University of Dayton, and a B.S. in Psychology from Otterbein College.