Getting Playful with Explainable AI: Games with a Purpose to Improve Human Understanding of AI
Published at CHI 2020 | Honolulu, HI
Abstract
Explainable Artificial Intelligence (XAI) is an emerging topic in Machine
Learning (ML) that aims to give humans visibility into how AI systems make
decisions. XAI is increasingly important in bringing transparency to fields such
as medicine and criminal justice where AI informs high consequence decisions.
While many XAI techniques have been proposed, few have been evaluated beyond
anecdotal evidence. Our research offers a novel approach to assessing how humans
interpret AI explanations: integrating XAI with Games with a Purpose (GWAP). XAI
requires human evaluation at scale, and GWAP designs can deliver XAI tasks
through rounds of play. This paper outlines the benefits of GWAP for XAI and
demonstrates the approach through a multi-player GWAP we created that focuses on
explaining deep learning models trained for image recognition. Through our game,
we seek to understand how humans select and interpret explanations used in image
recognition systems, and to provide empirical evidence on the validity of GWAP
designs for XAI.