Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization

Emily Wall, Laura Matzen, Mennatallah El-Assady, Peta Masters, Helia Hosseinpour, Alex Endert, Rita Borgo, Polo Chau, Harald Schupp, Lace Padilla
Published at PacificVis | Tokyo, Japan 2024
Abstract

Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with “knobs”. Turning a knob too far in one direction may result in under-trust; turning it too far in the other, in over-trust; and turning it further still, in a confusing distortion. While these designs, or so-called “knobs”, are not inherently evil, they can be misused or deployed in an adversarial context, and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as “trust junk.” From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or “evil,” and distill our findings into observed pitfalls for the responsible design of human-AI systems.