Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization
Emily Wall
Laura Matzen
Mennatallah El-Assady
Peta Masters
Helia Hosseinpour
Alex Endert
Rita Borgo
Polo Chau
Harald Schupp
Lace Padilla
Published at PacificVis 2024 | Tokyo, Japan
Abstract
Many papers claim that specific visualization techniques enhance or calibrate
trust in AI systems. Yet a design choice that enhances trust in some cases
appears to damage it in others. In this paper, we explore this inherent duality
through an analogy with "knobs": turning a knob too far in one direction may
result in under-trust; too far in the other, in over-trust; or, turned up
further still, in a confusing distortion. While these designs, or so-called
"knobs," are not inherently evil, they can be misused or deployed in an
adversarial context and thereby manipulated to mislead users or to promote
unwarranted levels of trust in AI systems. When a visualization with no
meaningful connection to the underlying model or data is employed to enhance
trust, we refer to the result as “trust junk.” From a review of 65 papers, we
identify nine commonly made claims about trust calibration. We synthesize them
into a framework of knobs that can be used for good or “evil,” and distill our
findings into observed pitfalls for the responsible design of human-AI systems.