Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning
Published at CHI 2023 | Hamburg, Germany
Abstract
Machine learning models with high accuracy on test data can still produce
systematic failures, such as harmful biases and safety issues, when deployed in
the real world. To detect and mitigate such failures, practitioners run
behavioral evaluations of their models, checking model outputs for specific types
of inputs. Behavioral evaluation is important but challenging, requiring
practitioners to discover real-world patterns and validate systematic failures.
We conducted 18 semi-structured interviews with ML practitioners to better
understand the challenges of behavioral evaluation and found that it is a
collaborative, use-case-first process that is not adequately supported by
existing task- and domain-specific tools. Using these findings, we designed
Zeno, a general-purpose framework for visualizing and testing AI systems across
diverse use cases. In four case studies in which participants used Zeno on
real-world models, we found that practitioners were able to reproduce previous
manual analyses and discover new systematic failures.