Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values
Abstract
Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop GAM Changer, the first interactive system to help domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action, empowering users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link: https://interpret.ml/gam-changer.
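Term-level editing is possible because GAMs are additive by construction: a GAM models the (link-transformed) expected outcome as a sum of per-feature shape functions, g(E[y]) = b0 + f1(x1) + f2(x2) + ... + fk(xk), so reshaping one function changes the model in a localized, transparent way. The sketch below trains such a GAM with InterpretML's Explainable Boosting Machine (the model family GAM Changer edits) and prints one learned shape function. The commented-out gamchanger import and visualize() call at the end are assumptions about the companion notebook package, not a confirmed API; see the demo link above for the actual editing interface.

# A minimal sketch: train a GAM whose per-feature shape functions a tool
# like GAM Changer can inspect and edit. Requires `pip install interpret`.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Each feature gets its own shape function f_i(x_i); interactions=0 keeps
# the model purely additive.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ebm = ExplainableBoostingClassifier(interactions=0)
ebm.fit(X_train, y_train)

# Inspect one learned shape function: these per-bin contribution scores are
# what a model editor lets a domain expert reshape (e.g., flattening a
# medically implausible dip in a risk curve).
global_explanation = ebm.explain_global()
term = global_explanation.data(0)  # first feature's shape function
print(term["names"][:5])   # first few bin edges / categories
print(term["scores"][:5])  # their additive contributions to the prediction

# Hypothetical notebook usage of the paper's companion package (the module
# name and visualize() signature are assumptions):
# import gamchanger as gc
# gc.visualize(ebm, X_test, y_test)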
Citation
Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, and Rich Caruana. 2022. Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22).
@inproceedings{wangInterpretabilityThenWhat2022,
  title = {Interpretability, {{Then What}}? {{Editing Machine Learning Models}} to {{Reflect Human Knowledge}} and {{Values}}},
  booktitle = {Proceedings of the 28th {{ACM SIGKDD Conference}} on {{Knowledge Discovery}} and {{Data Mining}}},
  author = {Wang, Zijie J. and Kale, Alex and Nori, Harsha and Stella, Peter and Nunnally, Mark E. and Chau, Duen Horng and Vorvoreanu, Mihaela and Wortman Vaughan, Jennifer and Caruana, Rich},
  year = {2022},
  series = {{{KDD}} '22}
}