LLM Attributor: Interactive Visual Attribution for LLM Generation

Figure: LLM Attributor enables LLM developers to visualize the training data attribution of their models in computational notebooks. In this example, our user Megan is surprised that an LLM fine-tuned on a disaster dataset occasionally generates dry weather as the cause of the 2023 Hawaii wildfires, while often yielding directed-energy weapons, echoing a conspiracy theory. (A) Tokens being attributed, interactively selected by Megan, are displayed side by side for visual comparison. (B) Training data points with the highest attribution scores are presented as a list by default and can be interactively expanded to the full source text, revealing that the data point most responsible for generating directed-energy weapons is an X post spreading the conspiracy theory. (C) The Keyword Summary shows important words in the displayed training data. (D) The Score Distribution over the entire training data is visualized as a histogram, enabling both high-level comparison across the whole dataset and low-level analysis of individual data points. Below, the training data points with the lowest attribution scores are visualized in the same way.
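To build intuition for the attribution scores shown in the figure, here is a minimal, self-contained TracIn-style sketch: a training point is scored as influential for a generated span when its loss gradient aligns with the gradient of that span's loss. This is an illustration only, not LLM Attributor's actual implementation (the library uses a more sophisticated estimator), and the toy linear model stands in for a fine-tuned LLM.

```python
# Toy gradient-dot-product attribution (TracIn-style), for intuition only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A single linear layer standing in for a fine-tuned LLM.
model = nn.Linear(8, 2)
loss_fn = nn.CrossEntropyLoss()

def loss_grad(x, y):
    """Flattened gradient of the loss at one (input, label) example."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

# Placeholder training set and one "generated" target example.
train_x = torch.randn(16, 8)
train_y = torch.randint(0, 2, (16,))
test_x, test_y = torch.randn(8), torch.tensor(1)

test_grad = loss_grad(test_x, test_y)

# Attribution score of each training point: gradient dot product
# with the target example's gradient.
scores = torch.stack([loss_grad(x, y) @ test_grad
                      for x, y in zip(train_x, train_y)])

# Most positive scores ~ most responsible; most negative ~ most opposing,
# mirroring the highest- and lowest-score lists in the interface.
print("top-3 positive:", scores.topk(3).indices.tolist())
print("top-3 negative:", (-scores).topk(3).indices.tolist())
```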
Demo Video: https://youtu.be/mIG2MDQKQxM
Abstract
While large language models (LLMs) have shown remarkable capability to generate convincing text across diverse domains, concerns around their potential risks have highlighted the importance of understanding the rationale behind text generation. We present LLM Attributor, a Python library that provides interactive visualizations for training data attribution of an LLM's text generation. Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text. We describe the visual and interactive design of our tool and highlight usage scenarios for LLaMA2 models fine-tuned with two different datasets: online articles about recent disasters and finance-related question-answer pairs. Thanks to LLM Attributor's broad support for computational notebooks, users can easily integrate it into their workflow to interactively visualize attributions of their models. For easier access and extensibility, we open-source LLM Attributor at https://github.com/poloclub/LLM-Attribution. The video demo is available at https://youtu.be/mIG2MDQKQxM.
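The sketch below suggests how such a notebook workflow might look. It is hypothetical: the import path, class name, constructor parameters, and method names are all assumptions for illustration, and the actual API is documented in the repository README (https://github.com/poloclub/LLM-Attribution).

```python
# Hypothetical notebook usage sketch -- names below are assumptions,
# not LLM Attributor's documented API.
from llm_attributor import LLMAttributor  # assumed import path

# Placeholder fine-tuning dataset (assumed format).
train_dataset = [
    "Article 1: coverage of the 2023 Hawaii wildfires ...",
    "Article 2: coverage of another recent disaster ...",
]

attributor = LLMAttributor(
    model_dir="path/to/finetuned-llama2",  # hypothetical parameter
    train_dataset=train_dataset,           # hypothetical parameter
)

# In a notebook cell: generate text, then render the interactive
# attribution view inline (methods are illustrative).
output = attributor.generate("What caused the 2023 Hawaii wildfires?")
attributor.visualize(output)
```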
Citation
@article{leeLLMAttributorInteractive2024,
  title = {{{LLM Attributor}}: {{Interactive Visual Attribution}} for {{LLM Generation}}},
  shorttitle = {{{LLM Attributor}}},
  author = {Lee, Seongmin and Wang, Zijie J. and Chakravarthy, Aishwarya and Helbling, Alec and Peng, ShengYun and Phute, Mansi and Chau, Duen Horng and Kahng, Minsuk},
  year = {2024},
  journal = {arXiv preprint arXiv:2404.01361},
  eprint = {2404.01361},
  archiveprefix = {arXiv},
  url = {http://arxiv.org/abs/2404.01361}
}