Farsight: Fostering Responsible AI Awareness During AI Application Prototyping

Teaser figure:
With in situ interfaces and novel techniques, Farsight empowers AI prototypers to envision potential harms that may arise from their large language model (LLM)-powered AI applications during early prototyping. (A) In this example, an AI prototyper is writing a prompt for an English-to-French translator in a web-based AI prototyping tool. (B) The Alert Symbol from Farsight warns the user of potential risks associated with their AI application. (C) Clicking the symbol expands the Awareness Sidebar, which highlights news articles relevant to the user's prompt (top) and LLM-generated potential use cases and harms (bottom). (D) Clicking the blue button opens the Harm Envisioner, which allows the user to interactively envision, assess, and reflect on the potential use cases, stakeholders, and harms of their AI application with the assistance of an LLM.
Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://pair-code.github.io/farsight.
@inproceedings{wang2024farsight,
  title = {Farsight: {{Fostering Responsible AI Awareness During AI Application Prototyping}}},
  booktitle = {{{CHI Conference}} on {{Human Factors}} in {{Computing Systems}}},
  author = {Wang, Zijie J. and Kulkarni, Chinmay and Wilcox, Lauren and Terry, Michael and Madaio, Michael},
  year = {2024}
}