DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors

Figure: The DetectorDetective interface. (A) The Image Viewer allows users to select an input image and presents the selected image alongside its attacked version. (B) In the Module Viewer, users can select an internal module of the object detector to investigate; the Module Viewer highlights the selected module in the architecture diagram. (C) The Internal Viewer visualizes the feature maps extracted by the selected module for the benign and adversarial images, enabling side-by-side comparison of how the module responds differently to each. (D) The Explanation Viewer provides visual explanations of which parts of the benign and adversarial images drive the model's outcome, also enabling side-by-side comparison of the model's predictions on benign and adversarial inputs.
Demo Video
With deep-learning-based systems performing exceedingly well in many vision-related tasks, a major concern with their widespread deployment, especially in safety-critical applications, is their susceptibility to adversarial attacks. We propose DetectorDetective, an interactive visual tool that aims to help users better understand the behavior of a model as adversarial images journey through an object detector. DetectorDetective enables users to easily learn how the three key modules of the Faster R-CNN object detector (Feature Pyramid Network, Region Proposal Network, and Region of Interest Head) respond to a user-selected benign image and its adversarial version. Visualizations of the progressive changes in the intermediate features across these modules help users gain insights into the impact of adversarial attacks and perform side-by-side comparisons between the benign and adversarial responses. Furthermore, DetectorDetective displays saliency maps for the input images to comparatively highlight the image regions that contribute to attack success. DetectorDetective complements adversarial machine learning research on object detection by providing a user-friendly interactive tool for inspecting and understanding model responses. DetectorDetective is available at the following public demo link: A video demo is available at
@inproceedings{vellaichamy2022detectordetective,
  title = {{{DetectorDetective}}: {{Investigating}} the Effects of Adversarial Examples on Object Detectors},
  booktitle = {Proceedings of the {{IEEE}}/{{CVF}} Conference on Computer Vision and Pattern Recognition ({{CVPR}})},
  author = {Vellaichamy, Sivapriya and Hull, Matthew and Wang, Zijie J. and Das, Nilaksh and Peng, ShengYun and Park, Haekyu and Chau, Duen Horng (Polo)},
  year = {2022},
  month = jun,
}