KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts
Adam Coscia, Alex Endert
Poster won the College of Computing (CoC) Award at the CRIDC Poster Competition
Georgia Tech
IEEE Transactions on Visualization and Computer Graphics (IEEE TVCG), 2024
KnowledgeVIS teaser
Abstract

Recent growth in the popularity of large language models has led to their increased usage for summarizing, translating, predicting, and generating text, making it vital to help researchers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream. Tightly integrated views in KnowledgeVIS help users create and test multiple prompt variations simultaneously, analyze predicted words using a novel semantic clustering technique, and discover insights using expressive and interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; and, in three general-purpose models, (2) evaluating harmful identity stereotypes and (3) discovering facts and relationships. KnowledgeVIS shows how human-in-the-loop visual analytics can help researchers interpret language models by providing insight into what they have learned.
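The core idea of comparing sets of fill-in-the-blank predictions between prompt variants can be sketched in a few lines. This is a minimal illustration, not the system's implementation: the prediction dictionaries below are hypothetical stand-ins for the top-k (word, probability) outputs a masked language model might return for two prompt variations.

```python
# Hedged sketch of the set-comparison idea: given top-k predictions for
# two fill-in-the-blank prompts, find the words they share and the words
# unique to each prompt. All data below is hypothetical.

def compare_predictions(preds_a, preds_b):
    """Return (shared, only_a, only_b) word lists for two prediction sets."""
    shared = sorted(set(preds_a) & set(preds_b))
    only_a = sorted(set(preds_a) - set(preds_b))
    only_b = sorted(set(preds_b) - set(preds_a))
    return shared, only_a, only_b

# Hypothetical top-5 predictions for "A nurse is a [MASK]." versus
# "A doctor is a [MASK]." from some masked language model.
nurse = {"woman": 0.21, "professional": 0.12, "caregiver": 0.09,
         "man": 0.05, "hero": 0.04}
doctor = {"man": 0.18, "professional": 0.15, "specialist": 0.08,
          "woman": 0.06, "scientist": 0.05}

shared, only_nurse, only_doctor = compare_predictions(nurse, doctor)
print("shared:", shared)           # words predicted for both prompts
print("nurse-only:", only_nurse)   # unique associations for "nurse"
print("doctor-only:", only_doctor) # unique associations for "doctor"
```

Inspecting which predictions are shared and which are unique across such prompt pairs is one simple way to surface the kinds of learned associations (including potential stereotypes) that the full system visualizes interactively.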

Citation
Coming soon!