CLASP
The Centre for Linguistic Theory and Studies in Probability

Faster Isn't Always Better: Building Reliable and Accountable AI Collaborators in the Age of LLMs

Abstract

With the rapid integration of generative AI, exemplified by large language models (LLMs), into personal, educational, business, and even governmental workflows, such systems are increasingly being treated as "collaborators" with humans. While generative AI has been sold as an opportunity to increase speed and efficiency, its use comes with the risk that non-experts will uncritically accept its outputs, with consequences that range from inconvenient to catastrophic. In this talk, I will discuss how the insertion of appropriate "friction" into human-AI interaction can counter these risks of overreliance on AI and incentivize deliberative, accountable decision-making in human-AI teams. Specifically, I will discuss how we use multimodal signals to automatically model the common ground that emerges in a multi-party collaborative interaction, use these inferred belief states to expose unspoken assumptions and misalignment within the group, and develop a novel "LLM-agent" alignment framework to prompt the group to slow down, deliberate, and resolve misapprehensions before making decisions. Our results show how, across multiple collaborative tasks, the insertion of positive friction correlates with greater common ground and more correct task solutions. Finally, I will also show how, as an interactive mechanism, friction can be adapted not just to mediate collaboration, but also to provide active task guidance in potentially high-stakes situations.