
Grounding Gaps in Language Model Generations

LLMs produce fewer grounding acts than humans and tend to presume common ground rather than actively establishing it, exposing a structural gap between human and machine conversation.

by Omar Shaikh, Kristina Gligorić, Ashna Khetan, Matthias Gerstgrasser, Diyi Yang, Dan Jurafsky

To read

Where human speakers check understanding, repair misalignments, and signal confusion, LLMs skip these moves: they produce fewer grounding acts than humans and presume common ground rather than actively establishing it. The gap is not incidental but structural.

Published 2023. Accepted at NAACL 2024.
