Daily Paper Cast

Alignment Makes Language Models Normative, Not Descriptive



🤗 Upvotes: 36 | cs.CL, cs.AI, cs.GT

Authors:

Eilam Shapira, Moshe Tennenholtz, Roi Reichart

Title:

Alignment Makes Language Models Normative, Not Descriptive

arXiv:

http://arxiv.org/abs/2603.17218v1

Abstract:

Post-training alignment optimizes language models to match human preference signals, but this objective is not equivalent to modeling observed human behavior. We compare 120 base-aligned model pairs on more than 10,000 real human decisions in multi-round strategic games - bargaining, persuasion, negotiation, and repeated matrix games. In these settings, base models outperform their aligned counterparts in predicting human choices by nearly 10:1, robustly across model families, prompt formulations, and game configurations. This pattern reverses, however, in settings where human behavior is more likely to follow normative predictions: aligned models dominate on one-shot textbook games across all 12 types tested and on non-strategic lottery choices - and even within the multi-round games themselves, at round one, before interaction history develops. This boundary-condition pattern suggests that alignment induces a normative bias: it improves prediction when human behavior is relatively well captured by normative solutions, but hurts prediction in multi-round strategic settings, where behavior is shaped by descriptive dynamics such as reciprocity, retaliation, and history-dependent adaptation. These results reveal a fundamental trade-off between optimizing models for human use and using them as proxies for human behavior.
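To make the comparison the abstract describes concrete, below is a minimal sketch in Python. It is not the authors' code: the data is fabricated and the function names (nll, fake_model_probs) are hypothetical. The idea is to score each base/aligned pair by the likelihood it assigns to observed human choices and count how often the base model is the better behavioral predictor.

# Minimal sketch (toy data, hypothetical names): score each base/aligned
# model pair by the likelihood it assigns to observed human choices, then
# count how often the base model is the better predictor of behavior.
import math
import random

random.seed(0)

def nll(choice_probs, human_choices):
    """Mean negative log-likelihood of the humans' actual choices."""
    return -sum(math.log(p[c]) for p, c in zip(choice_probs, human_choices)) / len(human_choices)

# Toy stand-in: each "model" maps a decision to a distribution over actions.
# In the paper's setup these distributions would come from base/aligned LM
# pairs scored on >10,000 real decisions; here they are random placeholders.
def fake_model_probs(n_decisions, n_actions=3):
    probs = []
    for _ in range(n_decisions):
        w = [random.random() for _ in range(n_actions)]
        s = sum(w)
        probs.append([x / s for x in w])
    return probs

n_pairs, n_decisions = 120, 200
human = [random.randrange(3) for _ in range(n_decisions)]

base_wins = 0
for _ in range(n_pairs):
    base_nll = nll(fake_model_probs(n_decisions), human)
    aligned_nll = nll(fake_model_probs(n_decisions), human)
    base_wins += base_nll < aligned_nll  # lower NLL = better behavioral fit

print(f"base model wins {base_wins}/{n_pairs} pairs")

On the paper's multi-round strategic games this per-pair win count is where the reported near-10:1 advantage for base models would show up; on one-shot textbook games and lottery choices the abstract reports the reverse.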


Daily Paper Cast
By Jingwen Liang, Gengyu Wang