Publications

Conference Papers


Estimating and Correcting Yes-No Bias in Language Models

Published in CogSci, 2025

When presented with a yes-no question, humans tend to say “yes” regardless of the ground truth. This “yes-bias” can be attributed either to the social pressure to agree with an interlocutor or simply to the tendency to mimic the distribution of the input data. Here, we estimate “yes-no” response bias in language models (LMs), with the goal of distinguishing between the two theories, and explore two strategies for bias correction. We develop two yes-no question datasets derived from existing world knowledge datasets and test 16 open-weight LMs. We find that LMs often show response bias on yes-no questions, but that it is highly variable, deviating from the bias observed in humans. We further present a novel bias correction method, which eliminates the bias and improves model performance. Evidence of non-humanlike response bias in LMs sheds light on the source of yes-bias in humans, and the efficacy of our bias correction method holds promise for LM evaluation.
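The paper's code is not reproduced here, but the kind of bias estimation it describes can be illustrated by comparing the probability mass a model places on “Yes” versus “No” continuations of a question. Below is a minimal sketch using Hugging Face transformers; the model name, prompt template, and mini-dataset are placeholders for illustration, not the paper's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; the paper tests 16 open-weight LMs, not necessarily this one.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical balanced mini-dataset of (question, ground-truth answer) pairs.
# The paper derives its datasets from existing world-knowledge datasets.
QUESTIONS = [
    ("Is Paris the capital of France?", True),
    ("Is the Moon larger than the Earth?", False),
    ("Do penguins live in the Arctic?", False),
    ("Is water composed of hydrogen and oxygen?", True),
]

# Single-token ids for the two candidate answers (leading space matters for GPT-2).
YES_ID = tokenizer(" Yes", add_special_tokens=False).input_ids[0]
NO_ID = tokenizer(" No", add_special_tokens=False).input_ids[0]

def p_yes(question: str) -> float:
    """Probability of ' Yes' vs ' No' as the next token after the question."""
    prompt = f"Question: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    two_way = torch.softmax(logits[[YES_ID, NO_ID]], dim=-1)
    return two_way[0].item()

truths = [gt for _, gt in QUESTIONS]
preds = [p_yes(q) > 0.5 for q, _ in QUESTIONS]

# On a balanced set, a yes-rate of 0.5 means no response bias;
# higher values indicate a bias toward answering "yes".
yes_rate = sum(preds) / len(preds)
accuracy = sum(p == t for p, t in zip(preds, truths)) / len(preds)
print(f"yes-rate: {yes_rate:.2f}  accuracy: {accuracy:.2f}")
```

One generic correction in this setting (not necessarily the method proposed in the paper) is to shift the decision threshold away from 0.5 so that the model's yes-rate on a held-out calibration set matches the known base rate of “yes” answers.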

Recommended citation: Bhatt, O., & Ivanova, A. (2025). Estimating and Correcting Yes-No Bias in Language Models. Proceedings of the Annual Meeting of the Cognitive Science Society, 47. Retrieved from https://escholarship.org/uc/item/2c04k26b
Permalink | PDF | Poster