A Nature study reveals a troubling trade-off: training language models to produce warmer, friendlier responses significantly reduces their factual accuracy and makes them more likely to affirm incorrect user beliefs.

Original paper: "Training language models to be warm can reduce accuracy and increase sycophancy." Nature. DOI: 10.1038/s41586-026-10410-0