


The big AI conference NeurIPS is under way in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year.
NeurIPS is a strange pseudo-academic gathering, where researchers from universities show up to present their findings alongside employees of Apple and Nvidia, part of the strange public-private revolving door of the tech industry. Sometimes they’re the same person: Increasingly, academic researchers are allowed to also hold a job at a big company. I can’t blame them for taking opportunities where they arise—I’m sure I would, in their position—but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.
The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular—awarded a top prize at the conference—grabbed me by the throat.
A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is. Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise. Yay!
By Jacob Ward
