AI chatbots have gotten pretty good at generating text that looks like it was written by a real person. That’s because they’re trained on words and sentences that actual humans wrote, scraped from blogs and news websites. But research now shows that when you feed AI-generated text back into the models to train a new chatbot, after a while, it sort of stops making sense. It’s a phenomenon AI researchers are calling “model collapse.” Marketplace’s Lily Jamali spoke to Clive Thompson, author of “Coders” and contributing writer for the New York Times Magazine and Wired, about what could be a growing problem as more AI-generated content lands on the web.
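The feedback loop described above can be sketched with a toy simulation. This is not the researchers’ actual method, just a minimal illustration of the intuition: a model fitted to its own outputs tends to over-sample common cases and lose rare ones, so diversity shrinks with each generation. Here the “model” is a simple Gaussian fit, and dropping the rarest tail samples stands in for a chatbot favoring high-probability text; all names and parameters are hypothetical.

```python
import random
import statistics

def fit(data):
    # "Train" a model: estimate a mean and standard deviation from the data.
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, std, n, trim=0.1):
    # "Generate" new training data from the fitted model, but discard the
    # rarest tail samples, mimicking a model that favors likely outputs.
    samples = sorted(random.gauss(mean, std) for _ in range(n))
    k = int(n * trim / 2)
    return samples[k:n - k]

random.seed(0)
# Generation 0: human-written data, full diversity (std ~ 1).
data = [random.gauss(0, 1) for _ in range(2000)]
stds = []
for generation in range(10):
    mean, std = fit(data)
    stds.append(std)
    data = generate(mean, std, 2000)  # next model trains on this output

print(f"gen 0 std: {stds[0]:.3f}, gen 9 std: {stds[-1]:.3f}")
```

Each generation’s spread is strictly narrower than the last, so after a few rounds the simulated “model” can only reproduce a thin sliver of the original variety: a rough analogue of AI-generated text crowding out the diversity of human writing.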