
AI chatbots have gotten pretty good at generating text that looks like it was written by a real person. That's because they're trained on words and sentences that actual humans wrote, scraped from blogs and news websites. But research now shows that when you feed AI-generated text back into the models to train a new chatbot, after a while, it sort of stops making sense. It's a phenomenon AI researchers are calling "model collapse." Marketplace's Lily Jamali spoke to Clive Thompson, author of "Coders" and contributing writer for the New York Times Magazine and Wired, about what could be a growing problem as more AI-generated content lands on the web.
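The dynamic behind model collapse can be sketched with a toy statistical example (this is an illustrative analogy, not the setup used in the research the episode discusses): repeatedly fit a simple Gaussian "model" to samples drawn from the previous generation's model. Because each generation learns only from the last one's output, estimation error compounds, and the maximum-likelihood variance estimate shrinks in expectation at every step, so the fitted distribution gradually loses the diversity of the original "human" data.

```python
import random
import statistics

def fit_gaussian(samples):
    # "Train" a model: estimate mean and spread from the data.
    # pstdev is the maximum-likelihood estimate, slightly biased low,
    # so in expectation the spread shrinks a little each generation.
    return statistics.mean(samples), statistics.pstdev(samples)

random.seed(0)

# Generation 0: "human-written" data from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(500)]
mu, sigma = fit_gaussian(data)
stdevs = [sigma]

# Each new "model" is trained only on samples from the previous model,
# analogous to training a chatbot on the last chatbot's output.
for generation in range(20):
    data = [random.gauss(mu, sigma) for _ in range(500)]
    mu, sigma = fit_gaussian(data)
    stdevs.append(sigma)

# Compare the original spread with the spread after 20 generations.
print(f"spread at gen 0: {stdevs[0]:.3f}, at gen 20: {stdevs[-1]:.3f}")
```

Any single run is noisy, but across many runs the estimated spread drifts downward: rare "tail" events stop appearing in the training data, so later generations can never relearn them, which is the core mechanism of collapse.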