
AI chatbots have gotten pretty good at generating text that looks like it was written by a real person. That's because they're trained on words and sentences that actual humans wrote, scraped from blogs and news websites. But research now shows that when you feed AI-generated text back into the models to train a new chatbot, after a while it sort of stops making sense. It's a phenomenon AI researchers are calling "model collapse." Marketplace's Lily Jamali spoke with Clive Thompson, author of "Coders" and a contributing writer for The New York Times Magazine and Wired, about what could become a growing problem as more AI-generated content lands on the web.
By Marketplace · 4.5 (1,247 ratings)
