

AI chatbots have gotten pretty good at generating text that looks like it was written by a real person. That’s because they’re trained on words and sentences that actual humans wrote, scraped from blogs and news websites. But research now shows that when you feed AI-generated text back into the models to train a new chatbot, after a while it sort of stops making sense. It’s a phenomenon AI researchers are calling “model collapse.” Marketplace’s Lily Jamali spoke with Clive Thompson, author of “Coders” and contributing writer for the New York Times Magazine and Wired, about what could become a growing problem as more AI-generated content lands on the web.
By Marketplace
4.5 · 1,256 ratings
