

Every week it seems the world is stunned by another advance in artificial intelligence, including text-to-image generators like DALL-E and the latest chatbot, GPT-4. What makes these tools impressive is the enormous amount of data they’re trained on, specifically the millions of images and words on the internet. But the process of machine learning relies on a lot of human data labelers. Marketplace’s Meghan McCarty Carino spoke to Sarah Roberts, a professor of information studies and director of the Center for Critical Internet Inquiry at UCLA, about how this work is often overlooked.
By Marketplace
