Every week it seems the world is stunned by another advance in artificial intelligence, including text-to-image generators like DALL-E and the latest chatbot, GPT-4. What makes these tools impressive is the enormous amount of data they’re trained on, specifically the millions of images and words on the internet. But the process of machine learning relies on a lot of human data labelers. Marketplace’s Meghan McCarty Carino spoke to Sarah Roberts, a professor of information studies and director of the Center for Critical Internet Inquiry at UCLA, about how this work is often overlooked.
By Marketplace · 4.5 (1,247 ratings)