Richard Mathenge was part of a team of contractors in Nairobi, Kenya, who trained OpenAI's GPT models. He did so as a team lead at Sama, an AI training company that partnered with OpenAI on the project. In this episode of Big Technology Podcast, Mathenge tells the story of his experience. During the training, he was routinely exposed to sexually explicit material and offered insufficient counseling, and some of his team members were paid as little as $1 per hour. Listen for an in-depth look at how these models are trained, and at the human side of Reinforcement Learning from Human Feedback (RLHF).
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Questions? Feedback? Write to: [email protected]
---
OpenAI's response:
We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We take the mental health of our employees and our contractors very seriously. One of the reasons we first engaged Sama was because of their commitment to good practices. Our previous understanding was that wellness programs and 1:1 counseling were offered, that workers could opt out of any work without penalization, that exposure to explicit content would be limited, and that sensitive information would be handled by workers specifically trained to do so. Upon learning of Sama worker conditions in February of 2021, we immediately sought more information from Sama. Sama simultaneously informed us that they were exiting the content moderation space altogether.
OpenAI paid Sama $12.50 per hour. We tried to obtain more information about worker compensation from Sama, but they never provided us with hard numbers. Sama did provide us with a study they conducted across other companies that do content moderation in the region, which indicated Sama's wages were 2-3x those of the competition.