


When workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies that hire gig workers are shifting from delivering food or sharing rides to creating content to train AI systems. This raises various communication and ethical issues. Neville and Shel explain what’s happening and discuss the implications in this short midweek episode.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson
People are being paid, typically in small amounts, to record themselves walking down the street, having conversations, folding laundry, even just going about their day. That data is then used to train AI systems because those systems need examples of how people actually speak, move, and interact in the real world. In one case, delivery drivers in the US are being redirected to film tasks for robotics training. Platforms are turning existing gig workers like delivery drivers into distributed data collectors for AI. In another example, people are selling access to their phone conversations through apps that pay contributors to upload voice and text data. And in yet another, workers are strapping phones to their heads to record household chores so humanoid robots can learn how to move. The work is global, fragmented, and often invisible, with workers spanning Nigeria, India, South Africa, the US, and far beyond. Humans are no longer just users of AI — they are raw material suppliers. In China, there are even state-run centers where workers wear virtual reality headsets and exoskeletons to teach robots how to carry out everyday physical tasks. What we’re seeing is the rise of what you might call data labor, where identity itself becomes part of the work.
There’s a clear driver behind it. AI companies are running out of high-quality training data. The open web isn’t enough anymore, and synthetic data has its limits. So the industry is turning to something else: real human lived experience. Because if you want a robot to understand how to load a dishwasher, navigate a room, or interact with objects, you need to see humans doing it at scale.
But there’s an interesting contrast here. One of the stories highlights a 23-year-old in the US, a guy called Cale Mouser, who earns well into six figures repairing diesel engines. It’s something he’s developed great skill in doing. His work depends on judgment, experience, and problem solving in the real world — things that don’t easily translate into data. So while some people are being paid small amounts to generate data for AI systems, others like Cale Mouser are building highly valuable careers precisely because their skills can’t be reduced to data. And that contrast feels important.
Because on one level, this new kind of work does create opportunity. For some people, especially in lower-income regions in the Global South, this is real income — paid in dollars, flexible and accessible. But there’s another side to it. Because what people are actually selling isn’t just time, it’s identity: their voice, their behavior, their presence in the world. And often once that data is handed over, it’s gone — permanently licensed, reused, repurposed, potentially in ways the individual never sees or understands.
So you have this asymmetry: individuals earning small immediate payments while companies build long-term, highly valuable AI systems. Perhaps it’s a new version of the Mechanical Turk for the AI era. And that raises a deeper question. What does it mean when the inputs to AI are no longer abstract data, but pieces of human identity? When the training set is not just content, but behavior, voice, and presence? And when those pieces can be reused, replicated, and scaled, often without the individual’s ongoing knowledge or control? Many platforms grant royalty-free perpetual licenses, where workers get paid once and lose control forever. There’s potential for deepfakes, identity theft, and misuse without consent. And perhaps more uncomfortably, what does it mean when people are contributing to systems that could automate their future jobs?
For communicators, this feels important because this isn’t just a technology story. It’s a story about trust, consent, transparency, and how organizations explain what they’re doing with AI. If AI ethics lives anywhere, it’s here — in how these systems are built and how that’s communicated. So the question to explore — one of the questions to explore, perhaps — is this one: Are we comfortable with an economy where identity itself is becoming labor? And if not, what responsibility do organizations and communicators have in shaping it?
Shel Holtz
In terms of the AI element, what this suggests is that the gig economy didn’t go anywhere when AI came along; it just became the training ground for AI. And it’s interesting that the workers who are being squeezed out of knowledge jobs are selling their voices and their movements to build the systems that squeezed them out. Because where do a lot of these people who are being laid off because of AI go? Well, they go drive for Uber, they go drive for DoorDash. And you do that long enough and you get really accustomed to the idea that they send you a task, you go do that task, and you get paid for it. So if that task shifts from picking up a meal at a restaurant and delivering it to somebody’s house to going to your own house and washing your dishes because that’s what they want to capture on video — it’s the same thing. You’re getting a task on the app. You’re doing the task and you’re getting paid for it. So I think for a lot of people, this is going to be a fairly easy shift, and they’re not going to think a lot about what’s happening to the information and the content that’s being created with their movements and their voices, which is now being shared and used to make a lot of money for the people who are paying a pittance to these folks.
So I see three issues here that connect directly to organizational communication. The first is consent and transparency — and I’m talking about inside organizations — because companies are already deploying AI tools trained on data that their own workers have supplied, and sometimes they’ve supplied this data unknowingly. The ethical and reputational questions that employees are going to ask are questions like: Was my voice used to train a bot that you activated in order to replace my friend who sat next to me and I had lunch with? And regulators are going to end up asking these questions too. So communicators really need to be out front with clear internal messaging about what data employees generate and how the company is using it. Let’s talk about that before I hit the other things that popped into my mind.
Neville Hobson
The other element, which is also ethics-related, is: is this whole thing ethical if participation is driven by economic necessity? Whatever reason you might give — we need to get an edge on the competition, whatever — you’re still up against that element.
That’s the big-picture ethics question. But common sense tells you how you should do this. Should individuals be compensated long-term for use of their data? On the one hand, you might say, fine, let’s tell everyone: your data may be used — your day-to-day interactions with colleagues, the recordings of your conversations on our internal Teams tool — that’s kept. So the employee might say, I’m okay with that, but I want to be compensated for it. And now there’s an interesting position.
Neville Hobson
And the thing is, a new economy is emerging where people monetize their identity and behavior voluntarily. Take the examples we heard about: the guy in Uganda filming himself walking down the street, and then the flip of that — as I mentioned, the young people in America, covered in a really good Guardian analysis, who have skills that cannot easily be translated into something AI can do. The key element in that part of the discussion was the skill this young guy has — 23 years old. It’s not unique, but his skill isn’t just “I know how to repair a diesel engine.” It’s that he can, at a glance, see what’s wrong and already formulate the six things he needs to do to fix it. And that is valuable. He’s already earning $150,000 a year in salary doing this, and he’s 23 years old.
So there are other examples mentioned in that Guardian piece too that are interesting. On the one hand, you’ve got gig economy workers like DoorDash drivers doing what they’re doing. On the other hand, you’ve got people like this guy developing a career not related to AI at all — a skill that cannot easily be replicated by AI. So that’s part of the landscape. I’m not sure where all of that fits within this, Shel, to be honest, but it’s part of the picture.
Shel Holtz
I raised the issue of employees inside the organization. Those gig workers are another issue for organizational communicators, because these workers — the ones very accustomed to having the app tell them to do a task, doing the task, and getting paid for it — aren’t covered by traditional internal communications. Organizations that rely on gig workers and contracted labor, and increasingly organizations whose AI tools were trained by them, have a stakeholder relationship they may not have a communication strategy for. I’d argue they don’t have a communication strategy for it.
I’ve often made the distinction between internal communications and employee communications. Employees are the people who come in and get paid by you directly, whether salaried or hourly. But you have other internal stakeholders, and we develop strategies for them — the contractors embedded in our organization. I work in construction; we have subcontractors; there are ways the organization communicates with them. There are all kinds of internal stakeholders, and these gig and contract workers are now among them. We should figure out a way to communicate with them, talk about our ethical use of their data, and engage with them in ways that are meaningful, useful, and produce positive results.
Shel Holtz
So organizational communicators talking about AI as just augmenting human workers need to be careful, because I think increasingly we’re going to hear stories about how that isn’t actually true, particularly for this younger demographic. We have to be honest about that asymmetry. I mean, whose labor is augmenting whom?
Neville Hobson
I agree with the premise in all the articles we’ve linked in the show notes that a new data labor economy is emerging where people monetize their identity and behavior and, in the case of the Global South in particular, don’t think twice about it. Employers have a duty of care to recognize what they need to do to bring that group into their structure — one where communication, ethics, and trust play a bigger role.
The post FIR #508: Inside AI’s Human Raw Material Supply Chain appeared first on FIR Podcast Network.
By Neville Hobson and Shel Holtz
