
Debate continues about when to disclose that you have used AI to create an output. Do you disclose any use at all? Do you confine disclosure to uses of AI that could lead people to feel deceived? Wherever you land on this question, it may not matter when it comes to building trust with your audience. According to a new study, audiences lose trust as soon as they see an AI disclosure. This doesn’t mean you should not disclose, however, since finding out that you used AI and didn’t disclose is even worse. That leaves little wiggle room for communicators taking advantage of AI and seeking to be as transparent as possible. In this short midweek FIR episode, Neville and Shel examine the research along with recommendations about how to be transparent while remaining trusted.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz (00:05)
@nevillehobson (00:13)
13 experiments over 5,000 participants from students and hiring managers to legal analysts and investors. And the results are consistent across all groups, across all scenarios. People trust others less when they’re told that AI played a role in getting the work done. We’ll get into this right after this.
So imagine this, you’re a job applicant who says you used AI to polish a CV, or a manager who mentions AI helped write performance reviews, or a professor who says grades were assessed using AI. In each case, just admitting you used AI is enough to make people view you as less trustworthy. Now this isn’t about AI doing the work alone. In fact, the study found that people trusted a fully autonomous AI more than they trusted a human.
who disclosed they had help from an AI. That’s the paradox. So why does this happen? Well, the researchers say it comes down to legitimacy. We still operate with deep-seated norms that say proper work should come from human judgment, effort and expertise. So when someone reveals they used AI, it triggers a reaction, a kind of social red flag. Even if AI helped only a little, even if the work is just as good.
Changing how the disclosure is worded doesn’t help much. Whether you say, AI assisted me lightly, or I proofread the AI output, or I’m just being transparent, trust still drops. There’s one twist. If someone hides their AI use, and it’s later discovered by a third party, the trust hit is even worse. So you’re damned if you do, but potentially more damned if you don’t. Now here’s where it gets interesting.
Just nine months earlier, in July 2024, Elsevier published a different report, Insights 2024: Attitudes Towards AI, based on a global survey of nearly 3,000 researchers and clinicians. That survey found most professionals are enthusiastic about AI’s potential, but they demand transparency to trust the tools. So on the one hand, we want transparency from AI systems. On the other hand, we penalize people who are transparent about using AI.
It’s not a contradiction. It’s about who we’re trusting. In the 2024 study, trust is directed at the AI tool. In the 2025 study, trust is directed at the human disclosing its use. And that’s a key distinction. It shows just how complex and fragile trust is in the age of AI. So where does this leave us? It leaves us in a space where the social norms around AI use still lag behind the technology itself.
And that has implications for how we communicate, lead teams and build credibility. As generative AI becomes ever more part of everyday workflows, we’ll need to navigate this carefully. Being open about AI use is the right thing to do, but we also need to prepare for how people will respond to that honesty. It’s not a tech issue, it’s a trust issue. And as communicators, we’re right at the heart of it. So how do you see it, Shel?
Shel Holtz (03:53)
@nevillehobson (03:56)
Shel Holtz (04:18)
false information. There was a study that was conducted, and actually I don’t know who did the study, but it was published in the Strategic Management Journal. This was related specifically to the issue that you mentioned with writing performance reviews or automating performance evaluations or recommending performance improvements for somebody who’s not doing that well on the job.
So on the one hand, you know, powerful AI data analytics increase the quality of feedback, which may enhance employee productivity, according to this research. They call that the deployment effect. But on the other hand, employees may develop a negative perception of AI feedback once it’s disclosed to them, harming productivity. And that’s referred to as the disclosure effect. And there was one other bit of research that I found.
And this was from Trusting News. This was research conducted with a grant that says what audiences really need in order for a disclosure to be of any use to them is specificity. They respond better to detailed disclosures about how AI is being used as opposed to generic disclaimers, which are viewed less favorably and produce
less trust. Word choice matters less. Audiences wanted to know specifically what AI was used to do, with the words that the disclosers used to present that information mattering less. And finally, EPIC, that’s the Electronic Privacy Information Center, had some recommendations. They talked about
both direct and indirect disclosures, direct being a disclosure that says, hey, before you read or listen or watch this or view it, you should know that we used AI on it. And an indirect disclosure is where it’s somehow baked into the content itself. But they said, regardless of whether it’s direct or indirect, to ensure persistence and to meaningfully notify viewers that the content is synthetic, disclosures cannot be the only tool used to address the harms that stem from generative AI.
And they recommended specificity, just as you saw from the other research that I cited. It says disclosures should be specific about what the components of the content are, which components are actually synthetic. Direct disclosures must be clear and conspicuous such that a reasonable person would not mistake a piece of content as being authentic.
Robustness, disclosures must be technically shielded from attempts to remove or otherwise tamper with them. Persistence, disclosures must stay attached to a piece of content even when reshared. There’s an interesting one. And format neutral, the disclosure must stay attached to the content even if it is transformed, such as from a JPEG to a .PNG or a .TXT to a .doc file.
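To make the specificity, persistence, and format-neutrality ideas a little more concrete, here is a minimal, hypothetical sketch of a machine-readable AI-use disclosure written as a JSON sidecar that travels with a piece of content. All field names are invented for illustration; a plain sidecar cannot by itself satisfy EPIC’s robustness requirement, which is what signed provenance schemes such as C2PA-style Content Credentials are designed to address.

```python
# Hypothetical sketch only: a machine-readable AI-use disclosure stored as a
# JSON sidecar next to the content file. Field names are illustrative, not a
# published standard, and a plain sidecar is not tamper-resistant on its own.
import hashlib
import json
from dataclasses import asdict, dataclass
from pathlib import Path


@dataclass
class AIDisclosure:
    tool: str                    # which AI system was used
    role: str                    # what it did, e.g. "summarized a source PDF"
    synthetic_parts: list[str]   # which components of the content are AI-generated
    human_review: str            # what the human checked, edited, or verified
    content_sha256: str          # ties the disclosure to one version of the file


def write_disclosure(content_path: str, tool: str, role: str,
                     synthetic_parts: list[str], human_review: str) -> Path:
    """Write a .ai-disclosure.json sidecar describing AI use for one file."""
    content = Path(content_path)
    disclosure = AIDisclosure(
        tool=tool,
        role=role,
        synthetic_parts=synthetic_parts,
        human_review=human_review,
        content_sha256=hashlib.sha256(content.read_bytes()).hexdigest(),
    )
    sidecar = content.with_suffix(".ai-disclosure.json")
    sidecar.write_text(json.dumps(asdict(disclosure), indent=2))
    return sidecar


# Example: a report whose executive summary was drafted with AI assistance.
# write_disclosure("q3-report.pdf",
#                  tool="(assistant name)",
#                  role="summarized a 50-page source PDF",
#                  synthetic_parts=["executive summary"],
#                  human_review="citations and figures checked against the source")
```

The point of a structure like this is the specificity the research calls for: it records which components are synthetic and what the human verified, rather than attaching a blanket “AI was used” label.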
@nevillehobson (07:34)
Shel Holtz (07:40)
@nevillehobson (07:50)
of mistrust will fade over time. They say as AI becomes more widespread and potentially more reliable, disclosing its use may eventually seem less suspect. They also mentioned that there is absolutely no consensus on how organizations should handle AI disclosure from the research that they carried out. One option they talk about is making transparency voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy.
And they say their research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors. And finally, they mentioned a third approach is cultural, building a workplace where AI use is seen as normal, accepted and legitimate. And they say they think this kind of environment could soften the trust penalty and support both transparency and credibility.
In my view, certainly, I would continue disclosing my AI use in the way I have been, which is not blowing trumpets about it or making a huge deal out of it. Just saying as it’s appropriate, I have an AI use thing on my website. Been there now for a year and a bit. And I’ve not yet had anyone ask me, so what are you telling us about your AI use? It’s very open. The one thing I have found that I think helps
in this situation where you might get negative feedback on AI use is if you’ve written something, for instance, that you published that AI has helped you in the construction of that document, primarily through researching the topic. So it could be summarizing a lengthy article or report. I did that not long ago on a 50 page PDF and it produced the summary in like four paragraphs, a little too concise. So that comes down to the prompt. What do you ask it to do?
But I found that if you share clearly the citations, i.e. the links to sources that are referenced, or rather, let’s say they’re not referenced and you add a reference because you think it’s relevant, that suggests you have taken extra steps to verify that content, and that therefore means you have not just
shared something an AI has created. And I think that’s probably helpful. That said, I think the report, the basis of it, is quite clear: there is no solution to this currently at hand. And I think, and that’s to The Conversation’s first point, leaving it as a voluntary disclosure option is probably not a good idea, because some people aren’t going to do it. Others won’t be clear on how to do it. And so they won’t do it.
And then if it’s found out, the penalty is severe, not only for what you’ve done, but for your own reputation, and that’s not good. So you’re kind of between the devil and the deep blue sea here, but bottom line, you should still disclose, but you need to do it the right way. And there ought to be some guidance in organizations in particular on how to disclose, what to disclose, when to disclose. I’ve not seen a lot of discussion about that, though.
Shel Holtz (11:10)
@nevillehobson (11:15)
Right.
Shel Holtz (11:36)
@nevillehobson (11:49)
Shel Holtz (12:03)
@nevillehobson (12:18)
Shel Holtz (12:30)
that it is accurate and to disclose where AI was specifically employed in producing that content.
@nevillehobson (13:08)
So you could follow any regulatory pathway you want and do all the guidance you want. You’re still gonna get this until, as The Conversation reports from its research, it dies away, and no one has any idea when that might be. So this is a minefield without doubt.
Shel Holtz (13:36)
Yeah, but I think what they’re getting at is that if the disclosure being applied was consistent and specific so that when you looked at a disclosure, it was the same nature of a disclosure that you were getting from some other content producer, some other organization, you would begin to develop some sense of reliability or consistency that, okay, this is one of these. I know now what I’m going to be looking at here and can…
consume it through that lens. So I think it would be helpful, you know, not that I’m always a big fan of excess regulation, but this is a minefield. And I think even if it’s voluntary compliance to a consistent set of standards, although we know how that’s played out when it’s been proposed in other places online over the last 20, 25 years. But I think consistency and specificity
are what’s required here. And I don’t know how we get to that without regulation.
@nevillehobson (14:50)
I’d go the route of, we need something, and this is where professional bodies could come in to help, I think, in proposing this kind of thing, and others who do it share what they’re doing. So we need something like that, in my view. There may well be lots of this in place, but I don’t see people talking too much about it. I do see people talking a lot about the worry of getting accused of whatever it is that people accuse you of when you use AI.
That’s not pleasant at all. And you need to have thick skin and also be pretty confident. I mean, I’d like to say in my case, I am pretty confident that if I say I’ve done this with AI, I can weather any accusations even if they are well meant, some are not. And they’re based not on informed opinion, really, it’s uninformed, I suppose you could argue.
Anyway, it is a minefield and there’s no easy solution on the horizon. But in the meantime, disclose, do not hide it.
Shel Holtz (16:10)
@nevillehobson (16:27)
Shel Holtz (16:28)
That’s it. We need a Creative Commons-like solution to the disclosure issue. And that’ll be a 30 for this episode of For Immediate Release.
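As a purely hypothetical illustration of what a Creative Commons-style vocabulary for AI-use disclosure could look like, here is a small sketch; the label codes are invented for this example and are not an existing standard.

```python
# Invented label codes sketching a Creative Commons-style vocabulary for AI use.
# No such standard exists today; this only illustrates the idea of consistent,
# recognizable disclosure categories paired with a specific detail.
from enum import Enum


class AIUseLabel(Enum):
    HUMAN_ONLY = "H"        # no AI involved
    AI_EDITED = "AI-E"      # human draft, AI used for proofreading or editing
    AI_ASSISTED = "AI-A"    # AI research, summaries, or partial drafting
    AI_GENERATED = "AI-G"   # content generated primarily by AI, human reviewed


def badge(label: AIUseLabel, detail: str) -> str:
    """Pair a short, consistent code with the specific detail audiences want."""
    return f"[{label.value}] {detail}"


print(badge(AIUseLabel.AI_ASSISTED,
            "AI summarized source reports; all quotes and figures verified by the author"))
```

The appeal of this kind of scheme is the same as Creative Commons licensing: a recognizable shorthand that stays consistent across publishers, backed by a specific statement of what the AI actually did.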
The post FIR #464: Research Finds Disclosing Use of AI Erodes Trust appeared first on FIR Podcast Network.