


Take a stroll through LinkedIn. You’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. While that case isn’t hard to make for professional writers, there are countless professionals in other fields who struggle with writing and were never trained as writers, yet now have to write everything from emails to reports as part of their jobs. Should they really sweat for hours over wording, time they could be devoting to their core areas of subject expertise, when AI can produce content that is cogent, clear, and direct? In this short mid-week episode, Neville and Shel look at the trends in using AI for writing, despite the plethora of opinions from the pundits.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everyone and welcome to For Immediate Release episode 507. I’m Neville Hobson.
Shel: And I’m Shel Holtz. And if you spend any time at all on LinkedIn, you’ll see the degree to which anti-AI sentiment is ramping up. A lot of it’s aimed at using AI for writing and how absolutely wrong that is. Yet just last week, on the same day, Wired Magazine and The Wall Street Journal both published articles on reporters using AI to help write and edit their stories. So today, let’s talk about using AI to write.
Specifically, is it okay for employees to use AI to help them write for work? And my answer is not only is it okay for many employees, it might be one of the most genuinely useful things AI can do. Here’s the framing I would push back on. When we talk about AI writing assistants, we tend to picture a journalist or a marketer or a communications professional, someone whose craft is writing, it’s what they’re paid for, handing their keyboard over to a robot. And for those of us who are professional writers, that raises legitimate professional and ethical questions. But that’s not the population we’re talking about when we’re communicating AI adoption in most organizations. Think about who actually has to write at work. Engineers document processes. Product managers write status updates. Safety officers draft incident reports.
Shel: Finance analysts compose budget justifications. Scientists write up findings for non-technical stakeholders. These are not people who chose their careers because they love writing. Writing is a tax they pay to do the work they actually care about. And many of them pay that tax really, really badly. The idea that a structural engineer should produce elegant prose unaided is the same logic as saying a communications director should coordinate the concrete mix for a construction project. We don’t expect that. So why do we expect every knowledge worker to be a competent writer? Muck Rack’s 2026 State of Journalism report found that 82% of journalists, professional writers, people whose job this is, are now using at least one AI tool. That’s up from 77% the year before.
If the people whose professional identity is tied to their writing are using AI tools, it shouldn’t surprise us that everyone else is too, or that they should. Now the research does tell us something important about how to use these tools. A University of Florida study of 1,100 professionals found that AI tools can make workplace writing more professional.
But regular heavy use can undermine trust between managers and employees, particularly for relationship-oriented messages like praise, motivation, or personal feedback. The study found that employees are more skeptical when they perceive a supervisor is leaning heavily on AI for those kinds of communications. Now that’s a meaningful finding and it’s exactly the kind of nuance internal communicators need to help their organizations understand.
It’s not an argument against AI writing assistance. It’s an argument for knowing when it’s appropriate. Purdue Business School Professor Casey Roberson, who literally wrote one of the first business writing textbooks to address AI, puts it this way: AI is a great tool for brainstorming when you’re stuck, for outlining and structuring documents, for revising drafts to improve clarity and tone, but it should not be used for confidential information, and using it to write first drafts can stifle creativity and critical thinking. The Wharton communication program makes a similar distinction. Their guidance frames AI tools as powerful and skilled hands for the right task, valuable for brainstorming, editing, improving conciseness, and anticipating challenging questions, but a liability when used as a substitute for your own thinking, your own knowledge of your audience, and your own credibility.
So what’s the practical guidance for internal communicators trying to help their colleagues use AI responsibly in their writing? First, make the distinction between communication types explicit. Routine informational writing — process documentation, project updates, meeting recaps, technical reports — that’s where AI assistance is most defensible and most valuable. That’s exactly where the trust risk is lowest and the productivity gain is highest. Conversely, messages that carry relationship weight, like a manager recognizing someone’s contribution or a leader addressing a team through a difficult moment, that deserves a human voice. Help your employees understand that difference.
Second, reframe the conversation around who’s actually writing. A systematic review published in the International Journal of Business Communication found that AI can significantly help with idea generation, structure, literature synthesis, editing, and refinement. Essentially all the phases of writing that non-writers find most daunting. AI isn’t replacing a writer’s voice. In many cases, it’s giving non-writers a voice they otherwise wouldn’t even have.
Third, be honest about the nuance inside the journalism conversation. The Columbia Journalism Review published a fascinating piece where journalists across major newsrooms shared their practices. Nicholas Thompson, the CEO of The Atlantic, described using AI the way he’d use a fast, well-read research assistant who’s also a terrible writer — helpful for checking consistency, flagging chronological issues, examining logical claims, but not for the writing itself. Amelia Daly, a senior reporter at VentureBeat, put it this way: AI helps her productivity, but she refuses to use it to write because writing is how she maintains trust with her readers. That distinction — AI as research and process support versus AI as voice — maps directly to the guidance you should be giving your colleagues.
I read about one other reporter in one of these articles who said he actually does use it to write, because he didn’t become a journalist in order to write. He didn’t like writing; he liked reporting. So he does all the other work and then lets the AI produce the writing.
And here’s the thing I’d leave your employees with because I think it gets lost in this debate. Wharton’s communication faculty make the argument that writing is thinking, that when you rely on AI for drafting, you don’t know your content as deeply as you should, and you lose the nimbleness to adapt when the moment requires it. And that’s true. But for an engineer who agonizes over every sentence of a procedure document, who spends four times as long on the writing as on the analysis,
Shel: AI doesn’t replace their thinking. It clears away the friction so their thinking can actually reach the page. For internal communicators, this is a genuinely useful message to take to your AI adoption rollouts. AI writing assistance isn’t about cutting corners. It’s about removing a barrier that prevents good ideas from being communicated clearly while still insisting on the judgment, authenticity, and relational awareness that only human beings can bring.
Neville: Yeah, it’s a big topic, I have to admit. And I think of it not so much from the employee communication point of view. That’s a pretty major part of it, I think, a major usage. It’s really anyone who writes, in fact. Whether you’re in public relations, whether you’re a journalist, et cetera, people who need to write as part of their roles are who I have in mind, mostly.
I’m also drawn to a very good analysis by Josh Bernoff. You and I interviewed Josh, what, two, three months ago. He wrote an assessment of Charlene Li’s new book, Winning with AI, a book she created with extensive use of AI. Worth pointing out, though, that the AI didn’t write any of the content.
She and her co-author, Katia Walsh, talked about the way they divvied up the work. And the AIs, plural, did research, among other tasks. But Josh wrote a lengthy post setting out all the areas where they found AI useful and not so useful. And it struck me, reading Josh’s post and then Charlene’s postscripts, as it were, in the book itself, which I am reading, by the way, that this would apply to anyone writing, not just would-be book authors, in my view. Whether you’re writing fiction or nonfiction doesn’t make any difference. Whether you’re writing a report, an article, a blog post, or a newspaper piece doesn’t matter. These principles, I think, apply to all of it. And it’s not so much about people whose role doesn’t involve writing and who aren’t very good at it. It’s more focused on those whose job is writing, or for whom writing is part of their job in some form.
So there are a number of things that I took from it. But to go to the main point about Charlene’s book Winning with AI, AI wasn’t doing the writing, as I mentioned. It was supporting the thinking. It handled things like the research, summaries, the structure, which speeds everything up. But the ideas, the voice, and the judgment — that all stayed firmly human. And to quote from Josh’s post, he says that the two authors describe how they used Claude to structure the content, ChatGPT to create a custom GPT with four years of their work, which it used in a sense as a training aid, Perplexity to do the research, and Gemini to search a vast collection of interview transcripts. It’s much more detailed than that. It’s well set out in the book. And I thought, that’s interesting. That’s a very intelligent way to go about using different AI chatbots for different purposes on your projects.
So three things I took from this, and this applies to all the points you made, Shel, and it will repeat some of those, but it just shows you that this is how you need to think of this. First, AI works best as a thinking partner, not a writer. Like I said, the two authors used AI as a note taker, researcher, brainstorming partner — essentially a third collaborator. It helped them structure the ideas, surface insights, and challenge assumptions, and they did not rely on it to produce the final prose.
The second point: it saved time on the drudge work, as Josh called it, but it required human judgment. It was highly effective for research and summarization, structuring outlines, and surfacing missed ideas from earlier drafts. That resonated with me, because I often find in my own experience, when I’m doing research for blog posts, articles, reports, or just about something I’m interested in, that it usually surfaces something I wouldn’t have thought of, or that I might have thought of only later, after I’d written the piece, which would require a rewrite or something like that. Structuring the outlines is another thing. And this is definitely worth noting; we’ve discussed it before. Everything still required the humans to fact-check and validate everything the AI produced, because, in Charlene’s words, AI has no built-in truth function. And I think that’s a worthwhile way of looking at it.
And the final point that I took from this: you can’t outsource originality, voice, or quality, that is, the writing. They tried it. AI failed at core creative tasks. There are three of them that Josh points out in his article. Generating genuinely new ideas: it’s not very good at this, because it’s trained on existing writing that humans have done over the years and even the centuries. It can’t create something new from that other than guesswork. It’s about the same as what we do, I think, except we’re likely to take the more informed approach. It can’t write in a compelling human voice. And it cannot edit to a high standard. They all described AI writing, Charlene and Katia and Josh, for that matter, as bland, repetitive, and jargon-heavy. In fact, Charlene talks about how they could not stop jargon creep in anything the AI produced. She had this big thing about one draft where they used AI to review it, and it changed every use of the word “use” to “utilize,” that kind of jargon.
Shel: One of my biggest pet peeves, by the way, is “utilize.”
Neville: Right, totally. And in the final product, the quality, nuance, personality, and insight remained entirely human, because the humans wrote it. So I take all of that, add it to what you’ve been talking about, and I guess I’d conclude this: it doesn’t matter what your role is. These are the principles you need to pay attention to as you approach your use of AI as an aid. And we’re not, you know, suddenly coming out with a revelation here. I see people saying this all over the place. AI is an aid to help you, in a sense, create extremely good content, whether as a writer or in something else you might be doing where this contributes to that end. And it doesn’t matter what your role is, or whether you’re any good at this or that. That reporter you talked about likes to report but not to write. I’m wondering how the hell he gets away with doing that. Reporters have to write, don’t they?
Shel: Well, I’m sure he just poured a lot of effort and energy into it when he would have rather been out in the field gathering information.
Neville: Got it, got it. So yeah, this is not too difficult a thing to grasp, in my view, yet I’m constantly bemused by what I see. Maybe LinkedIn’s not the best place to look for this stuff, but I see it all the time. You and I were talking before we started recording about people posting there saying, you know, you should never use AI. Here’s a list of words, and if I see them in LinkedIn posts, I’m going to unfollow that person and call them out. I see this all the time. And there’s the example you mentioned to me about the person who wrote a LinkedIn post saying you should never, ever use AI, and then listing all the things you should never use it for. That’s insane. That’s insane.
Shel: Yeah, she said nobody wants to read emails written by AI. Nobody wants to read reports written by AI. And she just went down every form of writing you can think of. And I was thinking, really? Nobody? Nobody wants to read this? I’ve got data that says people prefer emails written with AI when the senders are terrible writers who have a hard time expressing the main point they’re trying to get to. Compared with their own writing, the AI has actually made these people’s emails better, and people would rather read those.
Neville: So did you use AI to research this?
Shel: To research, to find that data? Yeah, of course I did. It’s easier than using Google, but I also verified the source of that research.
Neville: Right, okay. No, no, no, hang on a second. The point of that though is it’s illustrative of something that I’m astonished when I hear people that have not heard of doing this before. “That’s a good idea,” which is: anything you’re working on, literally anything, and you either have your list of things you need to research, but something that occurs to you during your work — I wonder who said X, or I wonder how you do this — ask your AI to go research it. And it then becomes a natural part of your workflow. And that’s one of the things it’s very good at.
But we’ve got the example we talked about last October with Deloitte in Australia and Canada. You’ve got to check everything it creates, particularly if it’s a topic you really don’t know much about yet. But even if you do know it, you’ve still got to check. That means when you tell it to go out and look for stuff, and you’ve already given it your preferences, like coming back with a link to the source for anything it finds, you’ve then got to go and check all those sources too. So there are no easy shortcuts here. But it still saves you a huge amount of time, because you’re then spending that time, in a sense, understanding the output you’re going to use to create your final version.
I see people often criticizing this: “If you use AI, your brain gets kind of frozen and doesn’t learn stuff.” Yeah, that’s not, in my experience, the case, because you’re doing it differently, is how I would see it. You’re asking your assistant to go and find this, this, and this; it comes back with this, this, and this; and you then go and research it yourself to check that it really is this, this, and this, and not that.
So it’s, I think, an interesting aspect of the broader debate between those who are anti and those who aren’t, where most of us are somewhere in the middle. But you need to fully understand the pros and the cons, and indeed the limitations of AI, as well as the human limitations, and work out what works best for you.
The reality, though, and I guess the bottom line in how I see this, is that you cannot take the human being out of the picture. This tool is purely that: something to assist you, that gives you what you need to create the final product, if you like. And that’s true whatever your job role. That’s what it’s about.
Shel: Well, I would argue that if you are in a job where writing was not taught in school beyond what you learned in your basic English class or whatever language you were raised with, and you need to produce writing, and this tool is now there to help you do that — if you’re an engineer, for example, engineers are brilliant. Many of them are
Neville: Not good writers.
Shel: Terrible writers. And they have to produce something that’s going to be useful to the people they’re distributing it to. And if AI is going to write a better draft than they could on their own, and produce better output that people can make better use of, then they should let AI write that stuff. In an engineer’s report, there is no need for the lived human experience we keep hearing about. Empathy does not have to come into these reports. They’re technical in nature. Let the AI write it for them. Absolutely edit it, and review all the facts to make sure it’s right. Presumably it’s writing based on what you gave it, the information you’ve gathered that needs to go into this report. So there’s less opportunity for hallucination when you’re telling it to use only the data you’ve put into this ChatGPT project for the output. But you still have to review it very, very carefully. That’ll still save you time and grief if you’re not a writer and you need to produce this stuff. I feel really strongly: we have this great tool here that’s going to make the outputs better and make business better.
Neville: Yeah, I don’t disagree with you at all, but I’m not as optimistic as you are that this is going to work seamlessly, because it only works if people do all the things you just said, and typically they’re not going to do that. I can see scenarios exactly as you’ve outlined: someone in a valuable job who does great work but lacks the skills to write. Then I would say that’s fine, get the AI to write. But you need to be educated on how to get the AI to do what you want. You then need to, without fail, verify and check every single thing the AI has created. And I’m not sure that many of the folks you might think of are truly geared up to do that kind of thing. So you might need colleagues to assist you. I mean, I guess the point is that…
Shel: Well, it’s…
Neville: This is going to be a debating point forever, I would imagine, until people stop talking about it. But you’re going to encounter — I can see it now — “But yeah, you’ve got to disclose the fact that you used AI.” No, you don’t. You get down to that rabbit hole argument about, do you do that when you use Grammarly? Do you do that with your spell checker? No, you don’t. So why would you say you’d have to do this? Because it’s such an emotive topic where logic is missing in many of the arguments. It’s all emotional.
That’s the minefield you have to walk. For much of the work that many people might do, they won’t use the AI to write it. They’ll use AI to assist them in creating it. And that could mean they do an outline, or it suggests the construct of a draft, or you draft it and it reviews it and makes suggestions on how to improve it.
I do that quite a bit with my AI assistants. And I don’t have a rigid format. Much depends on the topic and how I feel about it, basically. Often I’ll give it a topic I’ve been thinking about and ask, is this worth writing about? If so, give me some suggestions on the angle I should approach it from. And that always sparks much more discussion and thought about what the content might be, including, sometimes, “No, this is not worth writing about for me.”
So it’s a big topic. You had loads of links in your prep for this, to articles all over the place about it. And I think it’s good to do that. But this is emotive. And it’s not going to be a simple thing to avoid criticism.
Shel: Yeah, and I think it’s a governance issue inside organizations. I hear about the lack of AI training going on in many organizations or how superficial it is. I think for those people who have to write in their jobs, you want to do targeted training about how to use this to write. From the idea generation to the brainstorming to the back-and-forth discussions that you might have about approaches to take, or
Shel: using it to structure the document, right down to writing that first draft, if that produces something better than you could do on your own and you’re not a professional writer. All of that needs to be trained, and it needs to be articulated in the organization’s governance policies around AI, and there need to be resources. And yeah, we need to have subject matter experts that people can call. This is on us right now, as internal communicators who deal with writing in general, to lead this conversation in the organization and make sure these kinds of governance activities are implemented.
Neville: Work to do.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #507: Should Nobody Really Ever Write with AI? appeared first on FIR Podcast Network.