Jonathan Gillham talks with Jason Barnard about how publishers can embrace integrity in the era of generative AI.
Jonathan Gillham is the Founder and CEO of Originality.ai, which provides a complete toolset that helps website owners, content marketers, writers, and publishers hit publish with integrity in the world of generative AI.
Jonathan reveals the hidden risks and ethical challenges of AI content creation. He emphasizes the importance of keeping the human in the loop to mitigate risks and maintain authenticity. He discusses the necessity of unique data and insights in content to add value beyond simple text. He also highlights the challenges of setting clear policies within organizations regarding the use of AI.
From dangerous mushroom-picking guides to content that puts brands at risk, learn why maintaining the human element is crucial for business success. You will get insider insights on balancing AI efficiency with authenticity, implementing smart content policies, and avoiding Google's AI spam penalties. Plus, you will uncover practical strategies for creating value beyond words in today's AI-driven content landscape.
What you’ll learn from Jonathan Gillham
00:00 Jonathan Gillham and Jason Barnard
02:45 What Exactly Does Originality.ai Do?
03:28 What Specific Words Are Commonly Introduced by ChatGPT and Other AI?
03:57 Why Does Jonathan Gillham Think Humans Are Starting to Write Like Machines?
04:54 Why Does Jonathan Gillham Think AI’s Ability to Imitate Humans Can Be an Increasingly Big Problem?
05:49 What Are the Main Problems With AI-Generated Content?
06:10 What Are the Two Critical Additional Problems With AI-Generated Content, According to Jonathan Gillham?
09:04 Why Is Fact-Checking Such a Huge Problem for Large Language Models (LLMs)?
10:56 Why Is It So Important to Keep Humans Involved in the Process of AI Content Creation?
12:20 What Does It Mean to Go Beyond Words in Content Marketing?
13:49 Why Is Simply Feeding an AI With Your Content Not Enough for It to Fully Replicate Your Knowledge?
16:06 What are the Effective Ways to Use Prompts to Give the Bot the Instructions for Content Creation?
17:14 How Can You Distinguish Between Human Writing and Bot-Generated Content?
19:40 How Can A Business Person Ensure Policies for the Use of AI Are Applied Across Their Organization?
This episode was recorded live on video on November 26th, 2024.
https://youtube.com/live/3_v7Mz9zwgM
Links to pieces of content relevant to this topic:
https://originality.ai/
Jonathan Gillham
Transcript from Publishers Embrace Integrity in the Era of Generative AI - Fastlane Founders with Jonathan Gillham
[00:00:00] Narrator: Fastlane Founders and Legacy with Jason Barnard. Each week, Jason sits down with successful entrepreneurs, CEOs, and executives, and gets them to share how they mastered the delicate balance between rapid growth and enduring success in the business world. How can we quickly build a profitable business that stands the test of time and becomes our legacy? A legacy we're proud of. Fastlane Founders and Legacy with Jason Barnard.
[00:00:31] Jason Barnard: Hi, everybody, and welcome to Fastlane Founders and Legacy. I'm here with Jonathan Gillham. A quick hello and we're good to go. Welcome to the show, Jonathan Gillham.
[00:00:46] Jonathan Gillham: Hey, thanks, Jason. Thanks for having me.
[00:00:48] Jason Barnard: An absolute delight. You're the founder of Originality.ai, and we're going to be talking about AI and ethics: not using AI to produce all of your content, because you need the human aspect, and you need to make sure that human aspect is maintained over time for your corporation. It's very tempting to try to save time. But before we do that, our specialty at Kalicube is Brand SERPs, and I was looking around at your name. Google has you in its Knowledge Graph. It understands who you are, and you've got what we call a tiny Knowledge Panel sprout. For people listening rather than watching on the video, we're now looking at a tiny Knowledge Panel sprout with Jonathan's name and his photo. That is a great start to being understood by Google and a great start to getting it to represent you this way. As an example of where this can lead: on the left-hand side, Google is representing Scott Duffy as the superstar he is as an entrepreneur, and on the right-hand side, ChatGPT is able to explain exactly who he is, what he's done, and that he has worked with Richard Branson in the past, for example.
So educating AI is what we do. What exactly do you do at Originality.ai? Please explain, Jonathan.
[00:01:58] Jonathan Gillham: Yeah, sure. So I'll give a quick background to make it all make sense. We ran a content marketing agency for a number of years and ended up selling it. That agency was one of the heaviest users of Jasper AI, which predated ChatGPT, and we were transparently using AI to create content for clients and passing on those efficiency savings. The question that started to come up was, how do we know your writers aren't using AI? And it's like, well, we have a policy, but we didn't really have the right mitigating steps and controls in place. So we saw this wave of generative AI coming, ended up building an AI detection tool, and launched it, actually, the weekend before ChatGPT launched. So a bit of lucky-unlucky on the timing in different respects, and yeah, it's been a ride since then. But what we do is help anyone that's acting as a copy editor ensure that the content they're going to publish meets their standards, whether that's detecting AI-generated content, plagiarism checking, fact checking, readability, grammar, or spelling.
[00:03:00] Jason Barnard: And so it's a way for the content writers or the bosses of the content writers to check for originality.
[00:03:09] Jonathan Gillham: Yeah, often the bosses of the content writers, or the copy editors, who generally function as the bosses of that team of writers. But you know, I think a lot of people are happy to pay $100 or $1,000 for a piece of content, and not super happy to find out it was copied and pasted out of ChatGPT.
[00:03:28] Jason Barnard: Right. Well, what I generally do is look for words like "elevate" and "a new frontier". These are all words that ChatGPT and other AI have introduced into the language as supposedly common terms that people use. For me, we don't use "frontier" or "new frontier" very often. We don't use "elevate". And weirdly, people are now starting to use them. Are we starting to write like the machines? It's going to get harder to detect.
[00:03:57] Jonathan Gillham: So it's actually interesting. As humans, we have two cognitive biases that make us think we can identify AI content. One is overconfidence bias: if you ask a room full of people how many of them think they're an above-average driver, 80% will put up their hands. And then we often think we can see patterns where no patterns exist. So there are definitely some words that are being used by ChatGPT at a higher rate than normal, and as that gets into the world of literature that people are reading and consuming, it's going to drive us to use those words more. But the ability of humans to actually pick out AI content, especially if there's been any attempt to write like so-and-so, gets down to basically a flip of a coin.
[00:04:44] Jason Barnard: Right. And do you think that's going to be an increasingly big problem, that is AI is going to get better and better at imitating real people?
[00:04:55] Jonathan Gillham: Yeah, I think it's going to be an interesting problem that society as a whole is going to have to wrestle with: where are we okay with AI-generated content, and where are we not okay with it? I think there are great use cases for it. There are also use cases that we're not very happy about. If we read a review for baby formula that was AI generated, we're not very happy about that versus a human-generated review. Similarly, and oddly, we've helped with a couple of cases: there have been a couple of mushroom-picking books that were AI generated and then published on Amazon, and the books had some dangerous material in them that would have resulted in death if somebody had followed what the books suggested. Those turned out to be AI-generated books. So I think it's a risk that brands using AI need to understand. It's not necessarily a bad thing or a good thing. It's just a risk that needs to be managed and correctly mitigated.
[00:05:47] Jason Barnard: Right. So just off the top of my head, I can see multiple problems with AI-generated content. One is factual correctness: they don't fact-check, and hallucinations are common. Another is style. Another is expertise, adding something new to human existence. Those are just three. I'm sure there are a lot more.
[00:06:10] Jonathan Gillham: Yeah, the problems it introduces to a business are, one, the fairness component: if you're okay with somebody using AI and you're paying the writer on a freelance basis, then who should be the one that captures the value of that efficiency lift? So that's one. And the second one is that Google has come out and been extremely clear that they are very much against mass-published AI spam. It's then up to our interpretation where that mass-published AI spam starts and stops, when it turns into value-added, useful content, and when it turns into spam. And although we'd love a nice clean answer from Google, the reality is that Google will do things trying to deal with one problem and there will be ripple effects elsewhere.