Let AI handle the chores, and humans do the thinking: such should be the future of content marketing. In this piece, I try to debunk a few myths. Firstly, generative AI can be creative — and often is. Secondly, AI doesn’t necessarily make us stupid; we don’t need it for that. And thirdly, becoming a prompting guru isn’t necessarily the key to producing great content. The question of AI’s role in content marketing is actually more strategic than technical: it’s about why and for whom we create content. This is the major issue at stake for today’s and tomorrow’s marketers. In this presentation, I urge readers not to outsource their thinking to AI, and rather to offload low-value chores to machines. Unfortunately, it should be noted that machines aren’t always doing a good job even with that.
Chores to AI, Ideas to Humans
Since the machines started thinking, we’ve had more time to do the dishes, wrote Joanna Maciejewska. Like her, I’d rather it were the other way round.
AI and Marie Bernard, the e-commerce Queen
Ms Bernard is adding links to Visionary Marketing. She is very nice, but unfortunately she isn’t a real person.
Let me introduce you to Ms Marie Bernard. This pretty young woman, somewhat artificial in appearance, exists only in Midjourney’s archives and on the website of “her” SEO agency. This supposed e-commerce expert found herself embroiled in a semantic mix-up that was both amusing and revealing.
Taking inspiration from one of my articles, this visionary author mixed up ‘snow globe’, an expression used by one of my expert interviewees as a metaphor, with ‘snowball effect’. Thank God, she inserted a link to Visionary Marketing so that I could correct that fatal mistake. Far from being trivial, this anecdote raises a few fundamental questions. Who is writing? For whom? How? And for what purpose? In fact, it even poses bigger questions, such as: what is humanity’s place in society, and what sort of society do we want for our children and our children’s children?
AI Information Overload
Content about generative AI is so ubiquitous that we have gone past information overload. AI content analysts are skirmishing via X (formerly Twitter) and LinkedIn posts, mainly on the technical front (this AI is better than that one), creativity (AI produces interesting ideas or rather, is dull and inferior to humans), and usage (“download my ultimate prompting guide!”). Yet all these debates (and sadly others that are less prevalent, like the poorly documented issue of energy consumption) fail to address other key questions: who are we creating for, why, and for whom do we work — or more broadly, what kind of society do we want in the future?
Generative AI at the Heart of the World’s Issues
AI, and generative AI in particular, has generated most of the noise on social media, blogs, newsletters, and chats around the pub. The traditional economy seems to ignore the phenomenon or treat it as incidental — a recurring habit when it comes to digital innovations — but online debates live on unabated.
Whether and how we should use generative artificial intelligence is now a central question in our modern societies, and that’s understandable. Machines have been able to play around with text since the 1950s, but never before have we had such computing power, nor large-scale training on datasets so vast and, despite criticisms, so decent. In recent weeks, engineers in London have even shown how two AI bots can talk to one another. Even if it’s only a demo, we’ve known since the early 2000s that machines can buy and sell stock (algorithmic trading accounts for roughly 60-75% of total trading in the most developed markets, and this was already true back in 2006 when I worked in that field). So why shouldn’t a so-called ‘agentic’ AI buy train tickets?
Hence these legitimate questions.
A machine capable of writing “like” humans?
The fact that a programme — literally a “machine” in the sense of a computer — is capable of writing like humans, or nearly so, is disconcerting.
[Machine] A mechanically, electrically, or electronically operated device for performing a task
first entry from Merriam-Webster
What’s even more unsettling is that humans often write worse than machines. This is what Loubna Ben Allal, a researcher at Hugging Face and an expert in training generative AIs, describes in a video on the Underscore_ channel, which is worth watching.
She explains how content is filtered during training sequences and, surprise, surprise, says that good AI-generated content is often better than bad human content. Sadly, poor human content is everywhere.
Note that there are also texts, 100% AI-generated, aimed at proving that Loubna is right.
A text designed to show that separating the wheat from the chaff in content creation is a non-issue. Unfortunately, it was written by an LLM.
Language, an operating system?!
If these mock texts are so disconcerting, it’s because language and the written word are indeed some of the fundamental characteristics of the human species.
In the beginning was the word. Language is the operating system of human culture.
Yuval Harari — NYT March 2023
Yuval Harari, with a kind of reverse anthropomorphic twist, even calls it the “operating system of human culture”. Despite this idiosyncrasy, Harari is zeroing in on the real issue.
The real core problem isn’t technical, but deeply philosophical, especially when the most famous generative AI tools are led by a maverick who’s trying all he can to put us in a Spike Jonze film. Ultimately, philosophy could or should redefine how AI is trained, explain Michael Schrage and David Kiron of MIT Sloan Management Review.
The Real Problem With So-called Generative Tools
The real problem with these generative tools isn’t technical, nor is it about creativity or even how well one uses the tool. It’s more fundamental, relating to the very essence of work and, more broadly, of human societies, whatever human shortcomings and flaws there may be (and they are indeed numerous).
This is all the more important, given discussions about new tools such as Manus, which promise even more autonomous intelligence capable of “agenticity”, a direction that appears to be a goal for many of the creators of these programmes.
Generative AI is going to vanish? Really…
There’s no point in playing down generative AI, as I have seen here and there, by predicting its demise (you don’t just eliminate tools the whole world has made its own, however imperfect), nor in overestimating its potential (there are simply too many tools and possible uses).
Denying how astonishing these tools are is pointless.
Likewise, describing LLMs as “stochastic parrots” is no longer relevant. It was apt barely two years ago, but that’s no longer the case. Safety nets exist, and the biggest pitfalls (such as getting ChatGPT to prove that the Earth is flat, which former Apple Siri co-founder Luc Julia brought up recently in a Swiss daily) are old hat. The right way forward is hybrid systems combining the power of LLMs with more conventional computing. It’s only a matter of time before this merger happens, and it might not even take that long. Anyone who has witnessed the development of IT and the Web over the past 40 years knows it takes time to innovate. Time is of the essence.
Hence, even though the results we get today are still often disappointing, patchy, or downright wrong, GenAI models of 2025 hallucinate far less than they used to, provided you pay and pick your model carefully.
You may check for yourself with Perplexity.ai, which will answer your question on this subject while delivering links (sometimes off-target, so you’ll still have to cross-check that information).
In short, four breakthroughs occurred from 2024 to 2025 in this field:
- Reduced error rates, down to 1-3%, thanks to techniques like Retrieval-Augmented Generation (RAG), which draws on existing documents (a minimal sketch follows below).
- Model improvements, including the inevitable OpenAI, with its GPT-4.5 model, and others (I particularly recommend Claude.ai).
- Innovative methods like “deep research” or “chain of thought”, often flawed and slow, but give them time and they will improve dramatically.
- Checks and adjustments: tools like the “Automated Reasoning Checks” introduced by AWS have been designed to detect and correct hallucinations before production use.

Still, hallucinations remain common and won’t vanish soon. Again, it will take time before all control mechanisms are in place. Chain-of-thought is one example: it’s still a bit awkward, but it gives a flavour of future possibilities.
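To make the RAG idea above concrete, here is a minimal sketch, assuming the official OpenAI Python client: the model names, the toy in-memory document store and the brute-force similarity search are illustrative, not a production recipe.

```python
# Minimal RAG sketch: retrieve the most relevant passages first, then ask
# the model to answer from those passages only. Assumes the official
# OpenAI Python client; documents and model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

documents = [
    "RAG grounds a model's answer in documents retrieved at query time.",
    "Hallucination rates drop when answers must stick to retrieved text.",
    "Chain-of-thought asks a model to reason step by step before answering.",
]

def embed(texts):
    """Turn texts into embedding vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question, k=2):
    # 1. Retrieve: rank documents by cosine similarity to the question.
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:k])
    # 2. Generate: the model answers from the retrieved context only.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Answer using ONLY this context, and say so if the answer isn't in it:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("Why does RAG reduce hallucinations?"))
```

The grounding instruction in the system message does the heavy lifting: the model is pushed to paraphrase retrieved text rather than improvise, which is where most hallucinations come from.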
That said, even if I’m not a big fan of AGI (see the following article), generative AI challenges human skills and abilities and, as a consequence, our very place within society.
Three directions for deeper exploration
Essentially, there are three areas that need to be investigated. First, our capacity to be truly creative. Second, AI’s impact on our cognitive and intellectual abilities. And finally, the question of usage.
1. Let’s start with creativity
Obviously, one could wonder whether GenAI is creative or not. But above all, this very question challenges us, humans. Thus, the real question should read: are humans any more creative than GenAI?
The answer isn’t straightforward, even if that may come as a surprise. One could argue that GenAI texts are good or bad, depending on one’s point of view. Yet one shouldn’t forget that texts produced by humans aren’t always better. And that’s what’s disturbing. As mentioned above, Loubna Ben Allal calls into question the notion that “human = good, synthetic = bad”.
The same applies to creativity. Alan Turing, in his 1950 paper Computing Machinery and Intelligence, had already rebutted a number of objections to the idea that a machine could be innovative. One of these objections claimed: “A machine can’t create.”
Creativity is also, and above all, about combinations, de-combinations and recombinations. A bit like a puzzle, if you wish. One cherry-picks from others’ work, or even one’s own, sometimes unconsciously, and recombines the pieces to build a new story, a new blog, a new project. Even artists aren’t necessarily all that ‘creative’ in the sense of making something entirely new from scratch. They often rely on self-reference: Tinguely with his zany machines, aka antimuseums; Monet with his views of Rouen and his infinite variations on water lilies; Soulages with his black paintings; Rothko with his ubiquitous red. Series are an integral part of art, and one of its main creative mechanisms.
Jonathan Gibbs, in Randall, even states that Young British Artists, like all artists, can at best come up with four genuinely original ideas in their entire career, the ones we’ll remember them for.
‘The way it works is that you’re only going to be remembered for four things.’
Gibbs, Jonathan. Randall, or The Painted Grape
And Gibbs is right. If artists give in to reproducing their own ideas, that’s also because it’s what people ask for. That’s why, for instance, minimal music — once dubbed repetitive and lately rebranded ‘neoclassical’ (Max Richter, Nils Frahm, Nicklas Paschburg, GrandBrothers…) — is so successful these days: it is based on the never-ending repetition of fairly similar musical patterns. And I won’t even mention popular — as in ‘pop’, inclusive of jazz — music, which is even more standardised (check out rhythm changes if you don’t believe me).
Thus, the question of whether machines are more or less creative than humans is anything but trivial.
2. Is AI making us stupid?
The next question is whether misusing these thinking machines ends up making us stupid (as one of my friends put it to me, “These tools are extremely addictive”). This question echoes what Nicholas Carr wrote a few years ago in The Atlantic: “Is Google Making Us Stupid?”
In that piece**, he argued that even though he wasn’t raised in the digital age and learned to read “normally” in books, he ended up using search engines and found that they made him lazy, encouraging minimal effort rather than combing through documents for hours before forming an opinion.
** yet another AI-written piece, by the way. I only inserted the link out of mischief. Our dear readers will find the Atlantic link by themselves using old-fashioned search engines or Perplexity.ai.
With generative AI, everything Carr described is blown out of proportion. Perplexity.ai is the epitome of this issue. Instead of using a search engine, one enters a prompt and, hey presto, Perplexity gathers the answers, summarises them and provides a list of links. The latter are not always relevant, but on average they’re not bad either. This process isn’t really less effective than wading through a so-called SERP (Search Engine Results Page) of questionable relevance or provenance, many of whose results were written by ‘SEO experts’ to trick the very same search engine (i.e. Google; see this post for details).
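For the curious, the recipe behind that kind of answer engine fits in a few lines. Below is a toy sketch of the pattern, not Perplexity’s actual implementation: `web_search` is a hypothetical stand-in for whatever search API one plugs in, and the summarising step assumes the OpenAI Python client.

```python
# Toy "answer engine": search first, then summarise with numbered sources.
# NOT Perplexity's implementation; web_search() is a HYPOTHETICAL stand-in.
from openai import OpenAI

client = OpenAI()

def web_search(query):
    """Hypothetical helper: a real version would call a search API.
    Stubbed with toy results here so the sketch runs end to end."""
    return [
        {"url": "https://example.com/a", "snippet": "Generative AI floods the Web with SEO copy."},
        {"url": "https://example.com/b", "snippet": "Answer engines summarise search results with links."},
    ]

def answer_with_sources(query):
    results = web_search(query)[:5]
    sources = "\n".join(f"[{i+1}] {r['url']}: {r['snippet']}"
                        for i, r in enumerate(results))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Answer from the numbered sources below, citing them as [n]:\n" + sources},
            {"role": "user", "content": query},
        ],
    )
    # As noted above, the cited links still need human cross-checking.
    return resp.choices[0].message.content

print(answer_with_sources("Is AI-generated SEO content taking over the Web?"))
```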
Some years ago, those SEO experts had such low-quality texts written by hand, often in low-wage countries; now they create them almost entirely with LLMs (it’s estimated that about 19% of Google’s top 20 results are AI-generated). By the way, those people in low-wage countries must have been made redundant, but who cares about poor people struggling to make a living? This is a dog-eat-dog world, is it not?
As to the question of whether generative AI is making us stupid, it’s a bit disingenuous, just as it was for Google. What’s certain is that it could be making us lazy (again, just like Google, especially since the introduction of position zero).
Getting direct answers to our questions means we lose the habit of digging for them ourselves and, above all, our critical thinking abilities. But it’s not that simple.
Concluding that AI alone is responsible for dumbing down the world — assuming that’s even happening — would be going a bit far. Intellectual laziness and lack of critical thinking aren’t new, and if you want to see evidence of that, I recommend you browse the site of the Reboot Foundation.
3. Prompting wizards
Our third angle is the quality of usage of these AI tools. Is it a real problem?
There’s indeed a misconception about how the population at large uses these tools. Adoption is certainly widespread, and it happened in a flash. But whether most users are wielding these tools properly is another kettle of fish. Believing we’ve all become prompt experts overnight is spurious; I’m not seeing it happen in the field.
For starters, we have a massive digital skills deficit. How could people who struggle to remember a password or sign a PDF form instantly be able to use generative AI effectively?
I see far too much straightforward copy-pasting in class and elsewhere. Also, as I’ve noticed in the course of my training sessions (thousands of professionals and students), few people are able to take the necessary step back to refine the content produced by these algorithms — even when encouraged to do so.
However much I may regret it, that’s beside the point. Steve Yegge is right: generative AI won’t help beginners or average employees become brilliant, but it will help experts get rid of them altogether. Getting started in business won’t be easy in the coming years.
Moreover, the generative AI scene is so hectic and unstable that even experts are losing track of which model is most effective. Almost every day, there’s some headline-grabbing announcement overshadowing yesterday’s.
And the ‘experts’ keep dishing out their analyses and forecasts. Some foresee the demise of generative AI (but that’s total nonsense), while others predict that GenAI will on the contrary be an all-out revolution (which is equally silly).
The technology digestion curve is our own special way of highlighting the hype surrounding innovations.
The truth, as we can see in the field, is that we are on a learning curve, not too different from what we went through with other digital innovations in the past.
Caption: Kathy Sierra once put forward the notion of “feature-itis”, which was spot on.
As a system grows more complex, Kathy Sierra showed, you end up losing your grip: a user who once felt in full control of the tool loses that control and regresses dramatically.
More recently, Maurizio Bisogni described how ChatGPT users’ perceived knowledge fluctuates over time, in relation to the Dunning-Kruger effect, a psychological phenomenon identified by David Dunning and Justin Kruger in their 1999 paper “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”.
This shows we have a tendency to overrate our abilities when we have too little information. A warning we might well direct at many of the analysts clogging up our social timelines with their views on the subject.
Conversely, the most expert people tend to underestimate their competence. This is something we also know as the ‘impostor syndrome’. Perhaps I suffer from the latter myself, because I hate the term ‘expert’. Even though I’ve been working in digital marketing for over 30 years, started my career in AI nearly 40 years ago, and have been documenting these topics for decades, I still feel I don’t know much. It seems only natural and necessary, given how volatile and complex this environment is.
Yet I see too many experts, some of whom are even behind these very discoveries, such as Geoffrey Hinton, one of the pioneers of neural networks and a Nobel Prize winner, who understand nothing about generative artificial intelligence, despite the fact that it is based on those very neural networks. In a BBC video, Hinton looks at ChatGPT and concludes that these machines can reprogramme themselves. In the long term, this is undoubtedly true, but it is not yet the case. I’ll come back to that later. So we need a certain humility when it comes to these subjects.
With all due respect for Hinton’s outstanding achievements in machine learning, he shows a nearly childlike ignorance when he claims neural networks can feel emotions. […] Emotions are so complex, a bridge between thought and will, a gateway to shared understanding between people and the world. By saying the machine can experience feelings, Hinton shows that he doesn’t understand what he’s built.
Robert M. Burnside – Robo Robert on Substack (2024)
I’m not writing this to diminish the great talents of the renowned British-Canadian scientist Geoffrey Hinton, a Nobel Prize winner for his work on neural networks, but to illustrate how siloed these areas can be, and how no one can honestly claim a complete understanding of the field of AI, if there is any such thing. As for LinkedIn influencers’ rushed opinions…
Truth be told, regarding usage, I don’t believe that a science of prompting — which I see more as a practice of common sense, trial and error — is essential. What I find vital, rather, is taking a step back, thinking carefully, applying reason, and sharpening one’s critical thinking skills.
Besides, returning to prompts, I had already guessed we’d see prompt generators appear, and here they are, because that kind of interface is cumbersome and awkward. Prompting is powerful but lengthy and tedious, requiring either voice-dictation skills (which most people lack) or quick, accurate typing, which is basically reserved for those who learned on an uncompromising typewriter and/or can touch-type without looking at the keys, like yours truly.
To be honest, I create all my Midjourney prompts on Claude or ChatGPT because I find the exercise quite tedious and slow, and LLMs are best placed to tailor a prompt for another generative AI in the required style.
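Here is the kind of thing I mean, as a minimal sketch assuming the OpenAI Python client; the model name and the style constraints in the instructions are just examples to adapt.

```python
# Sketch: let one LLM draft a prompt for an image generator.
# Model name and style constraints are examples, not a recommendation.
from openai import OpenAI

client = OpenAI()

def draft_image_prompt(idea):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Write a single, comma-separated Midjourney prompt covering "
             "subject, setting, style, lighting and an aspect-ratio flag."},
            {"role": "user", "content": idea},
        ],
    )
    return resp.choices[0].message.content

print(draft_image_prompt("a fictitious SEO expert who exists only in an image model's archives"))
```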
Some folks even bet that chatbots will chat to each other in a language only they can understand (see this video. Careful! It’s not a product but a demo made during a hackathon).
In short, usage doesn’t strike me as a major problem, even if most users’ results are far from great — even when guided.
So, what’s the problem with generative AI?
Let’s rule out a couple of areas at once. Ecological issues, to start with. Apart from a few ritualistic mentions and greenwashing initiatives, not much appears to be on the menu in that area. As someone who has been strongly committed to environmental concerns for ages, I find this deeply saddening, but I’ll have to bite the bullet: nobody cares. And the current T**mpmania isn’t going to help.
Bubble threats are real too, as Ed Zitron keeps hammering. Yet, the history of innovation has always shown that when some tech stuff is needed and the whole world is using it, money will always be found and invested. When there’s a will…
The Web’s “enshittification”
Web rot is probably a good avenue for our quest. I predicted it as soon as generative AI first emerged and GPT-3 was launched in 2020. Back then, during a PushEngage webinar, I forecast that the Web would be flooded with SEO content no longer created by humans but by machines. The latter deliver more, and better (by the standards of these ‘SEO experts’), than the armies of content creators from low-wage countries paid to boost webpage rankings through back-linking.
Five years later — a lifetime in Internet terms — what do we see?
As it happened, link-building requests died out almost instantly in 2023 and were replaced by proposals for AI-generated content creation. I saw them crop up on Visionary Marketing immediately, and the change was brutal. SEO content became more professionalised and multiplied at a frantic pace, as The Verge showed in its 2023 investigation of synthetic content farms.
The result today is conspicuous.
What was foreseeable has indeed happened, and it took five years. So much for those who talk about an overnight revolution. Even something as simple as replacing human content writers in low-wage countries with LLMs that churn out copy at high speed from a few basic instructions took five years; as for the rest, we may have to wait a while. In the meantime, hundreds of impoverished workers have presumably been laid off, unless they’ve retrained for AI-based content, which is likely but not proven.
That’s the genuine underlying problem. And it’s why I created humansubstance.com with some friends.
A group of stubborn bloggers who decided to write with their hands and their brains, not with machines. Like this 4,000-plus-word article, which I could very well have churned out in three seconds using ChatGPT — assuming ChatGPT could count words, and by Jove, it can’t.
Because here’s the hitch: we do need artificial intelligence to take out the rubbish, count words, and fix our grammar, punctuation, and spelling mistakes.
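That division of labour fits in a few lines. A minimal sketch, assuming the OpenAI Python client: the counting is done deterministically in plain code (precisely because LLMs are bad at it), and only the proofreading chore is delegated to the model.

```python
# Chores to the machine: count words deterministically (LLMs can't),
# and delegate only the proofreading, never the thinking.
from openai import OpenAI

client = OpenAI()

def word_count(text):
    return len(text.split())  # plain code counts better than any LLM

def proofread(text):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Fix grammar, punctuation and spelling only. "
             "Do not change the ideas, tone or structure."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

draft = "Let AI handle the chores, and humans do the thinking."
print(word_count(draft), "words")
print(proofread(draft))
```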
But we don’t need it to think in our place. And if the Web is rotting, or ‘enshittifying’, to borrow Cory Doctorow’s term, that doesn’t necessarily mean the end of real content marketing (genuine content, not SEO fodder).
It may not happen on the Web and this is sad news for Sir Tim Berners-Lee. Quality content will always find a way to be shared. If not on the Web, then somewhere else. Perhaps my vision is somewhat naïve, but I’ll own that. I’m inclined to believe good things can still and always happen. Let’s assume I’m wrong; at least I will die happy.
What’s the point of generative AI if it doesn’t relieve us of chores?
If Gemini can’t deduplicate data, what’s it for? (Examples posted by a LinkedIn user)
I also see plenty of players, analysts, and professionals around me who think, search, dig deep, and document beyond the surface. They don’t buy into the big headlines from generative AI evangelists, who increasingly come across like transhumanists, to quote Jean-Gabriel Ganascia.
Chores to AI
Make no mistake, I have nothing against generative AI. I just want it to take out the bins instead of trying to think in my place. And when I see some of the results from these tools, I’m not convinced the game is over yet.
If ChatGPT can’t read an Apple Pages file and orders me to switch to Word format instead, what’s the point?
I use them a lot to prepare my lectures (most of which are aimed at training students to keep enough distance to interpret these tools’ results rationally rather than emotionally) and to summarise my articles for my students’ presentations.
But I’m always the one doing the thinking, and all I want from these tools is to take out the bins and turn my most relevant punchlines into PowerPoint.
Why? Because copying out your own words into PowerPoint is basically a chore. And that’s why, for my keynotes, I refrain from using slides. Those addicted to PowerPoint can still download the slides from my blog if they wish.
Finally, at the heart of this debate about AI’s role in content marketing lies a big confusion about the automation of creative processes, which aren’t continuous. It’s an illusion to think you can simply press a button and get a result. Sure, you get some sort of result, but which one, and what is it worth? For an SEO content producer (I can’t bring myself to call them ‘authors’, sorry), it’s probably a thousand times better and faster than what a human being could write. But for true authors, those who write with their brains and for their readers, not for a Google bot, automating such tasks gives the impression of saving time, whereas reality is often radically different. Randall Munroe illustrates this brilliantly in his schematic about coding. And it’s even more apt for content marketing.
And all that AI SEO copy for what outcome? More efficiency? Neil Patel shows otherwise in the following chart.
So, to wrap up this article, I urge you never to relinquish your capacity to ponder nor your critical thinking skills. Certainly, humans are prone to error. Sometimes they’re even worse than LLMs, as Kevin Roose demonstrated in the New York Times. And that’s the real tragedy.
Even if general artificial intelligence is probably an overstatement (we can’t define it anyway), insisting the opposite — that all humans are brilliant — is an even bigger mistake.
But despite these flaws, it was us, humans, who built these machines. It’s our job to use them for the better, not for the worse. It’s up to you to do the thinking and let AI do the chores. That’s what ought to be.