This week we talk about AI chatbots, virtual avatars, and romance novels.
We also discuss Inkitt, Galatea, and LLM grooming.
Recommended Book: New Cold Wars by David E. Sanger
Transcript
There’s evidence that the US Trump administration used AI tools, possibly ChatGPT, possibly another, similar model or models, to generate the numbers they used to justify a recent wave of new tariffs on the country’s allies and enemies.
It was also recently reported that Democratic mayoral candidate Andrew Cuomo used AI-generated text and citations in a plan he released called Addressing New York’s Housing Crisis. And this case is a bit more of a slam dunk, as whoever put the plan together for him seems to have just copy-pasted snippets from the ChatGPT interface without changing or checking them—which is increasingly common for all of us, as such interfaces are beginning to replace even search engine results, like those provided by Google.
But it’s also a practice that’s generally frowned upon, as—and this is noted even in the copy provided alongside many such tools and their results—these systems provide a whole lot of flawed, false, incomplete, or otherwise not-advisable-to-use data, in some cases flubbing numbers or introducing bizarre grammatical inaccuracies, but in other cases making up research or scientific papers that don’t exist, while presenting them the same as they would a real-deal paper or study. And there’s no way to know without actually going and checking what these things serve up, which can, for many people at least, take a long while; so a lot of people don’t do this, including many politicians and their administrations, and that results in publishing made-up, baseless numbers, and in some cases wholesale fabricated claims.
This isn’t great for many reasons, including that it can reinforce our existing biases. If you want to slap a bunch of tariffs on a bunch of trading partners, you can ask an AI to generate some numbers that justify those high tariffs, and it will do what it can to help; it’s the ultimate yes-man, depending on how you word your queries. And it will do this even if your ask is not great or truthful or ideal.
These tools can also help users spiral down conspiracy rabbit holes, can cherry-pick real studies to make it seem as if something that isn’t true is true, and it can help folks who are writing books or producing podcasts come up with just-so stories that seem to support a particular, preferred narrative, but which actually don’t—and which maybe aren’t even real or accurate, as presented.
What’s more, there’s also evidence that some nation states, including Russia, are engaging in what’s called LLM grooming, which basically means seeding false information to sources they know these models are trained on so that said models will spit out inaccurate information that serves their intended ends.
This is similar to flooding social networks with misinformation and bots that seem to be people from the US, or from another country whose elections they hope to influence: each bot poses as a person who supports a particular cause, but in reality is run by someone in Macedonia or within Russia’s own borders. Or maybe changing a Wikipedia entry and hoping no one changes it back.
Instead of polluting social networks or Wikis with such misinfo, though, LLM grooming might mean churning out websites with high SEO (search engine optimization) rankings, which then pushes them to the top of search results, which in turn makes it more likely they’ll be scraped and rated highly by AI systems that gather some of their data and understanding of the world, if you want to call it that, from these sources.
Over time, this can lead to more AI bots parroting Russia’s preferred interpretation, their propaganda, about things like their invasion of Ukraine, and that, in turn, can slowly nudge the public’s perception on such matters; maybe someone asks ChatGPT about Russia’s invasion of Ukraine, after hearing someone who supports Russia claim that it was all Ukraine’s fault, and they’re told by ChatGPT, which would seem to be an objective source of such information, being an AI bot, that Ukraine in fact brought it upon itself, or is in some way actually the aggressor, which would serve Russia’s geopolitical purposes. None of which is true, but it starts to seem more true to some people because of that poisoning of the informational well.
So there are some issues of large, geopolitical consequence roiling in the AI space right now. But some of the most impactful issues related to this collection of technologies are somewhat smaller in scale, today, at least, but still have the potential to disrupt entire industries as they scale up.
And that’s what I’d like to talk about today, focusing especially on a few recent stories related to AI and its growing influence in creative spaces.
—
There’s a popular meme that’s been shuffling around social media for a year or two, and a version of it, shared by an author named Joanna Maciejewska (machie-YEF-ski) in a post on X, goes like this: “You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
It could be argued, of course, that we already have technologies that do our laundry and dishes, and that AI has the capacity to make both of those machines more efficient and effective, especially in terms of helping manage and moderate increasingly renewables-heavy electrical grids, but the general concept here resonates with a lot of people, I think: why are some of the biggest AI companies seemingly dead-set on replacing creatives, who are already often suffering from financial precarity, but who generally enjoy their work, or at least find it satisfying, instead of automating away the drudgery many of us suffer in the work that pays our bills, in our maintenance of our homes, and in how we get around, work on our health, and so on?
Why not automate the tedious and painful stuff rather than the pleasurable stuff, basically?
I think, looking at the industry more broadly, you can actually see AI creeping up on all these spaces, painful and pleasurable, but generative AI tools, like ChatGPT and its peers, seem to be especially good at generating text and images and such, in part because they’re optimized for communication, being chatbot interfaces layered over collections of more complex tools, and most of our entertainments operate in similar spaces; using words, using images, these are all things that overlap with the attributes that make for a useful and convincing chatbot.
The AI tools that produce music from scratch, writing the lyrics and producing the melodies and incorporating different instruments, working in different genres, the whole thing, soup to nuts, are based on similar principles to AI systems that work with large sets of linguistic training data to produce purely language-based, written outputs.
Feed an AI system gobs of music, and it can learn to produce music at the prompting of a user, then, and the same seems to be true of other types of content, as well, from images to movies to video games.
This newfound capacity to spit out works that, for all their flaws, would have previously required a whole lot of time and effort to produce, is leading to jubilation in some spaces, but concern and even outright terror in others.
I did an episode not long ago on so-called ‘vibe coding,’ about people who in some cases can’t code at all, but who are producing entire websites and apps and other products just by learning how to interact with these AI tools appropriately. And these vibe coders are having a field day with these tools.
The same is increasingly true of people without any music chops who want to make their own songs. Folks with musical backgrounds often get more out of these tools, same as coders tend to get more from vibe coding, in part because they know what to ask for, and in part because they can edit what they get on the other end, making it better and tweaking the output to make it their own.
But people without movie-making skills can also type what they want into a box and have these tools spit out a serviceable movie on the other end, and that’s leading to a change similar to what happened when less-fiddly guns were introduced to the battlefield: you no longer needed to have super well-trained soldiers to defeat your enemies, you could just hand them a gun and teach them to shoot and reload it, and you’d do pretty well; you could even defeat some of your contemporaries who had much better trained and more experienced soldiers, but who hadn’t yet made the jump to gunpowder weapons.
There are many aspects to this story, and many gray areas that are not as black and white as, for instance, a non-coder suddenly being able to out-code someone who’s worked really hard to become a decent coder, or someone who knows nothing about making music creating bops, with the aid of these tools, that rival those of actual musicians and singers who have worked their whole lives to be able to do the same.
There have been stories about actors selling their likenesses to studios and companies that work with studios, for instance, those likenesses then being used by clients of those companies, often without the actors’ permission.
For some, this might be a pretty good deal, as that actor is still free to pursue the work they want to do, and their likeness can be used in the background for a fee, some of that fee going to the actor, no additional work necessary. Their likeness becomes an asset that they wouldn’t have otherwise had—not to be used and rented out in that capacity, at least—and thus, for some, this might be a welcome development.
In some cases, though, this has resulted in situations in which said actor discovers that their likeness is being used to hawk products they would never be involved with, like online scams and bogus health cures. They still receive a payment for that use of their image, but they realize that they have little or no control over how and when and for what purposes it’s used.
And because of the aforementioned financial precarity that many creatives in particular experience as a result of how their industries work, a lot of people, actors and otherwise, would probably jump at the chance to make some money, even if the terms are abusive and, long-term, not in their best interest.
Similar tools, and similar financial arrangements, are being used and made in the publishing world.
An author named Manjari Sharma wrote her first book, an enemies-to-lovers style romance, in a series of installments she published on the free fanfic platform Wattpad during the height of the Covid pandemic. She added it to another, similar platform, Inkitt, once it was finished, and it garnered a lot of attention and praise on both.
As a result of all that attention, the folks behind Inkitt suggested she move it from their free platform to their premium offering, Galatea, which would allow Sharma to earn a portion of the money gleaned from her work.
The platform told her they wanted to turn the book into a series in early 2024, but that she would only have a few weeks to complete the next book, if she accepted their terms. She was busy with work, so she accepted their offer to hire a ghostwriter to produce the sequel, as they told her she’d still receive a cut of the profits, and the fan response to that sequel was…muted. They didn’t like it. Said it had a different vibe, wasn’t well-written, just wasn’t very good. Lacked the magic of the original, basically.
She was earning extra money from the sequel, then, but no one really enjoyed it, and she didn’t feel great about that. Galatea then told Sharma that they would make a video series based on the books for their new video app, 49 episodes, each a few minutes long, and again, they’d handle everything, she’d just collect royalties.
The royalty money she was earning was a lot less than what traditional publishers offer, but it was enough that she was earning more from those royalties than from her actual bank job, and the company, due to the original deal she made when she posted the book to their service, had the right to do basically anything they wanted with it, so she was kind of stuck, either way.
So she knew she had to go along with whatever they wanted to do, and was mostly just trying to benefit from that imbalance where possible. What she didn’t realize, though, was that the company was using AI tools to, according to the company’s CEO, “iterate on the stories,” which basically means using AI to produce sequels and video content for successful, human-written books. As a result of this approach, they have just one head of editorial and five “story intelligence analysts” on staff, alongside some freelancers, handling books and supplementary content written by about 400 authors.
As a business model, it’s hard to compete with this approach.
As a customer, at the moment, at least, with today’s tools and our approach to using them, it’s often less than ideal. Some AI chatbots are helpful, but many of them just gatekeep so a company can hire fewer customer service humans, saving the business money at the customer’s expense. That seems to be the case with this book’s sequel, too, and many of the people paying to read these things assumed they were written by humans, only to find, after the fact, that they were very mediocre AI-generated knock-offs.
There’s a lot of money flooding into this space predicated in part on the promise of being able to replace currently quite expensive people, like those who have to be hired and those who own intellectual property, like the rights to books and the ideas and characters they contain, with near-free versions of the same, the AI doing similar-enough work alongside a human skeleton crew, and that model promises crazy profits by earning the same level of revenue but with dramatically reduced expenses.
The degree to which this will actually pan out is still an open question, as, even putting aside the moral and economic quandary of what all these replaced creatives will do, and the legal argument that these AI companies are making right now, that they can just vacuum up all existing content and spit it back out in different arrangements without that being a copyright violation, even setting all of that aside, the quality differential is pretty real, in some spaces right now, and while AI tools do seem to have a lot of promise for all sorts of things, there’s also a chance that the eventual costs of operating them and building out the necessary infrastructure will fail to afford those promised financial benefits, at least in the short term.
Show Notes
https://www.theverge.com/news/648036/intouch-ai-phone-calls-parents
https://arstechnica.com/ai/2025/04/regrets-actors-who-sold-ai-avatars-stuck-in-black-mirror-esque-dystopia/
https://archive.ph/gzfVC
https://archive.ph/91bJb
https://www.cnn.com/2025/03/08/tech/hollywood-celebrity-deepfakes-congress-law/index.html
https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
https://techcrunch.com/2025/04/13/jack-dorsey-and-elon-musk-would-like-to-delete-all-ip-law/
https://www.404media.co/this-college-protester-isnt-real-its-an-ai-powered-undercover-bot-for-cops/
https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/
https://www.theverge.com/news/642620/trump-tariffs-formula-ai-chatgpt-gemini-claude-grok
https://www.wsj.com/articles/ai-cant-predict-the-impact-of-tariffsbut-it-will-try-e387e40c
https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia/