A large part of the discourse on the future of work leaves out huge chunks of the working class, just as, a decade or so ago, the discourse on the connecting power of social media left out the capitalistic motivations of the corporations running those platforms. Every technology has an ethic of its own, and the ethic behind generative AI has so far proven to be somewhat akin to robbery on a global scale. Maybe things will change. Maybe they won’t. But questions need to be asked of this new system that seeks to govern how we see labour and how we define work.
In the age of artificial intelligence, we encounter creators and content makers who are deeply optimistic about its potential. Their enthusiasm is understandable; their livelihoods often depend on the increasing use and acceptance of AI. Many of these creators actively encourage others to embrace AI, presenting it as a tool that can revolutionize industries and transform workflows.
Recently, I watched a video where an AI advocate made a point that struck a chord with me. They said, "Customers don't care how the software is made." Their argument was straightforward: whether you write the code manually or use AI assistance, customers only care about the product's functionality. This mindset might work for software, but its implications for creative fields deserve deeper exploration.
Creativity is Personal
Unlike software, creative endeavors don’t operate solely on utility. Creative works depend on interpersonal relationships between the creator and the audience. For instance, if I write a story and share it with you, the relationship we form is based on my honesty about the work's origins. If I claim it’s entirely my creation but secretly relied on AI, you might feel deceived. Transparency here becomes critical.
Honesty in creativity is about acknowledging the tools and processes used. If I incorporate AI into my writing, I owe it to my audience to share that fact. Concealing it risks undermining the trust that defines the relationship between creators and their audience.
Art is More Than a Commodity
The internet is full of voices insisting that creative works are merely commodities. A book, for example, can be seen as a product—written, packaged, and sold. But art transcends commodification. The act of creation, the labor behind it, and the authenticity it carries all add dimensions that cannot be reduced to market value.
This distinction becomes apparent when controversies arise about artists and their personal lives. Readers often stop supporting a writer whose ethics they find questionable. This reaction demonstrates that art, for many, is not separate from the artist. It is not merely about consuming a product; it is about connecting with the creator's values and vision.
The Role of AI in Art
AI's growing influence in creative fields brings us to a crossroads. Several scenarios could unfold:
* Over-Saturation: AI-generated content becomes ubiquitous, losing its novelty and value. We might adjust to this as the new normal, just as we have with Photoshop-edited images.
* Exclusivity: The high cost of AI tools limits their use to a few creators, making AI-generated content a niche rather than the norm.
* Integration: Human creators and AI collaborate in honest, balanced ways, leading to new forms of creativity that audiences learn to appreciate.
Over time, we may view AI-assisted creations as distinct yet legitimate categories, akin to photography or digital art. History has shown us that new mediums often face skepticism before gaining acceptance. Digital art faced similar controversies but is now widely acknowledged as a legitimate form of expression. AI art may follow a similar trajectory.
The Need for New Definitions
As AI reshapes creative processes, we may need new terms to define the roles of those using these tools. This linguistic gap often fuels the tensions we see online. Are these individuals artists, writers, or something else entirely? Until we find answers, the best path forward is honesty.
Honesty as the Foundation
Creativity thrives on authenticity and trust. If you’re an artist experimenting with AI tools, be upfront with your audience. Acknowledge where and how you’ve used AI, and don’t claim sole credit for work that isn’t entirely yours. This openness not only preserves the integrity of your relationship with your audience but also enriches the evolving conversation about creativity in the AI age.
Disclaimer: This essay is based on a podcast episode originally recorded in Hindi. The content was translated and edited with the assistance of AI tools. While efforts have been made to preserve the original meaning and tone, some nuances may have been adapted during the editing process. This disclaimer was also written by AI. The previous sentence, as well as this sentence, were written by a human.
Hello everyone, and welcome to another episode of the podcast.
I wanted to talk about importance—about how we know something is important and how we know something is not important—and how our way of measuring importance has been somewhat impaired in this age of the information superhighway.
I wanted to talk about letters because that is exactly what I found myself talking to my father and mother about some weeks ago. I was inundated with messages, and I asked them how many messages they had to deal with back in the days before even telephones existed—back when the primary medium of entertainment and information was radio, and people sent letters to each other because there was no electronic way to communicate.
My mother said she had a lot of friends who were really, you know, avid letter writers, so she used to get multiple letters every week—like maybe four or five. My father said maybe twice a week, and on really uneventful months, maybe a few—like four or five a month. And it varied, of course, depending on how many people one was in contact with and how much information people wanted to share.
But I got to thinking about the process of letters, how they're composed, how they're sent, how much time they take to reach the person for whom they're intended. And I quickly realized that time plays a huge role in it, doesn't it?
It takes time to live life. It takes time to think about, “Okay, I have lived life for a month, and now these are the things that stand out.” And then it takes time to compose the letter because you're curating your life. You'll mention only the things that you think are important, and you leave out most of it because there is a limited amount of space in the letter. And also because, you know, not everything is worth mentioning.
Then you write the letter; you take time to write the letter in a very proper way because readability matters. And then you send it, and then you wait for the letter to get to the person you have sent it to. That person gets the letter a few days later, by which time, if there was something urgent in it, it is probably already outdated. And if there wasn’t anything urgent in it, then they read the letter and absorb it because it is a valuable thing. It stands out from the mundane reality of their life.
They were doing work, they were living life, they were doing other things, and then the letter comes. They take time out of their life to read that letter, to absorb it, to appreciate it, and to think about the person who sent it. Then they preserve the letter, because they can—it doesn't take much space—so that they may perhaps read it again at a later point in time.
In all of this, time played a huge role. Time was the limiting factor. Time and space got in the way of the letter. And almost miraculously, they made the experience important, because we measure how important something is by how much time and how much space it occupies.
I mean, in our times, practically 90% of the value of a luxury item is its price. And if a thing does not cost that much, then it must have less value. We measure the value of things in our life using how much they cost.
I got to thinking about our messaging systems, about the systems that we use to contact each other. How easy it is to contact anyone—even with a video call. And as far as text messaging goes, that's even easier and faster. I mean, faster doesn't even come into it. It's instantaneous. And in fact, the thing that gets in the way of getting to a message that has been sent to you and reading it and appreciating it is, again, time—but it is a lack of time.
People can send you whatever they want, whenever they want, and it will get to you instantaneously. Whether or not you are able to get to it, read it, and reply depends on how much time you have. And quite possibly, the reason you don't have time is that you are reading other messages. Forget getting four messages a week; you're getting orders of magnitude more. On an hourly basis, your message boxes are full.
People are sending you random stuff—a link, a video, maybe a single word or a single emoji. Often, people are not writing longer messages at all. Letter writing is almost a forgotten art—people talk about it more than they practise it. And the result of all this has been a great flattening of meaning and importance. In the life of someone who lived before the time of telephones, a letter stood out. It was important because you could tell that it took time, space, and effort to produce it and to bring it to you.
Now, we don’t get letters. We don’t get messages that are of vital importance. We get much more by way of messaging, and all of it has the same value—which is zero. If, in your messaging landscape right now, you were to look at two messages and try to decide which one is more important, which one is something that you will return to later and re-read because it has intrinsic value of its own, you might struggle to find the answer to that question.
One of the things that our electronic messaging landscape has done to us is that it has flattened all our messages into the same level of importance. The meme that a friend sent you is of the same importance as an office email. The office email is of the same importance as a video link someone sent to you. And as a result of all this, we live in a time where nothing is more important or less important than any other thing.
And a side effect of this is that our understanding of what is important is now a free-for-all thing. It is out there, and people are looking to decide for us what is important for us. So a quarrel between two social media influencers becomes the most important thing in our life, despite the fact that if we look at real life, we have other more important things to worry about. But a social media platform powered by an algorithm that feeds on our emotions has managed to become something that can decide for us what is more important.
In this attention-based economy of ours, a politician or a political party spokesperson can stand on a stage, real or virtual, and tell us that their personal religious agenda is more important than the future of our children or the fact that we don’t have enough money to buy food for our families.
And people fall for it too because we have lost our ability to measure importance. The scale we use to decide what is more important and what is less important is no longer in our hands. And it is out there in the hands of people with power and advertisers and corporations that feed us “important” things using an algorithm.
Just so I'm clear, I’m not really calling for a return to the good old days like an old man. What I am calling for is more control over the mechanisms in our brain that decide what is more important and what is less important. Because if we allow this brave new world of ours to decide what is more important for us, it will tell us. And what it will tell us will probably not be the right answer.
Thank you for listening. If you want to support this podcast, you can do so by clicking the support button on this website, or you can do so by supporting me on Patreon at patreon.com/vimoh.
If you’re listening to this episode on Spotify or Apple Podcasts, please try to give it a five-star rating. It helps. Thank you for listening. I'll see you in the next one.
Disclaimer: This episode transcript has been refined and formatted using an AI tool for clarity and readability.
In this episode, I delve into the cultural and linguistic challenges we face in understanding artificial intelligence. I argue that our current dissonance stems from the lack of appropriate language to describe AI entities that exhibit human-like behavior, such as expressing emotions or giving advice. Traditionally, words like "robot" and "artificial intelligence" have denoted futuristic concepts, but now, as these technologies become part of our everyday lives, we’re forced to redefine them in the present, creating confusion and unease.
I liken this challenge to having a future guest arrive unexpectedly early, prompting a need to adjust quickly. This dissonance, I suggest, is partly rooted in the speed at which AI has arrived, catching us off guard. Now, society must confront this technological shift and determine whether to welcome this change or resist it. The episode closes with a call for reflection on how to integrate AI into our world thoughtfully and pragmatically.
Disclaimer: This summary was generated with the assistance of an AI tool.
As a YouTuber who publishes content about atheism, culture, and society, I frequently get questions from people who want answers to specific points. I am therefore often at pains to explain to them that I have already answered this particular query in a previous video or essay. However, I don’t always remember where I have done so. Often, it is a bit of a task to look it up, find the exact link and then the exact timestamp, and then email it back to the person. And if my correspondent is angry or rude or annoying, my motivation to perform any or all of these tasks is even lower.
But Google’s NotebookLM offers me a great way out of it. I have created a few notebooks to help me organise and sift through my own output from previous years. This includes text, video, and audio files in both Hindi and English. NotebookLM not only understands and catalogues it all with great efficiency, it also helps me recall any of it at a moment’s notice in the form of a simple chat.
So I can type this into the chatbot:
Please generate an email response to someone who is angrily asking me why I never speak about Islam. Be calm and polite yet firm.
and get this as a reply:
Subject: Re: Your question about my views on Islam
Dear [Name],
Thank you for your email. It seems there's been a misunderstanding. I do talk about Islam and a variety of religious traditions in my work.
To address your concern directly, I have discussed Abrahamic ideas like the burden of proof, a concept that often arises in debates about the existence of God. [1][2] The sources show that when challenged about their beliefs, theists will often shift the burden of proof onto skeptics. [2] However, I argue that the burden of proof lies with the person making the claim. [2] If someone claims that a box contains only dates, the burden of proof is on them to demonstrate that this is true. [3]
You might also be interested in my discussion of the watchmaker analogy, a popular argument for the existence of God. [4] The analogy suggests that just as a watch requires a watchmaker, the universe's complexity points to an intelligent designer. [4] However, I counter that we only assume a watchmaker exists because we have prior knowledge of human-made objects. [4] We don't apply the same logic to naturally occurring phenomena like rabbits or rocks. [4] Therefore, we cannot conclude that the universe requires a creator simply because it appears orderly. [4]
Moreover, I frequently address the dangers of blind faith and tradition. [5][6][7] I argue that these are not sufficient grounds for belief and urge viewers to think critically and demand evidence. [5][6][7] For example, I have discussed how many religious people's belief in God stems from childhood indoctrination rather than reasoned arguments. [6] They accept their faith because their family did, without questioning its validity. [6]
My goal is to encourage viewers to examine their beliefs and challenge dogma, regardless of the source. I believe that open and honest discussions about religion are essential, and I strive to approach these conversations with nuance and respect.
I hope this clarifies my position.
Best regards, [Your name]
The numbers next to some words are citations to the exact places in the transcripts of my videos where I say the thing being mentioned.
I guess all I need now is for this tool to be integrated into Gmail.
A secondary thing that I get out of it is an audio file presented as two people chatting about my work. I am sharing the “Deep Dive” that NotebookLM generated after feeding on 34 of my YouTube videos.
Okay, so this thought kind of woke me up in the middle of the night, so I’m recording it because I don’t want to forget it. It’s about the nature of writing right now. You have people who call themselves writers and who use AI to write. And then there are people like me who say that if you’re not actually writing, then you’re not really a writer. If you’re using AI to write, are you truly a writer? That stance can come across as an insult, like I’m some snooty purist saying, “You’ll never be like us—real writers.” But I’m actually talking about something else.
We didn’t have video for a huge portion of human history. Then, when video came along, cinema happened, and a whole lot of professions mushroomed around it: cinematographer, choreographer, director, actor, and so on. Directors and actors existed before, but they performed in live plays on stage, with no recording involved. Cinema was something new. Pardon the expression (and forgive the barking dog in the background—it’s out of my control). Cinema was sort of like a play, except now the performance could be recorded and replayed. It was a new form of storytelling that hadn’t existed before, and its existence opened up new ways to tell stories.
The people who make movies aren’t writers, nor are they simply people engaged in theater. It’s a bit of both, and maybe neither. So, when I say that someone using AI to “write” a story isn’t a writer, I know it sounds purist. But I think that those people shouldn’t call themselves writers. The AI tools they’re using aren’t tools for writers. If you’re a writer, you’ll write. These tools are for a new kind of creator—one we don’t have a name for yet.
They’re people who, in a way, direct prose into existence. Until now, creating prose has been the job of a writer, so these individuals are mistakenly being classified as writers. But they’re not writers. They’re something else, and we need a term for them soon, or this debate will continue longer than it needs to.
That’s what I wanted to say. Thanks for listening to this midnight rant. Now I’ll go back to sleep.
This episode was recorded without a script, transcribed by Substack, and then the transcript was cleaned up using ChatGPT.
Hello, everyone, and welcome to another episode of the podcast.
Today, I want to discuss something that's been on my mind for a while. It relates to my recent decision to leave social media. One of the reasons behind this was that I felt my way of thinking was being influenced by the need to produce content in very specific formats. I realized that, over time, this could harm my creative process. So, I want to break down my thoughts on this, hoping to make it understandable to others.
Initially, I considered putting this out as a public video, but then I realized that not everyone might resonate with it. Instead, I’m sharing it here, knowing that many of my subscribers are creators themselves and will likely understand what I’m saying.
Think of it this way: if you’re a creator—a writer, painter, filmmaker, musician, songwriter, or podcaster—you naturally think in terms of the medium you use to convey your message. For example, a filmmaker who has an idea will start thinking in cinematic terms, translating that idea into visuals before expressing it. A writer will seek the best words to convey the idea, while a painter will think about colors and painting styles. And this applies to every creative field.
In essence, the medium we’re used to can shape how we think about and express our ideas. The reason I left social media was that my mode of thinking was adapting itself to fit the formats commonly used on those platforms—like short videos on a mobile phone, often with the camera pointed at my face, cramming complex ideas into just 60 seconds.
This led to a shift: my brain was starting to force ideas into a format that wasn’t always suitable. Some ideas just don’t fit certain formats. Sometimes, a filmmaker’s idea should be written down first; sometimes, a writer's idea might be best suited for film. But social media compels us to think in constrained formats, whether or not they fit the ideas we have.
And here’s where social media adds a new layer: as creators and consumers, we’re all reacting to and thinking about the same trending topics. Social media’s nature compels us to align with popular topics, often pushing us to form "topic tribes." We’re expected to have opinions on whatever’s trending, sometimes to the extent that not participating can lead to accusations of not caring or even undermining the interests of others.
As creators on social media, our thinking gets shaped by the format. It’s affecting us in ways that were unimaginable a few years or decades ago. We’re often forced to condense our thoughts into 280-character tweets or 60-second videos, or whatever the trending format may be. Some ideas can’t be adequately expressed in these formats, and yet we try, creating distorted versions of the ideas we wanted to share.
Look at television debates, for example. The format requires antagonism and aggression, even though the topics are often public-interest issues meant to inform. The format twists these topics into something more about entertainment than genuine public enlightenment, leading audiences to anger rather than thoughtful decision-making. Issues like unemployment, healthcare, and education become polarized, and we’re encouraged to view them through a divisive lens.
I left social media because I have ideas that I’ve spent years developing—ideas that don’t fit into 60 seconds. These aren’t concepts best expressed through short-form content meant to be consumed quickly and then forgotten. These ideas need time, attention, and depth, things that social media, with its fragmented attention spans, often works against.
Perhaps some of you can juggle multiple projects simultaneously. I’m not one of those people. I tend to focus on one project at a time, maybe two at most, where one is primary, and the other is secondary. I know people who can maintain a writing routine while running a YouTube channel with regular live streams, but that’s not me. I had to make the choice to step away from social media to reclaim the part of my mind best able to engage with these deeper ideas.
I’m happy to report that this choice has already started to pay off. If you, too, find that social media affects your thinking or ability to focus, maybe take a moment to reflect on this.
Thank you for listening to this episode. If you enjoyed it, you can support my work on Patreon at patreon.com/vimoh. If you’re listening on Spotify or Apple Podcasts, please consider giving it a five-star rating. And if you’re listening on Substack, there’s a support link at the top of the page for one-time contributions. Your support helps me keep creating content that encourages deeper thinking on topics we don’t often discuss.
I’ll see you next time.
This episode was recorded without a script, transcribed by Substack, and then the transcript was edited and cleaned up by ChatGPT.
Hello everyone, and welcome to another episode of the podcast. This is being recorded on Diwali night, so if you hear some background noise, that’s the neighborhood celebrating.
Tonight, I want to talk about a shift in how we think of misinformation. Traditionally, we imagine someone actively keeping the truth from us or replacing it with lies. In this scenario, we picture essential information—critical for making informed decisions—not reaching us. Instead, we’re fed propaganda, and as a result, our choices get skewed by falsehoods.
But today, I think we’re in an age of meaninglessness, which has a few layers. Our issue isn’t the lack of information; it’s that we have too much of it. We’re drowning in multiple versions of the same event, endless claims, and interpretations, making it difficult to discern the truth. Picture a fantasy scene where an evil sorcerer multiplies themselves, creating countless versions. Which one is the real sorcerer? By the time you figure it out, the opportunity to act has slipped away.
In this age, meaning isn’t withheld; instead, we're overwhelmed with so many interpretations that identifying the truth becomes nearly impossible. For instance, fact-checkers like Alt News have noted that misinformation has grown so vast, it’s impossible to debunk it all. To bury one truth, you need twenty lies, each similar enough to confuse. In this haze of half-truths and close-but-not-quite-facts, the moment to recognize the truth often vanishes.
This information overload forces us to sift endlessly through a vast pool of misleading narratives. Even with the world at our fingertips, the clarity we once dreamed of has turned into a nightmare. Today, our devices, rather than providing answers, often contribute to our confusion. This issue has grown so pervasive that chatbots like ChatGPT and others even include disclaimers—they might be providing inaccurate information, not out of malice, but because the system itself reflects our fractured landscape of truth and misinformation.
I mention this because I'm currently writing a story that touches on these themes. It’s a sequel to an earlier science fiction piece, and it examines how the dream of an “information superhighway” turned into a trap of misinformation. Once, we thought we’d gain clear access to the world’s knowledge, but now, every search leaves us questioning if what we found is real.
On a side note, I recently explored AI art generators, and I started wondering whether it’s the tool creating art or my imagination filling in the gaps. The generator produces something close to my vision but not quite there. It's a thought I’ll leave you with this weekend.
If you’d like to support the podcast, you can join me on Patreon at patreon.com/vimoh. Patreon supporters get early access to episodes. This one, recorded tonight, will go live tomorrow morning. As for social media, I’ve officially left Instagram as of today, which I might discuss in an upcoming episode.
Thank you for listening, and see you next time.
The podcast episode was recorded in one take without any script. The transcript was generated by Substack and cleaned up using ChatGPT.
This episode is a stitched-together version of a bunch of videos I made for a short-lived channel I had some time ago, so ignore my changing appearance throughout it. The episode was recorded over several days.
Below is an AI-generated (NotebookLM) summary of the episode.
The provided text is an excerpt from a podcast episode where the speaker discusses four books that explore the impact of technology, particularly the internet and social media, on human thought, behavior, and culture. The speaker begins by mentioning Digital Minimalism by Cal Newport, which advocates for a mindful approach to technology use by suggesting a digital detox to reduce the negative effects of constant online engagement. Next, the speaker highlights The Cult of the Amateur by Andrew Keen, a book published around 2007, which critiques the internet's impact on culture and quality work due to its emphasis on amateurism and the devaluation of expertise. Following that, the speaker mentions Free Ride by Robert Levine, a book that focuses on how the internet is affecting the entertainment and publishing industries. Finally, the speaker discusses The Shallows by Nicholas Carr, which examines how the internet is altering our cognitive processes, particularly our abilities to focus, read deeply, and retain information. The speaker concludes by stressing the importance of recognizing how technology shapes us and the need to remain aware of the potential negative consequences of its widespread use.
The real danger from AI tools to writers is not that people using AI will replace writers. It is that writers will get too reliant on AI tools and consequently lose the spark that makes them writers.
This video is from a YouTube channel I used to run. The channel no longer exists (because the only thing more difficult than running one YouTube channel is running four YouTube channels) and I figured this Substack is a good place for the things I had made for it.