"A few episodes ago, we talked about how social media algorithms work. But here's the thing: all of that assumes what you're seeing is real. And increasingly? It's not."
Schnelle dives into deepfakes—AI-generated videos and images so convincing you cannot tell they're fake. This isn't future technology. This is happening now, and it's getting scarier by the month.
What are deepfakes? Videos, images, or audio created using AI to make it look like someone said or did something they never actually did. The technology is now so sophisticated that even experts struggle to identify them.
Why this matters now: Your kids are seeing this content on TikTok, YouTube, Instagram right now. And unless we teach them to question what they see, they're going to believe it.
The creepy celebrity deepfakes: Tools like Sora (from OpenAI) generate incredibly realistic video from text descriptions. People are creating videos of dead celebrities—Tupac, Robin Williams, Marilyn Monroe—in new content.
Schnelle finds this deeply unsettling. Not just ethically (these people can't consent), but because it normalizes the idea that videos aren't real. If everything can be fake, then nothing has to be real.
How convincing are they? Early deepfakes had tells—weird blinking, unnatural movements. Now? Indistinguishable from real footage.
Kids believe videos of their favorite influencers without question. Adults instantly share content that confirms their beliefs. Older adults are especially vulnerable.
Nobody is immune. Schnelle admits she's been fooled—and she teaches this for a living.
What makes deepfakes dangerous:
What we can do:
Teach skepticism, not cynicism – The goal is "verify before believing," not "nothing is real"
Introduce verification – Check multiple sources, find the original, look for reputable reporting
Talk explicitly about deepfakes – Show kids examples, discuss what makes them hard to spot
Teach the pause – Before sharing dramatic content, ask: "Do I know this is real?"
Understand vulnerability – If someone can deepfake a celebrity, they can deepfake you
Looking ahead: Next episode, Scams in 2026, shows that deepfakes are just one tool scammers use; others include AI voice cloning, fake video calls, and personalized manipulation.
Perfect for: Parents teaching media literacy, educators addressing misinformation, anyone needing to understand that seeing is no longer believing.
Digital literacy workshops covering deepfakes, AI, scams & more:
📧 Email: [email protected]
🗓️ Schedule: https://bamdigitalmedia.info
Virtual programs nationwide for students, educators, parents & seniors
By Schnelle Acevedo - Digital Literacy Expert + Content Creator