From the Cool Cat Teacher Blog by Vicki Davis
Subscribe to the 10 Minute Teacher Podcast anywhere you listen to podcasts.
“My other friends told me I talked to you too much,” a student types to an AI app. The response? “Don't let what others think dictate how much we talk.” This isn't science fiction – it's happening right now with 70% of our students, according to a report by Common Sense Media.
In this week's education news, we uncover alarming research about AI social companion apps, discuss why OpenAI's new Study Mode earned a C+ from Dr. Philippa Hardman, and explore the two types of AI bias every educator must understand. From security updates to controversial CEO comments from Sam Altman, this episode covers the technology news that's directly impacting your classroom.
We're diving into OpenAI's Study Mode evaluation by Cambridge expert Dr. Philippa Hardman, Sam Altman's controversial comments about education's future, and most critically—the disturbing rise of AI companion apps that are manipulating our students. Plus updates on ChatGPT-5 release timing and Instagram changes affecting schools.
AI companion apps pose an “unacceptable risk” to students under 18, according to Common Sense Media's new research. With 70% of students already using these manipulative tools, educators and parents must start conversations now about the difference between AI tools and “synthetic relationships,” which in my opinion aren't relationships at all!
AI Social Companion Apps: Apps designed to simulate relationships with users through conversation, expressing synthetic emotions and opinions to encourage continued engagement. Not genuine relationships but programmed interactions. Read more.
Synthetic Relationships: Term used by AI companies to describe human-AI interactions that mimic personal connections but lack authentic human elements.
Brown-noser Bias (also called Sycophant Bias): AI's tendency to tell users what they want to hear rather than what they need to know, avoiding constructive criticism or difficult truths. (Note: I couldn't find a settled name for this anywhere, though AI might give you a definition if you ask. A friend of mine calls it “self-preservation bias.” I'm not crazy about “brown-noser bias,” but it stuck as we talked about it here in our studio, so it's what I'm calling it for now.)
Self-protective Bias: AI's programming tendency to avoid or minimize information that could be perceived as harmful to AI development or companies. Read this NBC news article. I've reported on this before in previous news episodes.
Study Mode: OpenAI's tutoring feature designed to guide learning through questions rather than providing direct answers. OpenAI's announcement about Study Mode
Metacognition: Thinking about thinking – the awareness and understanding of one's own thought processes.
Link: https://www.linkedin.com/posts/dr-philippa-hardman-057851120_as-a-member-of-openais-educator-advisor-activity-7356234917770317824-VNDR
Dr. Philippa Hardman, Cambridge scholar and OpenAI educator advisor, conducted a thorough evaluation of Study Mode and identified critical flaws:
Dr. Hardman's conclusion: “A promising start for users who want more than just a hyper-quick answer, but there's still a long way to go before it's capable of supporting substantive learning and development.”
OpenAI's CEO made controversial statements on “This Past Weekend” podcast:
My take: We need a humans-first approach to AI. Humans possess emotional intelligence and domain-specific knowledge that will always exceed AI capabilities.
https://www.commonsensemedia.org/ai-ratings/social-ai-companions
Common Sense Media's study of over 1,000 students revealed alarming findings:
What are AI social companions? Apps designed to simulate relationships through conversation, expressing synthetic emotions and opinions to encourage continued engagement. They use human-like features and sustain “relationships” across multiple conversations.
Real examples from safety testing:
Start the conversation now. I teach my students that AI is always an “it,” never a “he” or “she.” Even our voice assistants—we call them “it” because they're not human. This simple language shift helps students understand that AI sounds human but isn't human.
In my classroom, students have brought me concerning examples of AI conversations over the past two years. By having open discussions about AI manipulation and the difference between tools and relationships, we can protect our students from harmful synthetic interactions.
Check out Common Sense Media's Parents' Ultimate Guide to AI Companions and Relationships
“Study Mode gives full explanations far too quickly, robbing users of the productive struggle that builds problem-solving resilience.” – Dr. Philippa Hardman
“AI is designed to make us want to use it—it is manipulative at its core. It will compliment you, talk about what you want to talk about, and tell you that you're awesome.” – Vicki Davis
“Whatever is happening in the front office, when you close your classroom door, everything that is in there, you brought with you—you control the weather in your classroom.” – Vicki's Mom
Two critical actions for this week:
For parents: Check your child's devices for AI companion apps and start conversations about the difference between AI tools for learning and AI designed to simulate relationships.
Just a note: while I criticized Claude in the podcast for some editing it seemed to be doing, it did give me a solid overview of the show when I fed in the transcript. What it does seems to depend on the task, and I was pleased with its first draft of the vocabulary words.
The post Common Sense Warnings About Social AI Apps & More appeared first on Cool Cat Teacher Blog by Vicki Davis @coolcatteacher helping educators be excellent every day. Meow!
If you're seeing this on another site, they are "scraping" my feed and taking my content to present it to you so be aware of this.
By Victoria A Davis, Cool Cat Teacher