Mystery AI Hype Theater 3000
By Emily M. Bender and Alex Hanna
Rated 4.4 (2,222 ratings)
The podcast currently has 40 episodes available.
Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while also playing on ageist and ableist tropes.
Dr. Clara Berridge is an associate professor at the University of Washington’s School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.
References:
Care.Coach's 'Avatar' chat program*
For Older People Who Are Lonely, Is the Solution a Robot Friend?
Care Providers’ Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots
Socio-Digital Vulnerability
*Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.
Fresh AI Hell:
Apple Intelligence hidden prompts include the command "don't hallucinate"
The US wants to use facial recognition to identify migrant children as they age
Family poisoned after following fake mushroom book
It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi
Dynamic pricing + surveillance hell at the grocery store
Chinese social media's newest trend: imitating AI-generated videos
You can check out future livestreams at https://twitch.tv/DAIR_Institute.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
Alex
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.
References:
The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog
The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom
Also: New Washington Post CTO comes from Uber
The Washington Post debuts AI chatbot, will summarize climate articles.
Media companies are making a huge mistake with AI
When ChatGPT summarizes, it does nothing of the kind
404 Media: 404 Media Now Has a Full Text RSS Feed
404 Media: Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)
Fresh AI Hell:
"AI" Alan Turning
Google advertises Gemini for writing synthetic fan letters
Dutch judge uses ChatGPT's answers to factual questions in ruling
Is GenAI coming to your home appliances?
AcademicGPT (Galactica redux)
"AI" generated images in medical science, again (now retracted)
Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex tear into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.
References:
The CEO of Zoom wants AI clones in meetings
All-knowing machines are a fantasy
A reminder of some things chatbots are not good for
Medical science shouldn't platform automating end-of-life care
The grimy residue of the AI bubble
On the phenomenon of bullshit jobs: a work rant
Fresh AI Hell:
LA schools' ed tech chatbot misusing student data
AI "teaching assistants" at Morehouse
"Diet-monitoring AI tracks your each and every spoonful"
A teacher's perspective on dealing with students who "asked ChatGPT"
Are Swiss researchers affiliated with Israeli military industrial complex? Swiss institution asks ChatGPT
Using a chatbot to negotiate lower prices
We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.
Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in health care as direct caregivers and patient advocates.
References:
NVIDIA's AI Bot Outperforms Nurses: Here's What It Means
Hippocratic AI's roster of 'genAI healthcare agents'
Related: Nuance's DAX Copilot
Fresh AI Hell:
"AI-powered health coach" will urge you to drink water with lemon
50% of 2024 Q2 VC investments went to "AI"
Thanks to AI, Google no longer claiming to be carbon-neutral
Click work "jobs" soliciting photos of babies through teens
Screening of film "written by AI" canceled after backlash
Putting the AI in IPA
When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.
Ali Alkhatib is a computer scientist and former director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on human-computer interaction, and why our technological problems are really social – and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.
References:
Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities
Fresh AI Hell:
Hacker tool extracts all the data collected by Windows' 'Recall' AI
In NYC, ShotSpotter calls are 87 percent false alarms
"AI" system to make callers sound less angry to call center workers
Anthropic's Claude Sonnet 3.5 evaluated for "graduate level reasoning"
OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence
OpenAI's Mira Murati also says AI will take some creative jobs, and that maybe those jobs shouldn't have existed in the first place
You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.
References:
Blog post, May 14: Generative AI in Search: Let Google do the searching for you
Blog post, May 30: AI Overviews: About last week
Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Noble
Fresh AI Hell:
AI Catholic priest demoted after saying it's OK to baptize babies with Gatorade
National Archives bans use of ChatGPT
ChatGPT better than humans at "Moral Turing Test"
Taco Bell as an "AI first" company
AGI by 2027, in one hilarious graph
The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws.
References:
Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States
Tech Policy Press: US Senate AI Insight Forum Tracker
Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap
Emily's opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” virtual roundtable
Fresh AI Hell:
Homophobia in Spotify's chatbot
StackOverflow in bed with OpenAI, pushing back against resistance
OpenAI making copyright claim against ChatGPT subreddit
Introducing synthetic text for police reports
ChatGPT-like "AI" assistant ... as a car feature?
Scarlett Johansson vs. OpenAI
Will LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at Hugging Face, break down why 'AI deception' is firmly a feature of human hype.
Reference:
Patterns: "AI deception: A survey of examples, risks, and potential solutions"
Fresh AI Hell:
Adobe's 'ethical' image generator is still pulling from copyrighted material
Apple advertising hell: vivid depiction of tech crushing creativity, as if it were good
"AI is more creative than 99% of people"
AI generated employee handbooks causing chaos
Bumble CEO: Let AI 'concierge' do your dating for you.
AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information, to the special corner reserved for ShotSpotter.
*Lyrics & video on PeerTube.
*Surveillance:*
*Synthetic information spills:*
*Toxic wish fulfillment:*
*ShotSpotter:*
*Selling your data:*
*AI is always people:*
*TESCREAL corporate capture:*
*Accountability:*
Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research.
Dr. Molly Crockett is an associate professor of psychology at Princeton University.
Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book, In the Land of the Unreal: Virtual and Other Realities in Los Angeles.
References:
AI For Scientific Discovery - A Workshop
Nature: The Nobel Turing Challenge
Nobel Turing Challenge Website
Eric Schmidt: AI Will Transform Science
Molly Crockett & Lisa Messeri in Nature: Artificial intelligence and illusions of understanding in scientific research
404 Media: Is Google's AI actually discovering 'millions of new materials?'
Fresh AI Hell:
Yann LeCun realizes generative AI sucks, suggests shift to objective-driven AI
In contrast:
https://x.com/ylecun/status/1592619400024428544
https://x.com/ylecun/status/1594348928853483520
https://x.com/ylecun/status/1617910073870934019
CBS News: Upselling “AI” mammograms
Ars Technica: Rhyming AI clock sometimes lies about the time
Ars Technica: Surveillance by M&M's vending machine