
What happens when AI-generated text masquerades as human research?
Kimberly Becker, PhD, a corpus linguist, joins the show this week to talk about her study comparing human-written and AI-generated abstracts in high-stakes healthcare research.
The findings reveal something unsettling about how LLMs may reshape scientific communication. ChatGPT's outputs showed higher informational density, formulaic patterns, and a lack of hedging: the expressions of uncertainty that mark careful scientific thinking. The AI doesn't say "may suggest" or "could indicate." It asserts. Confidently. Even when it's wrong.
This matters beyond academia. When we optimize for speed and polish over depth and precision, we're changing how we write, and therefore changing how we think. We're externalizing cognition to systems trained on Reddit threads and blog posts, then wondering why the output feels sterile and an inch deep.
Becker's work raises uncomfortable questions.
This episode is about whether we're paying attention to what we're losing while we chase efficiency.
Mentioned:
• Relevance Theory (linguistics)
By BKBT Productions