Summary
There is significant pressure in research communities to use LLMs like ChatGPT or Claude for research writing tasks, including summarising documents, brainstorming ideas, core writing, and grammar and editing.
It has gotten to the point where, if you mention a research idea, people routinely respond "ask Claude" or "ask ChatGPT". The assumption is that researchers should no longer work alone on tasks, and should always work in AI-human teams, where the human takes the back seat.
I argue that LLMs should not be used for any core writing or brainstorming tasks, in any research activity. I'm open to the idea that they could be used for summarising large amounts of information, search queries, data crunching, and editing and grammar.
My view comes from seeing firsthand the harms AI writing is causing to researchers. I have directly seen (i) deskilling, (ii) over-reliance, (iii) bias and errors of reasoning, and (iv) gradual disempowerment.
I'm also concerned that AI writing distorts the job market by making unqualified candidates seem better than they are, leading to negative hiring outcomes.
Brief Background
AI writing seems like a natural thing to rely on. You can quickly generate ideas, generate drafts and generate feedback. The adoption [...]
---
Outline:
(00:11) Summary
(01:25) Brief Background
(02:18) A. Origins of My Concern
(03:40) Risks of Using AI for Research Writing
(05:07) The Groupthink Problem Elaborated
(06:16) A Brief Refutation of Counterarguments:
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By EA Forum Team