
This research paper investigates whether large language models (LLMs) such as ChatGPT generate "bullshit" in Harry Frankfurt's sense. The authors develop a "Wittgensteinian Language Game Detector" (WLGD) to statistically analyze LLM output and compare it with human-generated text from politics and from "bullshit jobs" (as defined by David Graeber). Two experiments show that, by the WLGD's measure, LLM-generated text aligns closely with political language and with text produced in bullshit jobs, suggesting the detector can reliably identify "bullshit." The study also explores why LLMs produce bullshit, attributing it partly to the design of chatbots and their interaction with users, highlighting the "Eliza effect" and the role of the "paratext." The WLGD is proposed as a potential "BS-meter" for detecting bullshit in various contexts.
https://arxiv.org/pdf/2411.15129