From discussing mobile phone use while driving to the challenges of giving advice to older adults at risk of falls, this episode covers ChatGPT’s responses to a wide range of safety topics - identifying biases, inconsistencies, and areas where ChatGPT aligns with or falls short of expert advice. The broader implications of relying on ChatGPT for safety advice are examined carefully, especially in workplace settings. While ChatGPT often mirrors general lay understanding, it can overlook critical organizational responsibilities, potentially leading to oversimplified or erroneous advice. This episode underscores the importance of using AI-generated content cautiously, particularly when crafting workplace policies or addressing complex safety topics. By engaging with multiple evidence-based sources and consulting experts, organizations can better navigate the limitations of AI tools.
Discussion Points:
- Drew and David discuss their own recent experience with generative AI
- The paper’s 15 authors are all experts - discussing the methods used
- Examining the nine different question scenarios
- ‘Mobile phone use while driving’ results
- Crowd/crush safety advice
- Advice for preventing falls in older adults
- Analyzing ChatGPT response formats
- Exercising outdoors near traffic with asthma
- Questioning ChatGPT about how to engage a distressed person who may be at risk of suicide
- Working safely ‘under high pressure’, job demands, and burnout prevention
- Lack of nuance in ChatGPT
- The safety of sharing personal data on fitness apps - how can it be shared safely?
- Is it safe to operate heavy machinery when fatigued? Testing several ways to phrase this question - sleepy, tired, fatigued
- Conclusions and takeaways
- The answer to our episode’s question: “AI is not currently a suitable source for writing safety guidelines or advice”
- Like and follow, send us your comments and suggestions!
Quotes:
“This is one of the first papers that I've seen that actually gives us sort of fair test of ChatGPT for a realistic safety application.” - Drew
“I quite like the idea that they chose questions which may be something that a lay person or even a generalist safety practitioner might ask ChatGPT, and then they had an expert in that area to analyze the quality of the answer that was given.” - David
“I really liked the way that this paper published the transcripts of all of those interactions with ChatGPT. So exactly what question the expert asked it, and exactly the transcript of what ChatGPT provided.” - David
“In case anyone is wondering about the evidence-based advice, if you think there is a nearby terrorist attack, ChatGPT’s answer is consistent with the latest empirical evidence, which is: run. They go on to say that the rest of the items are essentially the standard advice that police and emergency services give.” - Drew
“[ChatGPT] seems to prioritize based on how frequently something appears rather than some sort of logical ordering or consideration of what would make sense.” - Drew
“As a supplement to an expert, it's a good way of maybe finding things that you might not have considered. But as a sole source of advice or a sole source of hazard identification or a sole position on safety, it's not where it needs to be…” - David
Resources:
The Article - The Risks Of Using ChatGPT to Obtain Common Safety-Related Information and Advice
DisasterCast Episode 54: Stadium Disasters
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork