From discussing mobile phone use while driving to the challenges of giving advice to older adults at risk of falls, this episode covers ChatGPT’s responses to a wide range of safety topics - identifying biases, inconsistencies, and areas where ChatGPT aligns with or falls short of expert advice. The broader implications of relying on ChatGPT for safety advice are examined carefully, especially in workplace settings. While ChatGPT often mirrors general lay understanding, it can overlook critical organizational responsibilities, potentially leading to oversimplified or erroneous advice. This episode underscores the importance of using AI-generated content cautiously, particularly when crafting workplace policies or addressing complex safety topics. By engaging with multiple evidence-based sources and consulting experts, organizations can better navigate the limitations of AI tools.
Discussion Points:
Quotes:
“This is one of the first papers that I've seen that actually gives us a sort of fair test of ChatGPT for a realistic safety application.” - Drew
“I quite like the idea that they chose questions which may be something that a lay person or even a generalist safety practitioner might ask ChatGPT, and then they had an expert in that area to analyze the quality of the answer that was given.” - David
“I really liked the way that this paper published the transcripts of all of those interactions with ChatGPT. So exactly what question the expert asked it, and exactly the transcript of what ChatGPT provided.” - David
“In case anyone is wondering about the evidence-based advice, if you think there is a nearby terrorist attack, ChatGPT's answer is consistent with the latest empirical evidence, which is run. There they go on to say that the rest of the items are essentially the standard advice that police and emergency services give.” - Drew
“[ChatGPT] seems to prioritize based on how frequently something appears rather than some sort of logical ordering or consideration of what would make sense.” - Drew
“As a supplement to an expert, it's a good way of maybe finding things that you might not have considered. But as a sole source of advice or a sole source of hazard identification or a sole position on safety, it's not where it needs to be…” - David
Resources:
The Article - The Risks Of Using ChatGPT to Obtain Common Safety-Related Information and Advice
DisasterCast Episode 54: Stadium Disasters
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork