
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss codependency on generative AI and the growing risks of over-relying on generative AI tools like ChatGPT.
You’ll discover the hidden dangers when asking AI for advice, especially concerning health, finance, or legal matters. You’ll learn why AI’s helpful answers aren’t always truthful and how outdated information can mislead you. You’ll grasp powerful prompting techniques to guide AI towards more accurate and relevant results. You’ll find strategies to use AI more critically and avoid potentially costly mistakes. Watch the full episode for essential strategies to navigate AI safely and effectively!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
[podcastsponsor]
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
We have three areas where we do not just ask generative AI for information because of the way the model is trained. Those areas are finance, law, and health, and they're high-risk areas. If you're asking ChatGPT for advice without providing good data, the answers are really suspect. Katie, you also had some thoughts about how you're seeing people using ChatGPT on LinkedIn.
Katie Robbert – 00:55
Every post starts with, "So I was talking with ChatGPT," or "ChatGPT was telling me this morning." The codependency that I'm seeing being built with these tools is alarming to me. I'm oversimplifying it, but I don't see these tools as any better than when you were just doing an Internet search. What I mean by that is the quality of the data is not necessarily better.
Katie Robbert – 01:49