
With a healthy dose of skepticism, Dr. Anjalie Field joins our Faculty Factory Podcast this week to discuss the ethical considerations relevant to faculty regarding the use of artificial intelligence, with a specific emphasis on Large Language Models (LLMs) like ChatGPT.
Dr. Field is an assistant professor in the Department of Computer Science in the Whiting School of Engineering at Johns Hopkins University. This is her first time joining our podcast, and we are excited to have her on the show.
With expertise in natural language processing and social biases surrounding artificial intelligence, Dr. Field brings us the latest ethical considerations within the A.I. boom that we all need to be informed about.
Dr. Field's emphasis on critical thinking and skepticism when using A.I. models serves as a cautionary reminder for all of us who use A.I.
We must consider the hidden biases behind A.I.-generated outputs. As illustrated perfectly in this conversation, there is a growing and undeniable need to promote responsible and inclusive A.I. applications moving forward.
For more Faculty Factory resources and podcasts, please visit: https://facultyfactory.org/