

Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI's AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
By Members of Technical Staff at the Software Engineering Institute
