SXSW

Break the Bot: Red-Teaming Large Language Models



Red-teaming has long been a crucial component of a robust security toolkit for software systems. Now, companies developing large language models (LLMs) and other GenAI products are increasingly applying the technique to model outputs as a means of uncovering harmful content that generative models may produce, so that developers can identify and mitigate issues before they reach production. In this session, join Numa Dhamani and Maggie Engler, co-authors of Introduction to Generative AI, to learn a complete workflow and arsenal of strategies for red-teaming LLMs. Speakers: Numa Dhamani, Maggie Engler

