This week, we're excited to be joined by Kyle O'Brien, Applied Scientist at Microsoft, to discuss his most recent paper, Composable Interventions for Language Models. Kyle and his team present a new framework, composable interventions, for studying the effects of multiple interventions applied sequentially to the same language model. The discussion covers key findings from their extensive experiments, revealing how different interventions, such as knowledge editing, model compression, and machine unlearning, interact with each other.
Read it on the blog: https://arize.com/blog/composable-interventions-for-language-models/
To learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.