
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.