
This is a brief overview of the Center on Long-Term Risk (CLR)'s activities in 2025 and our plans for 2026. We are hoping to raise $400,000 to meet our target budget for 2026.
About us
CLR works on addressing the worst-case risks from the development and deployment of advanced AI systems in order to reduce s-risks. Our research primarily involves thinking about how to reduce conflict and promote cooperation in interactions involving powerful AI systems. In addition to research, we conduct a range of activities aimed at building a community of people interested in s-risk reduction, and support efforts that contribute to s-risk reduction via the CLR Fund.
2025 was a year of significant transition for CLR. Jesse Clifton stepped down as Executive Director in January, succeeded by Tristan Cook and Mia Taylor as Managing Director and Research Director. Following Mia's subsequent departure in August, Tristan continues as Managing Director with Niels Warncke leading empirical research.
During this period, we clarified the focus of our empirical and conceptual research agendas: respectively, studying the emergence of undesirable personas in LLMs, and developing interventions to get AIs to use “safe Pareto improvements” to prevent catastrophic conflict. We held another [...]
---
Outline:
(00:28) About us
(01:44) Review of 2025
(01:49) Research
(04:47) Community building
(05:15) Donate
(05:55) Get involved
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
