LessWrong (30+ Karma)

“UK AISI’s Alignment Team: Research Agenda” by Benjamin Hilton, Jacob Pfau, Marie_DB, Geoffrey Irving


The UK's AI Security Institute (AISI) published its research agenda yesterday. This post gives more detail on how we, the Alignment Team, are thinking about our agenda.

Summary: The AISI Alignment Team focuses on research relevant to reducing safety and security risks from AI systems that autonomously pursue a course of action which could lead to egregious harm and which are not under human control. No known technical mitigations for these risks are reliable for systems beyond AGI.

Our plan is to break down promising alignment agendas by developing safety case sketches. We'll use these sketches to identify specific gaps in current approaches. We expect that many of these gaps can be formulated as well-defined subproblems within existing fields (e.g., theoretical computer science). By identifying researchers with relevant expertise who aren't currently working on alignment and funding their work on these subproblems, we hope to substantially increase parallel progress on alignment.
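To make the approach concrete, here is a minimal, hypothetical sketch (an illustration for this description, not code from the post) of a safety case represented as a tree of claims. All names and the example claims are invented; the point is only that once an argument is decomposed this way, unsupported leaf claims fall out mechanically as candidate well-defined subproblems:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in a toy safety case sketch: a claim, its supporting
    sub-claims, and whether current research backs it."""
    statement: str
    supported: bool = False
    subclaims: list["Claim"] = field(default_factory=list)

def open_subproblems(claim: Claim) -> list[str]:
    """Walk the sketch and collect unsupported leaf claims; each is a
    candidate subproblem that could be handed to outside researchers."""
    if claim.supported:
        return []
    if not claim.subclaims:
        return [claim.statement]
    gaps: list[str] = []
    for sub in claim.subclaims:
        gaps.extend(open_subproblems(sub))
    return gaps

# Hypothetical debate-style sketch, loosely echoing the post's outline.
sketch = Claim(
    "Deployed system does not produce egregiously harmful outputs",
    subclaims=[
        Claim("Debate training yields honest answers at equilibrium",
              subclaims=[
                  Claim("Debate protocol is sound for the question class"),
                  Claim("Training converges close enough to equilibrium"),
              ]),
        Claim("Honesty at equilibrium transfers to deployment"),
    ],
)

for gap in open_subproblems(sketch):
    print("Open subproblem:", gap)
```

In practice a safety case sketch is an informal structured argument rather than code; the toy tree just shows how identifying gaps becomes a mechanical step once the argument is written down explicitly.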

[...]

---

Outline:

(01:41) 1. Why safety case-oriented alignment research?

(03:33) 2. Our initial focus: honesty and asymptotic guarantees

(07:07) Example: Debate safety case sketch

(08:58) 3. Future work

(09:02) Concrete open problems in honesty

(12:13) More details on our empirical approach

(14:23) Moving beyond honesty: automated alignment

(15:36) 4. List of open problems we'd like to see solved

(15:53) 4.1 Empirical problems

(17:57) 4.2 Theoretical problems

(21:23) Collaborate with us

---

First published:

May 7th, 2025

Source:

https://www.lesswrong.com/posts/tbnw7LbNApvxNLAg8/uk-aisi-s-alignment-team-research-agenda

---

Narrated by TYPE III AUDIO.

---



More shows like LessWrong (30+ Karma)

Making Sense with Sam Harris by Sam Harris (26,367 listeners)
Conversations with Tyler by Mercatus Center at George Mason University (2,397 listeners)
The Peter Attia Drive by Peter Attia, MD (7,779 listeners)
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,103 listeners)
ManifoldOne by Steve Hsu (87 listeners)
Your Undivided Attention by Tristan Harris and Aza Raskin, The Center for Humane Technology (1,442 listeners)
All-In with Chamath, Jason, Sacks & Friedberg by All-In Podcast, LLC (8,778 listeners)
Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (89 listeners)
Dwarkesh Podcast by Dwarkesh Patel (355 listeners)
Hard Fork by The New York Times (5,370 listeners)
The Ezra Klein Show by New York Times Opinion (15,053 listeners)
Moonshots with Peter Diamandis by PHD Ventures (460 listeners)
No Priors: Artificial Intelligence | Technology | Startups by Conviction (126 listeners)
Latent Space: The AI Engineer Podcast by swyx + Alessio (64 listeners)
BG2Pod with Brad Gerstner and Bill Gurley by BG2Pod (432 listeners)