


One of my takeaways from EA Global this year was that most alignment people aren't explicitly focused on LLM-based agents (LMAs)[1] as a route to takeover-capable AGI. I want to better understand this position, since I estimate this path to AGI as likely enough (maybe around 60%) to be worth specific focus and concern.
Two reasons people might not care about aligning LMAs in particular:
I'm aware of arguments/questions like Have LLMs Generated Novel Insights?, LLM Generality is a Timeline Crux, and LLMs' weakness on what Steve Byrnes calls discernment: the ability to tell their better ideas/outputs from their worse ones.[2] I'm curious if these or other ideas play a major role in your thinking.
I'm even more curious about [...]
The original text contained 3 footnotes which were omitted from this narration.
---
By LessWrong. Narrated by TYPE III AUDIO.
