---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 2h40m
qa_time: 0h50m
---
We’ve argued that preventing an AI-related catastrophe may be the world’s most pressing problem, and that while progress in AI over the next few decades could have enormous benefits, it could also pose severe, possibly existential risks. As a result, we think that working on some technical AI research — research related to AI safety — may be a particularly high-impact career path.
But there are many ways of approaching this path that involve researching or otherwise advancing AI capabilities — meaning making AI systems better at some specific skills — rather than only doing things that are purely in the domain of safety. In short, this is because capabilities work and some forms of safety work are intertwined, and many of the available ways of learning enough about AI to contribute to safety run through capabilities-enhancing roles.
So if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?
Original article:
https://80000hours.org/articles/ai-capabilities/
Narrated for 80,000 Hours by TYPE III AUDIO.
Share feedback on this narration.