
A new paper by Yoshua Bengio and the Safe Artificial Intelligence For Humanity (SAIFH) team argues that the current push towards building generalist AI agents presents catastrophic risks, creating a need for more caution and an alternative approach. We propose such an approach in the form of Scientist AI, a non-agentic AI system that aims to be the foundation for safe superintelligence. (Note that this paper is intended for a broad audience, including readers unfamiliar with AI safety.)
Abstract
The leading AI companies are increasingly focused on building generalist AI agents—systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI [...]
---
Outline:
(00:42) Abstract
(02:42) Executive Summary
(02:47) Highly effective AI without agency
(09:51) Mapping out ways of losing control
(15:24) The Scientist AI research plan
(20:21) Career Opportunities at SAIFH
---
First published:
Source:
Narrated by TYPE III AUDIO.