
A new paper by Yoshua Bengio and the Safe Artificial Intelligence For Humanity (SAIFH) team argues that the current push towards building generalist AI agents presents catastrophic risks, creating a need for more caution and an alternative approach. We propose such an approach in the form of Scientist AI, a non-agentic AI system that aims to be the foundation for safe superintelligence. (Note that this paper is intended for a broad audience, including readers unfamiliar with AI safety.)
Abstract
The leading AI companies are increasingly focused on building generalist AI agents—systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI [...]
---
Outline:
(00:42) Abstract
(02:42) Executive Summary
(02:47) Highly effective AI without agency
(09:51) Mapping out ways of losing control
(15:24) The Scientist AI research plan
(20:21) Career Opportunities at SAIFH
---
Narrated by TYPE III AUDIO.