Leading researchers propose a shift away from agentic AI, which autonomously pursues goals and poses catastrophic risks such as deception and loss of human control. To mitigate these dangers, they introduce Scientist AI, a non-agentic framework designed for understanding the world rather than acting within it. The system pairs a probabilistic world model, which generates causal theories to explain observed data, with an inference machine that answers queries by reasoning over those hypotheses. By adopting a Bayesian approach, the model explicitly accounts for uncertainty, avoiding the overconfident or manipulative behaviors common in current reward-driven systems. Ultimately, this safe-by-design alternative aims to accelerate scientific progress while serving as a trustworthy guardrail against more volatile autonomous agents.

Source: February 24, 2025, "Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?"
Authors: Yoshua Bengio (Mila — Quebec AI Institute; Université de Montréal), Michael Cohen (University of California, Berkeley), Damiano Fornasiere (Mila — Quebec AI Institute), Joumana Ghosn (Mila — Quebec AI Institute), Pietro Greiner (Mila — Quebec AI Institute), Matt MacDermott (Imperial College London; Mila — Quebec AI Institute), Sören Mindermann (Mila — Quebec AI Institute), Adam Oberman (Mila — Quebec AI Institute; McGill University), Jesse Richardson (Mila — Quebec AI Institute), Oliver Richardson (Mila — Quebec AI Institute; Université de Montréal), Marc-Antoine Rondeau (Mila — Quebec AI Institute), Pierre-Luc St-Charles (Mila — Quebec AI Institute), David Williams-King (Mila — Quebec AI Institute)
https://arxiv.org/pdf/2502.15657
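The world-model-plus-inference-machine split can be illustrated with a minimal Bayesian sketch: the world model weighs competing causal theories by how well they explain the data, and the inference machine answers a query by marginalizing over all of them instead of committing to a single theory. This is an illustrative toy only, not the paper's implementation; the `Hypothesis` class, the drug/placebo theories, and the numbers are hypothetical examples chosen to show how retained uncertainty keeps the answer calibrated.

```python
# Toy sketch of a Bayesian "world model + inference machine".
# All hypotheses, data, and probabilities below are made up for illustration.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Hypothesis:
    """One candidate causal theory about the world."""
    name: str
    prior: float                                # P(theory) before seeing data
    likelihood: Callable[[Tuple[str, ...]], float]  # P(data | theory)
    predict: Callable[[str], float]             # P(answer is "yes" | query, theory)


def posterior(hypotheses: List[Hypothesis], data: Tuple[str, ...]) -> List[float]:
    """World model: weight each theory by prior * likelihood, then normalize."""
    weights = [h.prior * h.likelihood(data) for h in hypotheses]
    total = sum(weights)
    return [w / total for w in weights]


def answer_query(hypotheses: List[Hypothesis], post: List[float], query: str) -> float:
    """Inference machine: marginalize the query's answer over all plausible theories."""
    return sum(p * h.predict(query) for h, p in zip(hypotheses, post))


# Two toy causal theories about whether a drug causes recovery.
hypotheses = [
    Hypothesis(
        name="drug is effective",
        prior=0.5,
        likelihood=lambda d: 0.8 ** d.count("recovered") * 0.2 ** d.count("not recovered"),
        predict=lambda q: 0.8,
    ),
    Hypothesis(
        name="drug is a placebo",
        prior=0.5,
        likelihood=lambda d: 0.5 ** len(d),
        predict=lambda q: 0.5,
    ),
]

data = ("recovered", "recovered", "not recovered")
post = posterior(hypotheses, data)
prob = answer_query(hypotheses, post, "will the next patient recover?")

print("posterior over theories:", [round(p, 3) for p in post])
print(f"P(recovery), uncertainty retained: {prob:.3f}")
```

With this small dataset neither theory is ruled out, so the answer lands near 0.65 rather than the 0.8 a single best-guess theory would assert; that posterior-weighted caution is the behavior the authors argue prevents overconfident outputs in reward-driven systems.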