
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.
You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/
Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life
By the Future of Life Institute
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.
You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/
Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life