Conversations around the short- and long-term risks and potentially significant unintended consequences of AI are increasing in volume. This has culminated recently in a controversial open letter, coordinated by the Future of Life Institute and signed by hundreds of leading figures in this space, including Elon Musk, calling for the development and training of advanced AI to be paused until it is better understood. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter says. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Dave, Sjoukje & Rob talk with futurist Theo Priestley about the risks of AI already manifesting in society, the additional features and automations that are coming, which potentially add more risk, and what could and should be done at this point to ensure we bring AI into our lives responsibly.
Finally, in this week's Trend, we talk about what organisations should do to prepare for AI, and the fact that they still have time to act now to get the right arrangements in place.
TLDR:
01:15 Intros
02:08 Cloud conversation with Theo Priestley
38:25 What should organisations do to prep for AI?
45:27 The Evil Dead Rise and rewatch Picard!
Further Reading:
https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt