What are the benefits and risks of developing advanced AI? What kind of safety precautions could we take? And could over-limiting today’s AI with pre-emptive safety regulations cost us future discoveries?
In this episode we get a sceptical evaluation of the complex debate currently raging over artificial intelligence safety, aiming for a balanced view of the extremely useful applications versus the much-publicised existential risks, and we evaluate the safety measures and legislative frameworks being considered to help avoid risk to humans. To do this we trace the path from today’s artificial intelligence up the ever-steeper curve towards artificial superintelligence; we risk-assess the unpredictability of emergent properties in such systems; and we assess the future of work and the potential loss of control of our culture as AIs start to outnumber us and generate more and more of the media we consume.
My guest today has a unique take on these issues, one that took me by surprise: he disagrees with the alarmism and calls for harsh regulation, whilst openly predicting that emergent properties will more or less guarantee safety hazards. The fact that he has been at the cutting edge of computer science for over 40 years, creating computer languages and AI solutions, makes him well placed to provide a counterpoint to the AI safety campaigners calling for collective action. He is of course physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his own computer systems at his company, Wolfram Research. He has published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, “A Project to Find the Fundamental Theory of Physics”, “Computer Modelling and Simulation of Dynamic Systems” and, most recently, “The Second Law”, about the mystery of entropy.
What we discussed:
00:00 Intro.
06:30 Stephen’s first forays into neural nets in the early 80s.
09:30 Cellular Automata.
11:00 Can you make the knowledge of the world available via computers?
13:00 Wolfram Alpha: A non-AI AI.
17:45 Can AI solve science?
22:00 AI is great at rough answers, worse at the detail.
33:00 Artificial General Intelligence (AGI).
42:00 The pros & cons of superintelligence.
47:40 ChatGPT’s unpredicted peculiarities.
54:00 The spread of mistruth.
58:00 AI and the future of work.
01:05:20 Businesses leading automation push.
01:09:00 AI will outnumber us and network, changing our culture.
01:11:00 AI will follow a banal ‘mean’.
01:16:30 The AI Safety debate.
01:21:00 We have no choice, it will be developed anyway.
01:22:00 AI systems may have feelings; we don’t know.
01:25:00 Stephen’s non-interventionist safety approach.
References:
Stephen Wolfram, “A Project to Find the Fundamental Theory of Physics”
Stephen Wolfram, “The Second Law”
The history of neural nets since 1943 (Warren McCulloch and Walter Pitts paper)
Stephen Wolfram, “Can AI Solve Science?” (article)