
In this episode we discuss the super-hot and super-important topic of what risks AI poses to humanity and society. There is perhaps not a hotter and more misunderstood topic than that of AI and the risks it poses. Many indeed fear that “it” will eliminate humans. The influential Geoffrey Hinton, co-creator of the seminal backpropagation algorithm, has been all over the media saying we need to worry and that there is a 50% chance that AI outsmarts humanity and poses an existential risk. Other insiders put that probability at 100%.
Should we too worry if the real insiders are sounding the alarm?
I think not.
But why are the insiders not the people to turn to?
If the insiders were, say, insiders at Boeing blowing the whistle about dangerous aircraft manufacturing processes, then yes, we should be concerned. Mind you, we might also be concerned that three in a row now, having found their whistles, suddenly died in mysterious circumstances.
However, if, in simple terms, airplane manufacturing risks are very much inside the box, the risks AI poses lie very much outside the box that the experts are expert in.
Whilst one must listen to their technological concerns, neural net experts are not experts outside that box. And it requires an assessment from outside the box to consider the risks to society as a whole.
So let's dive into whether AI is about to decide that its users are useless eaters, or whether AI will be more benign than WEF luminaries and serve us rather than turn us into slaves.
Or is AI merely the latest of many tools, the latest of many changes of technology, fitting into a pattern long established over the millennia? Each new technology by definition affects the world in unique new ways as well as in the same old patterns.
We will have five sections:
And much, much more