


Last month, some of the world’s leading artificial intelligence experts signed a petition calling for a prohibition on developing superintelligent AI until it is safe. One of those experts was Dan Hendrycks, director of the Center for AI Safety and an adviser to Elon Musk’s xAI and the leading AI firm Scale AI. Dan has led original and thought-provoking research, including into the risk of rogue AIs escaping human control, the deliberate misuse of the technology by malign actors, and the dangerous strategic dynamics that could emerge if one nation creates superintelligence, prompting fears among rival nations.
In the lead-up to ASPI’s Sydney Dialogue tech and security conference in December, Dan talks about the different risks AI poses, the possibility that AI develops its own goals and values, the concept of recursion in which machines build smarter machines, definitions of artificial “general” intelligence, the shortcomings of current AIs, and the inadequacy of historical analogies such as nuclear weapons in understanding risks from superintelligence.
To see some of the research discussed in today’s episode, visit the Center for AI Safety’s website.
By Australian Strategic Policy Institute (ASPI)
