Among the first responses by Iranian state media to the US-Israeli war on Iran was a propaganda video. It flaunted row upon row of gleaming Iranian drones, safely lined up in an underground weapons cache, ready to strike Israel, Arab states and US bases.
Drones. Precision-Guided Munitions. A.I. war games. Autonomous Weapon Systems. At the Pentagon, at Anthropic, for Trump and in Iran, they're redefining warfare in real time.
When the Pentagon's A.I. partner, Anthropic, insisted its systems mustn't be used to spy on Americans or to build killer robots, President Trump baulked. On Friday, Trump directed every federal U.S. agency to stop working with Anthropic, and the Pentagon declared Anthropic to be a "supply-chain risk" - a designation normally reserved for companies in enemy nations, which would bar even private defence contractors from using Anthropic's A.I. Its competitor, OpenAI, stepped in and took the Pentagon contract instead.
As conflict spreads across the Middle East, how is artificial intelligence being used? How will these fights change in the near future? Can we control it? Toby Walsh thinks so.
He's the Chief Scientist at the UNSW A.I. Institute and a leading voice in the global regulation of A.I. weapons. He studied theoretical physics and mathematics at Cambridge University, has a Ph.D. in artificial intelligence, and was the editor-in-chief of the Journal of Artificial Intelligence Research.
Toby stopped by the Uncomfortable Conversations studios on his way to the airport to fly to Geneva to participate in a United Nations conference about A.I. in warfare.
By Josh Szeps
4.5 · 793 ratings