


By Ezra Chapman

What happens when the people building and deploying artificial intelligence still do not fully understand where it could lead?

In this conversation, Jonathan Kewley, one of Britain’s leading lawyers on AI, explores how companies, governments, and institutions are trying to navigate a technology that is moving faster than the rules designed to contain it.

But this isn’t just a story about better tools. It’s about a deeper shift:

🔹 What happens when businesses adopt AI before they truly understand the legal, ethical, and societal risks?
🔹 Could the biggest danger of AI come not from the models themselves, but from how blindly people deploy them?
🔹 And how do leaders make decisions about a technology even the experts still struggle to predict?

At the centre of the discussion is a harder question: if AI could become the best or worst thing to happen to humanity, how do we build guardrails strong enough to shape its direction before the consequences are irreversible?

Because with AI, the real risk is not just what it can do, but what we fail to prepare for.