The development of AI has accelerated in recent months, with tools like ChatGPT dominating headlines as tech firms race to build their own systems and compete in this rapidly growing space.
The sudden leap forward has also been met with concern, with many calling for responsible regulation to keep pace with the speed at which the technology is evolving. Meanwhile, prominent figures including Apple co-founder Steve Wozniak and Emma Bluemke of the Centre for the Governance of AI at Oxford University have signed an open letter calling for a six-month pause on training the most powerful AI systems.
To discuss the future of AI and what steps can be taken to ensure it develops in a way that is responsible and supports human flourishing, National Technology News was joined by Shannon Vallor, co-director of the UKRI Arts and Humanities Research Council's BRAID (Bridging Responsible AI Divides) Programme and the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh.