
After watching the TED Talk by Eliezer Yudkowsky, 'Will Superintelligent AI End the World?', I decided to begin by explaining the key problems currently facing AI development. I talk about OpenAI's pledge to solve the Alignment Problem within four years through a Superalignment research project led by Ilya Sutskever and Jan Leike. I also highlight key points from an interview-based article by Ross Andersen called 'Does Sam Altman Know What He's Creating?'. I hope this episode heightens your interest in, and concern for, the future of AI in the world. Thank you for listening!