We have a guest on the show today! His name is Stuart Armstrong, and he works at the Future of Humanity Institute, which we've mentioned several times over the course of the TEOTWAWKI specials and which studies big-picture existential risks. Stuart Armstrong's research at the Future of Humanity Institute centers on the safety and possibilities of Artificial Intelligence (AI), how to define the potential goals of AI and map humanity's partially defined values into it, and the long-term potential for intelligent life across the reachable universe. He has been working with people at FHI and other organizations, such as DeepMind, to formalize AI desiderata in general models so that AI designers can include these safety methods in their designs.