The Inside View

2. Connor Leahy on GPT3, EleutherAI and AI Alignment



In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles to plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2], adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios, and reasons to work on technical AI Alignment research.


[1] https://youtu.be/HrV19SjKUss?t=4785
[2] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
[3] https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
[4] https://www.eleuther.ai/
[5] https://discord.gg/j65dEVp5


By Michaël Trazzi


