
Read the full transcript here.
Where does innovation come from? How common is it for "lone wolf" scientists to make large leaps in innovation by themselves? How can we imbue AIs with creativity? Or, conversely, how can we apply advances in AI creativity to our own personal creative processes? How do creative strategies that work well for individuals differ from creative strategies that work well for groups? To what extent are models like DALL-E and ChatGPT "creative"? Can machines love? Or can they only ever pretend to love? We've worried a fair bit about AI misalignment; but what should we do about the fact that so many humans are misaligned with humanity's own interests? What might it mean to be "reverent" towards science?
Joel Lehman is a machine learning researcher interested in algorithmic creativity, AI safety, artificial life, and the intersections of AI with psychology and philosophy. Most recently he was a research scientist at OpenAI, co-leading the Open-Endedness team (which studies algorithms that can innovate endlessly). Previously he was a founding member of Uber AI Labs, the first employee of Geometric Intelligence (acquired by Uber), and a tenure-track professor at the IT University of Copenhagen. With Kenneth Stanley, he co-wrote the popular science book Why Greatness Cannot Be Planned, about what AI search algorithms imply for individual and societal accomplishment. Follow him on Twitter at @joelbot3000 or email him at [email protected].
Further reading
Staff
Music
Affiliates
4.8 · 126 ratings