
Read the full transcript here.
What is "apocaloptimism"? Is there a middle ground between apocalypticism and optimism? What are the various camps in the AI safety and ethics debates? What's the difference between "working on AI safety" and "building safe AIs"? Can our social and technological coordination problems be solved only by AI? What is "qualintative" research? What are some social science concepts that can aid in the development of safe and ethical AI? What should we do with things that don't fall neatly into our categories? How might we benefit by shifting our focus from individual intelligence to collective intelligence? What is cognitive diversity? What are "AI Now", "AI Next", and "AI in the Wild"?
Adam Russell is the Director of the AI Division at the University of Southern California's Information Sciences Institute (ISI). Prior to ISI, Adam was the Chief Scientist at the University of Maryland's Applied Research Laboratory for Intelligence and Security (ARLIS) and an adjunct professor in the University of Maryland's Department of Psychology. He was the Principal Investigator for standing up the INFER (Integrated Forecasting and Estimates of Risk) forecasting platform. Adam's almost 20-year career in applied research and national security has included serving as a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA) and then at the Defense Advanced Research Projects Agency (DARPA), where he was known as the DARPAnthropologist. In May 2022, he was appointed Acting Deputy Director to help stand up the Advanced Research Projects Agency for Health (ARPA-H). Adam has a BA in cultural anthropology from Duke University and a D.Phil. in social anthropology from Oxford University, where he was a Rhodes Scholar. He has also represented the United States in rugby at the international level, having played for the US national men's rugby team (the Eagles).