The Most Interesting People I Know
By Garrison Lovely
The podcast currently has 37 episodes available.
I'm really excited to come out of hiatus to share this conversation with you. You may have noticed people are talking a lot about AI, and I've started focusing my journalism on the topic. I recently published a 9,000-word cover story in Jacobin’s winter issue called “Can Humanity Survive AI?” and was fortunate to talk to more than three dozen people coming at AI and its possible risks from basically every angle. You can find a full episode transcript here.
My next guest is about as responsible as anybody for the state of AI capabilities today. But he's recently begun to wonder whether the field he spent his life helping build might lead to the end of the world. Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct.
(I’ve started a Substack since my last episode was released. You can subscribe here.)
The Jacobin story asked whether AI poses an existential threat to humanity, but it also introduced the roiling three-sided debate around that question. Two of those sides, AI ethics and AI safety, are often pitched as standing in opposition to one another. The AI ethics camp tends to argue that we should focus on the immediate harms posed by existing AI systems, and that the existential risk arguments overhype those systems' capabilities and distract from their immediate harms. Many of the people focused on mitigating existential risks from AI, in turn, pay little attention to those immediate issues. But Dr. Bengio is a counterexample on both counts. He has spent years working on AI ethics and the immediate harms from AI systems, yet he also worries that advanced AI systems pose an existential risk to humanity. And he argues in our interview that the choice between AI ethics and AI safety is a false one: it's possible to have both.
Yoshua Bengio is the second-most cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel prize.
In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI” — the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI.
I spoke with him last fall while reporting my cover story for Jacobin’s winter issue, “Can Humanity Survive AI?” Dr. Bengio made waves last May when he and Geoffrey Hinton began warning that advanced AI systems could drive humanity extinct.
We discuss:
Since we had limited time, we jumped straight into things and didn’t cover much of the basics of the idea of AI-driven existential risk, so I’m including some quotes and background in the intro. If you’re familiar with these ideas, you can skip straight to the interview at 7:24.
Unless stated otherwise, the below are quotes from my Jacobin story:
“Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next…”
Last May, “hundreds of AI researchers and notable figures signed an open letter stating, ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs.”
“Hinton and Bengio were also the first authors of an October position paper warning about the risk of ‘an irreversible loss of human control over autonomous AI systems,’ joined by famous academics like Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari.”
The “position paper warns that ‘no one currently knows how to reliably align AI behavior with complex values.’”
The largest survey of machine learning researchers on AI x-risk was conducted in 2023. The median respondent estimated that there was a 50% chance of AGI by 2047 — a 13-year drop from a similar survey conducted just one year earlier — and that there was at least a 5% chance that AGI would result in an existential catastrophe.
The October “Managing AI Risks” paper states:
There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.
“Here’s a stylized version of the idea of ‘population’ growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual ‘population’ of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. [OpenAI chief scientist Ilya] Sutskever thinks it’s likely that ‘the entire surface of the earth will be covered with solar panels and data centers.’”
“The fear that keeps many x-risk people up at night is not that an advanced AI would ‘wake up,’ ‘turn evil,’ and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, ‘You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.’”
Episode art by Ricardo Santos for Jacobin.
Carl Robichaud is the first person I go to on the topic of nuclear weapons. He has been working as a grantmaker and analyst of nuclear weapons policy for close to two decades. He co-leads nuclear security grantmaking at Longview Philanthropy, where I used to work as a media consultant. Prior to Longview, Carl led nuclear grantmaking for the Carnegie Corporation of New York.
We recently saw Oppenheimer together and decided to have a discussion about the film, the real history, and nuclear weapons more broadly.
This episode is being released on the 78th anniversary of the Hiroshima bombing. The Nagasaki bombing happened just three days later, after the Japanese emperor had already secretly decided to surrender. As we discuss, the fact that nuclear weapons have not been used in war in the nearly eight decades since should be seen as either a remarkable achievement or a sign of extreme luck.
We have a spoiler-filled discussion of the new film Oppenheimer and the real history until 31:12, in case you’d like to skip ahead.
We discuss:
Links:
[This episode was recorded before the FTX collapse. It contains some discussion of Sam Bankman-Fried. Habiba has asked me to pass on that she, to say the least, no longer endorses what she said about Sam as an example of someone doing good. I've also linked in the show notes to her Twitter thread with her thoughts on FTX.]
This episode is a long time in the making. We’re going deep on the intersection of effective altruism (EA) and the left.
When I tell people that I’m a leftist and into effective altruism, they’re often surprised. A lot of the recent criticism of EA from the left may make it seem like the ideas and communities are incompatible, leading people to genuinely ask: can you be an effective altruist and a leftist? I think you can. But that doesn’t mean there aren’t real tensions between the two approaches to improving the world.
This is not meant to be a point-by-point rebuttal of any criticisms of EA or the left. Instead, I wanted to better understand for myself how these ideas interact.
To discuss this, I brought on Habiba Islam. Habiba is a career advisor for 80,000 Hours, an organization that helps people find high-impact careers. 80,000 Hours grew out of the effective altruism movement, but Habiba also identifies as a leftist. As you’ll soon discover, Habiba has given these ideas a lot of thought and helped clarify a lot of longstanding confusions for me.
We go through our backgrounds with the left and EA and attempt to define each. We then go through hidden agreements EA and the left have, misconceptions each has about the other, and the real disagreements between EA in practice and the left.
When I first got into EA and left politics, I had grand plans to try to reconcile the two. I felt like EA’s commitment to prioritization, responding to evidence, and doing whatever works could help make the left better at achieving its goals. And I thought that the left’s ability to build movements, shape narratives, analyze power, and understand history could shore up some major blindspots within EA. Time has tempered my ambitions a bit, and I think there are good reasons why the left and EA will and should remain distinct things. But there is still a lot each can learn from the other.
Left critiques of EA:
Show notes:
Rutger Bregman is the bestselling author of Utopia for Realists and Humankind: a Hopeful History. He has been profiled in the New York Times and interviewed on the Daily Show. Rupert Murdoch has been spotted reading his book, and Tucker Carlson called him a “fucking moron.”
I first came across Rutger years ago when a friend was reading Utopia for Realists. The book, which argues for UBI, open borders, and a 15-hour workweek, intrigued me, but I’m ashamed to admit I haven’t read it.
He popped back up on my radar when he appeared at Davos, the annual gathering of the super-wealthy, and lambasted his audience for not talking about taxes. The viral moment he created led to an invitation onto Tucker Carlson’s show, where Rutger’s challenge to the Fox News host led to what can only be described as a meltdown. In our interview, Rutger goes deeper into the full story of both events than I’ve seen anywhere else.
We spend the bulk of the interview discussing his book Humankind, which argues that people are actually pretty decent, but power corrupts. This is one of my favorite books, and I can’t recommend it highly enough.
We wrap up with a discussion of Rutger’s relationship with effective altruism, the philosophy and social movement trying to do as much as possible to improve the world.
In particular, we discuss:
Links:
Discourse on Inequality
The Doomsday Machine
Violence
The Secret of Our Success
The Dawn of Everything
The real Lord of the Flies: what happened when six boys were shipwrecked for 15 months
The Possibility of an Ongoing Moral Catastrophe
Giving What We Can
Grilled
TMIPIK - Leah Garcés on Working with Factory Farmers to Help Animals
If You’re an Egalitarian, How Come You’re So Rich?
Famine, Affluence, and Morality
Yes, it’s all the fault of Big Oil, Facebook and ‘the system’. But let’s talk about you this time
Alexander Zaitchik is a freelance journalist and author with work in The New Republic, The Nation, The Guardian, and elsewhere. Zaitchik has written two books, one about Glenn Beck and another exploring Trump’s America. He’s working on a third, out in January 2022, called Owning the Sun: A People’s History of Monopoly Medicine, from Aspirin to Covid-19.
This episode is about one of the most important stories in the world right now: global vaccine production and distribution. Alex wrote a long-form investigation in the New Republic called “How Bill Gates Impeded Global Access to Covid Vaccines”, which goes deep into the global intellectual property paradigm that is limiting vaccine production and the people who defend it.
We recorded this episode before the US announced support for some kind of waiver on vaccine patents. It’s important to note that the US did not back the TRIPS waiver proposed by South Africa and India in October 2020. The US is also reportedly concerned that sharing information would undermine American competitiveness with China and Russia in biopharmaceuticals. The idea that it would be bad if more countries developed the ability to make advanced vaccines is emblematic of the harms of prioritizing profit-making in an industry so essential to human wellbeing. A source in the Biden administration also said the negotiations are expected to take months.
Last Thursday, the Gates Foundation reversed course and supported a temporary suspension of IP rights on Covid vaccines. The Foundation’s statement cites the number of cases in Brazil and India as a reason to support the suspension. But Bill Gates was pushing against any efforts to suspend IP protections right up until the US supported some kind of waiver. Gates’ firm position for over a year had been that IP protections play zero role in limiting vaccine supply, but now his foundation supports suspending those protections because we need to increase vaccine supply so badly. Either Gates recently came across some really persuasive evidence, or public opinion actually can still matter.
As I record this, India is being ravaged by Covid. Yesterday, nearly 400,000 new cases were reported, a number which almost certainly represents a small fraction of true cases. Less than 10 percent of the country has received even one dose of vaccine. Hospitals and crematoria alike are overwhelmed and there is an acute shortage of wood due to the sheer number of deaths. Domestic policy failures of the Modi government play a big role in this story, but so too do the choices of pharmaceutical firms and their client governments like the United States and other rich countries.
We cover a lot of ground and dispel a lot of myths propagated by the pharmaceutical industry.
We specifically discuss:
I think this is one of the most important episodes of the show so far. So much rides on whether governments make decisions that prioritize global public health, even if they come at the expense of the profits of one industry.
Buy Alex's book in January 2022.
Alex’s writing:
How Bill Gates Impeded Global Access to Covid Vaccines
No Vaccine in Sight
Moderna’s Pledge Not to Enforce the Patents on Their COVID-19 Vaccine Is Worthless
Links:
They Pledged to Donate Rights to Their COVID Vaccine, Then Sold Them to Pharma
Goldman Sachs asks in biotech research report: ‘Is curing patients a sustainable business model?’
TRIPS waiver: there’s more to the story than vaccine patents
Myths of Vaccine Manufacturing
Views from a vaccine manufacturer: Q&A - Abdul Muktadir, Incepta Pharmaceuticals; Pandemic Treaty Action
Video of Gates responding to criticism of his push to close-source the Oxford vaccine
Tobias Leenaert is the author of How to Create a Vegan World: a Pragmatic Approach, which has been translated into five languages. He is the cofounder of ProVeg International, which aims to reduce the consumption of animal products by 50% by 2040. Tobias also writes the Vegan Strategist blog, where he shares strategies for convincing people to reduce their animal product consumption.
We discuss:
I think this episode is useful both for vegetarian and vegan activists and for people who are interested in consuming fewer animal products but aren’t sure how.
Links:
Conor Oberst is one of the most prolific singer-songwriters of the last twenty years. Best known for his work with Bright Eyes, Oberst has also collaborated with Flea, Jim James, Alt-J, and Phoebe Bridgers. His most recent song, “Miracle of Life”, featuring Bridgers, raised money for Planned Parenthood and opposed Trump’s nomination of Amy Coney Barrett to the Supreme Court.
Oberst sat for an interview with me this fall as the first in a series for Jacobin. An edited and condensed transcript can be found here. We talked a bit about politics (Oberst took public stances against the Iraq War and supported Bernie Sanders in 2016 and 2020) and a lot about music.
I’ve been a big fan of Bright Eyes and Conor’s solo work for years now, so it was a real treat to get to chat with him.
Be sure to check out Bright Eyes’ first album in nine years, Down in the Weeds, Where the World Once Was.
As always, you can find me on Twitter @GarrisonLovely
David Shor is a data scientist and the former head of political data science for Civis Analytics, a Democratic think tank. In 2012, he developed the Obama campaign’s in-house election forecasting system, which accurately predicted the outcome to within a point in every state. David was the subject of some controversy this summer when he was fired after tweeting an academic paper. The paper argued that violent protests decreased Democratic presidential vote share while nonviolent protests increased it. Unfortunately, David is not at liberty to discuss the details of this incident, which is an excellent example of what happens when employment protections don’t exist.
I want to state up front that the focus of this episode is on how to improve the electoral prospects of Democrats, which is David’s expertise. I have many disagreements with the Democratic party and its leaders, and there are many pathways to power beyond electoral politics. But America’s political institutions are extremely powerful, and ensuring that they are controlled by the non-death cult party is important.
We discuss:
Links:
National Popular Vote Interstate Compact
Matt Grossman on Twitter
David Shor on Twitter
Trevor Beaulieu is the host of the podcast Champagne Sharks, a “podcast about race, politics, and pop culture, through the lenses of humor and psychology.” The show has released over 300 episodes on a huge range of topics, from Afro-pessimism and social justice, to Marvel movies and Tumblr. I’ve only scratched the surface of the show, but have really enjoyed the episodes I’ve listened to so far. Check out the show notes for a few of my favorites. Trevor’s many appearances on Chapo Trap House are also well worth a listen.
You can find Trevor on Twitter: @rickyrawls and Champagne Sharks: @champagnesharks. I’m on Twitter @garrisonlovely.
You can check out Champagne Sharks wherever you find podcasts, and you can subscribe at https://www.patreon.com/champagnesharks
On today’s episode we discuss:
A few of my favorite Champagne Sharks episodes:
CS 238: Is The Whole Internet Becoming 4Chan? Pt. 1 feat. Dale Beran (01/23/2020)
CS 186: Tumblr Brain feat. Jaya Sundaresh (@shutupjaya) (06/20/2019)
CS 272: Karens (Hard-R) With Attitude feat. Nashwa Khan pt. 1
CS 276: The Futureless Now feat. Matt Christman pt. 1
CS 274: After the Bern feat. Felix Biederman pt. 1
CS 282: Live, Love, Work and Catastrophe feat. Rob Delaney
CS 284: Clarence Thomas and The Reactionary Mind Pt. 1 feat. Corey Robin
CS 280: Afropessimism feat. Frank Wilderson III *DOUBLE EPISODE*
Show notes:
Why the stock market is divorced from the pain of a pandemic economy
What if ‘Herd Immunity’ Is Closer Than Scientists Thought?
Video showed police thank Kyle Rittenhouse & give him water prior to the killings
Wage Theft vs. Other Forms of Theft in the U.S.
The 1968 Kerner Commission Got It Right, But Nobody Listened
The Protesting of a Protest Paper
Ross Barkan is an award-winning journalist and former political candidate. Ross ran for state senate in Brooklyn in 2018 (where he was endorsed by AOC). He is back to full-time journalism, with a column in the Guardian and frequent contributions to the Nation and Gothamist. He also has work in the New York Times, the Washington Post, the New Yorker, New York Magazine, and the Columbia Journalism Review. In both 2017 and 2019, he was the recipient of the New York Press Club’s award for distinguished newspaper commentary. He now teaches journalism at NYU and St. Joseph’s College. He also created a popular newsletter, Political Currents, on New York and national affairs.
As always, links to his work will be found in the show notes. Ross’s Substack newsletter, Political Currents, is an amazing font of information on New York City politics.
In today’s episode, we discuss:
His experience running for state senate, the curse of fundraising, and how running for office destroys your social life
How small-dollar digital fundraising is fueling left-wing candidates
What a DSA endorsement means and why Ross thinks he didn’t get it
Why he thinks he didn't win
What you should consider when deciding whether to run for office
How De Blasio and Cuomo bungled New York’s COVID response
How Cuomo refuses to raise taxes on the wealthy
The lack of any meaningful action to reduce the power of the NYPD
Why Ross doesn’t support police abolition and why we think the case for prison abolition is stronger
Bernie’s loss and the progress the left has made in recent years
The very exciting election of five DSA-endorsed candidates to statewide political office in New York
More about Ross:
Links: