Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations, published by Garrison on August 16, 2024 on The Effective Altruism Forum.
If you enjoy this, please consider subscribing to my Substack.
My latest reporting went up in The Nation yesterday:
It's about the tech industry's meltdown in response to SB 1047, a California bill that would be the country's first significant attempt to mandate safety measures from developers of AI models more powerful and expensive than any yet known.
Rather than summarize that story, I've added context from some past reporting as well as new reporting on two big updates from yesterday: a congressional letter asking Newsom to veto the bill and a slate of amendments.
The real AI divide
After spending months on my January cover story in Jacobin on the AI existential risk debates, one of my strongest conclusions was that the AI ethics crowd (focused on the tech's immediate harms) and the x-risk crowd (focused on speculative, extreme risks) should recognize their shared interests in the face of a much more powerful enemy, the tech industry:
According to one estimate, the amount of money moving into AI safety start-ups and nonprofits quadrupled between 2020 and 2022, reaching $144 million. It's difficult to find an equivalent figure for the AI ethics community. However, civil society in either camp is dwarfed by industry spending. In just the first quarter of 2023, OpenSecrets reported that roughly $94 million was spent on AI lobbying in the United States. LobbyControl estimated that tech firms spent €113 million this year lobbying the EU, and we'll recall that hundreds of billions of dollars are being invested in the AI industry as we speak.
And here's how I ended that story:
The debate playing out in the public square may lead you to believe that we have to choose between addressing AI's immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.
But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it's capitalism versus humanity.
This was true at the time I published it, but honestly, it felt like momentum was on the side of the AI safety crowd, despite its huge structural disadvantages (industry has way more money and armies of seasoned lobbyists).
Since then, it's become increasingly clear that meaningful federal AI safety regulations aren't happening any time soon. Republican House Majority Leader Steve Scalise promised as much in June. But it turns out that Democrats would likely have blocked any national, binding AI safety legislation as well.
The congressional letter
Yesterday, eight Democratic California Members of Congress published a letter to Gavin Newsom, asking him to veto SB 1047 if it passes the state Assembly. There are serious problems with basically every part of this letter, which I picked apart here. (Spoiler: it's full of industry talking points repackaged under congressional letterhead.)
Many of the signers took lots of money from tech, so it shouldn't come as too much of a surprise. I'm most disappointed to see that Silicon Valley Representative Ro Khanna is one of the signatories. Khanna had stood out to me positively in the past (like when he Skyped into The Intercept's five-year anniversary party).
The top signatory is Zoe Lofgren, whom I wrote about in The Nation story:
SB 1047 has also acquired powerful enemies on Capitol Hill. The most dangerous might be Zoe Lofgren, the ranking Democrat on the House Committee on Science, Space, and Technology. Lofgren, whose district covers much of ...