
https://astralcodexten.substack.com/p/contra-acemoglu-onoh-god-were-doing
The Washington Post has published yet another "luminary in unrelated field discovers AI risk, pronounces it stupid" article. This time it's Daron Acemoglu. I respect Daron Acemoglu and appreciate the many things his work has revealed about economics. In particular, I respect him so much that I wish he would stop embarrassing himself by writing this kind of article (I feel the same way about Steven Pinker and Ted Chiang).
In service of this goal, I want to discuss the piece briefly. I’ll start with what I think is its main flaw, then nitpick a few other things:
1: The Main Flaw: “AI Is Dangerous Now, So It Can’t Be Dangerous Later”

This is the basic structure around which this article is written. It goes:

1. Some people say that AI might be dangerous in the future.
2. But AI is dangerous now!
3. So it can’t possibly be dangerous in the future.
4. QED!
I have no idea why Daron Acemoglu and every single other person who writes articles on AI for the popular media think this is such a knockdown argument. But here we are. He writes:
AI detractors have focused on the potential danger to human civilization from a super-intelligence if it were to run amok. Such warnings have been sounded by tech entrepreneurs Bill Gates and Elon Musk, physicist Stephen Hawking and leading AI researcher Stuart Russell.