
https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy
The Safe Uncertainty Fallacy goes:
1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
2. Therefore, it’ll be fine.
You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.
For years, people used the Safe Uncertainty Fallacy on AI timelines:
Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.