I think a lot of blogging is reactive. You read other people's blogs and you're like, no, that's totally wrong. A part of what we want to do with this scenario is say something concrete and detailed enough that people will say no, that's totally wrong, and write their own thing.
I recently read the AI 2027 predictions[1]. I think they're way off. I was visualizing myself at Christmastime 2027, sipping eggnog and gloating about how right I was, but then I realized it doesn't count if I don't register my prediction publicly, so here it is.
This blog post is more about me trying to register my predictions than trying to convince anyone, but I've also included my justifications below, as well as what I think went wrong with the AI 2027 predictions (assuming [...]
---
Outline:
(01:07) My predictions for AI by the end of 2027
(05:42) Justification
(06:01) Shallow vs. Deep Thinking
(07:10) Noticing Where LLMs Fail
(09:28) Why the Architecture of LLMs Makes Them Bad at Deep Thinking: They're Too Wide
(11:44) LLMs Are Also Too Linear
(15:49) What's Wrong with AI 2027
(17:29) The Takeoff Forecast is Based on Guesswork
(22:45) I Don't Take These Predictions Seriously
(24:25) The Presentation was Misleading
(26:23) Deep Thinking vs. Shallow Thinking For Making Predictions
(28:05) Was AI 2027 a Valuable Exercise?
(29:05) Conclusion
The original text contained 10 footnotes which were omitted from this narration.
---