The article "When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1" investigates whether OpenAI's o1, a language model optimized for reasoning, retains limitations stemming from its origins in next-word prediction. Although o1 substantially outperforms previous LLMs, it exhibits the same qualitative trends they do: its accuracy remains sensitive to the probability of both the text it must produce and the task it is asked to perform. Like other LLMs, o1 performs better on high-probability tasks and examples than on low-probability ones, suggesting that the influence of next-word prediction persists even after optimization for reasoning.
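Since the analysis hinges on comparing high- and low-probability variants of the same task, a small sketch may help make that setup concrete. The snippet below is not the authors' code; it illustrates one way such a probe could be run, using shift ciphers as the task family (rot-13 is common in web text, while other shifts such as rot-2 are rare). The function `query_model` is a hypothetical placeholder for a call to the model under test.

```python
import string

def shift_encode(text: str, shift: int) -> str:
    """Encode text with a simple alphabetic shift cipher."""
    lower = string.ascii_lowercase
    table = str.maketrans(lower, lower[shift:] + lower[:shift])
    return text.lower().translate(table)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for querying the model being evaluated."""
    raise NotImplementedError("replace with an actual model call")

def accuracy(shift: int, sentences: list[str]) -> float:
    """Fraction of sentences the model decodes exactly."""
    correct = 0
    for sentence in sentences:
        encoded = shift_encode(sentence, shift)
        prompt = (f"Decode this shift cipher (each letter was shifted "
                  f"forward by {shift}): {encoded}")
        if query_model(prompt).strip().lower() == sentence.lower():
            correct += 1
    return correct / len(sentences)

# Usage sketch: compare a high-probability task variant (rot-13,
# common in training data) with a low-probability one (rot-2, rare).
# acc_common = accuracy(13, test_sentences)
# acc_rare = accuracy(2, test_sentences)
```

Under the paper's framing, a model whose abilities were fully decoupled from next-word prediction would score similarly on both variants; a persistent gap favoring the common shift is the kind of "ember of autoregression" the authors report.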