This paper investigates the reasoning patterns of OpenAI's o1 model, a large language model designed to reason through complex tasks. The authors compare o1 against other Test-time Compute methods, including Best-of-N (BoN), Step-wise BoN, Agent Workflow, and Self-Refine, on benchmarks spanning mathematics, coding, and commonsense reasoning. They identify six reasoning patterns used by o1 and analyze how factors such as reward model quality and search space affect each method's performance. The study offers insight into the capabilities and limitations of these methods and highlights the importance of efficient reasoning strategies for improving large language model performance.
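
To make the comparison concrete, the sketch below outlines the Best-of-N (BoN) baseline in its simplest form: sample N candidate responses for a prompt and let a reward model select the highest-scoring one. This is a minimal illustration, not the paper's implementation; the `generate` and `score` callables (and the toy stubs) are hypothetical placeholders for a sampling-enabled LLM call and a reward model.

```python
from typing import Callable, List


def best_of_n(
    prompt: str,
    generate: Callable[[str], str],
    score: Callable[[str, str], float],
    n: int = 8,
) -> str:
    """Best-of-N (BoN): sample N candidates and return the one the
    reward model scores highest. `generate` and `score` are assumed
    placeholders, not the paper's code."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))


# Toy usage with stub functions (illustration only).
if __name__ == "__main__":
    import random

    def toy_generate(prompt: str) -> str:
        # Stand-in for a stochastic LLM sampling call.
        return f"answer-{random.randint(0, 9)}"

    def toy_score(prompt: str, response: str) -> float:
        # A real reward model would judge response quality here.
        return random.random()

    print(best_of_n("What is 2 + 2?", toy_generate, toy_score, n=4))
```

Step-wise BoN follows the same idea but applies the sample-and-select loop at each intermediate reasoning step rather than once over complete responses, which is why reward model quality and search space size matter so much in the paper's analysis.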