This research paper proposes a new method for training large language models (LLMs) to solve complex scientific problems. The authors argue that current LLMs struggle with such problems, often hallucinating answers instead of producing accurate solutions. A common remedy is to integrate LLMs with specialized external tools, but conventional tool-integration methods tend to make the model over-rely on those tools, invoking them even when direct reasoning would suffice. To overcome both limitations, the paper presents a two-component fine-tuning method: World Knowledge Distillation (WKD), in which the LLM learns from solutions generated with the help of tools, internalizing that knowledge so it can answer directly; and Tool Usage Adaptation (TUA), which trains the model to choose between direct reasoning and external tool use according to the difficulty of each question. The authors evaluate the approach on datasets spanning several scientific domains, including mathematics, climate science, and epidemiology, and report significant improvements in both answer accuracy and tool-usage precision.
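To make the two training stages more concrete, the sketch below illustrates one way the TUA routing could be organized. It assumes a hypothetical `direct_accuracy` estimator (e.g., sampling the WKD-tuned model several times per question and scoring its answers); the function names, the 0.5 threshold, and the data layout are illustrative assumptions, not the paper's released code.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Example:
    question: str
    direct_solution: str  # tool-free reasoning trace
    tool_solution: str    # trace that invokes an external tool

def split_for_tua(
    examples: List[Example],
    direct_accuracy: Callable[[str], float],  # assumed estimator of the model's direct-answer accuracy
    threshold: float = 0.5,                   # illustrative difficulty cutoff
) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
    """Partition questions by estimated difficulty: keep direct-reasoning
    targets where the model already answers well, tool-use targets elsewhere."""
    easy, hard = [], []
    for ex in examples:
        if direct_accuracy(ex.question) >= threshold:
            easy.append((ex.question, ex.direct_solution))  # reinforce direct reasoning
        else:
            hard.append((ex.question, ex.tool_solution))    # teach tool invocation
    return easy, hard

# Minimal usage with a stub estimator (real usage would query the fine-tuned model).
if __name__ == "__main__":
    data = [Example("2 + 2 = ?", "4", "calculator(2 + 2) -> 4")]
    easy, hard = split_for_tua(data, direct_accuracy=lambda q: 1.0)
    print(len(easy), len(hard))  # 1 0
```

In this framing, WKD corresponds to standard supervised fine-tuning on the tool-generated solutions, while TUA fine-tunes on the mixed dataset so that tool calls appear only in traces for questions the model cannot yet answer reliably on its own.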