
Large Language Models (LLMs) like GPT and LLaMA have shown remarkable general capabilities, yet they consistently hit a critical wall when faced with structured symbolic planning. This struggle is especially apparent when dealing with formal planning representations such as the Planning Domain Definition Language (PDDL), a fundamental requirement for reliable real-world sequential decision-making systems.
In this episode, we explore PDDL-INSTRUCT, a novel instruction tuning framework designed to significantly enhance LLMs' symbolic planning capabilities. This approach explicitly bridges the gap between general LLM reasoning and the logical precision needed for automated planning by using logical Chain-of-Thought (CoT) reasoning.
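To give a feel for the kind of step-by-step logical reasoning involved, here is a minimal sketch (not the paper's implementation — the predicates and action names are hypothetical Blocksworld-style examples) of validating a plan the way a classical PDDL checker does: verify each action's preconditions against the current state, then apply its add and delete effects.

```python
# Minimal sketch (illustrative only, not PDDL-INSTRUCT's actual code):
# logical chain-of-thought style plan validation — check preconditions,
# apply effects, and confirm the goal, one action at a time.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before the action
    add_effects: frozenset    # facts the action makes true
    del_effects: frozenset    # facts the action makes false


def validate_plan(initial_state, plan, goal):
    """Step through the plan, verifying each action is applicable."""
    state = set(initial_state)
    for action in plan:
        missing = action.preconditions - state
        if missing:
            return False, f"{action.name}: unmet preconditions {sorted(missing)}"
        state = (state - action.del_effects) | action.add_effects
    if not goal <= state:
        return False, f"goal facts missing: {sorted(goal - state)}"
    return True, "plan valid"


# Toy example with hypothetical Blocksworld-style predicates:
pickup_a = Action(
    "pickup(a)",
    frozenset({"clear(a)", "ontable(a)", "handempty"}),
    frozenset({"holding(a)"}),
    frozenset({"clear(a)", "ontable(a)", "handempty"}),
)
stack_ab = Action(
    "stack(a,b)",
    frozenset({"holding(a)", "clear(b)"}),
    frozenset({"on(a,b)", "clear(a)", "handempty"}),
    frozenset({"holding(a)", "clear(b)"}),
)

ok, msg = validate_plan(
    {"clear(a)", "ontable(a)", "clear(b)", "ontable(b)", "handempty"},
    [pickup_a, stack_ab],
    {"on(a,b)"},
)
print(ok, msg)  # True plan valid
```

This kind of explicit state-transition checking is exactly the logical precision that general LLM reasoning tends to miss, and which the logical CoT approach aims to instill.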
Key topics covered include:
By GenAI Level UP