AI Post Transformers

LLM-AutoDiff: Auto-Differentiate Any LLM Workflow

The January 30, 2025 paper introduces LLM-AutoDiff, a framework for Automatic Prompt Engineering (APE) that optimizes complex Large Language Model (LLM) workflows. It models an entire LLM application, including multiple LLM calls, functional components such as retrievers, and cyclical operations, as a directed, auto-differentiable graph.

By treating textual inputs as trainable parameters, LLM-AutoDiff uses a separate "backward engine" LLM to generate textual gradients (feedback) that guide an optimizer LLM in revising prompts, automating the manual, labor-intensive process of prompt engineering. The paper details several technical advances, such as pass-through gradients for functional nodes and time-sequential gradients for cyclic structures, to ensure accurate error attribution across multi-component pipelines, ultimately demonstrating improved accuracy and efficiency over existing textual-gradient and few-shot baselines.

Source: LLM-AutoDiff: Auto-Differentiate Any LLM Workflow (January 30, 2025), https://arxiv.org/pdf/2501.16673
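The forward/backward/optimize loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual framework or the AdalFlow API: each of the three LLM roles (task model, backward engine, optimizer) is stubbed as a plain Python function so the training loop runs without any network calls, and the prompt string plays the role of a trainable parameter.

```python
# Hypothetical sketch of textual-gradient prompt optimization.
# The three LLM roles are stubbed so the loop is self-contained.

def forward_llm(prompt, question):
    # Stub task LLM: a real call would send `prompt` + `question` to a model.
    return "4" if "step by step" in prompt else "unsure"

def backward_engine(prompt, question, answer, target):
    # Stub "backward engine" LLM: emits a textual gradient, i.e. feedback
    # in natural language describing how the prompt should change.
    if answer == target:
        return None  # correct output, no feedback propagated
    return "The prompt should instruct the model to reason step by step."

def optimizer_llm(prompt, feedback):
    # Stub optimizer LLM: revises the trainable prompt using the feedback.
    return prompt + " Think step by step."

def train_prompt(prompt, dataset, epochs=3):
    """Treat `prompt` as a trainable parameter updated by textual gradients."""
    for _ in range(epochs):
        for question, target in dataset:
            answer = forward_llm(prompt, question)
            feedback = backward_engine(prompt, question, answer, target)
            if feedback:
                prompt = optimizer_llm(prompt, feedback)
    return prompt

tuned = train_prompt("Answer the question.", [("What is 2 + 2?", "4")])
```

In the real framework this loop runs over a whole auto-differentiable graph, with pass-through gradients carrying feedback across functional nodes such as retrievers; the sketch shows only the single-node case.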

AI Post Transformers, by mcgrof