Why agents need 1990s search algorithms
While modern artificial intelligence has produced highly capable autonomous agents, recent research shows that these advanced systems often fall back on classic algorithms, formal logic, and fundamental physical laws to work reliably. Here is a short summary of three recent studies demonstrating this:
1. Classic Search Algorithms for Deep Research

The paper "Revisiting Text Ranking in Deep Research" evaluates how LLM-based agents retrieve information and finds that classic lexical algorithms like BM25, developed in the 1990s, often outperform modern, parameter-heavy neural retrievers. Because autonomous agents tend to generate "web-search-style" queries that rely heavily on keywords, phrases, and exact-match quotation marks, older methods like BM25 are highly effective, particularly when retrieving passage-level text rather than full documents. In contrast, large single-vector dense retrievers struggle to adapt to these agent-issued queries.
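To make the contrast concrete, here is a minimal, self-contained sketch of BM25 scoring. The k1 and b values are common defaults, not parameters from the paper, and the toy documents are illustrative only:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against a query with the BM25 ranking function."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # document frequency: how many docs contain each term
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

docs = [
    "BM25 is a lexical ranking function from the 1990s",
    "dense retrievers embed queries into a single vector",
    "agents issue keyword style web search queries",
]
print(bm25_scores("keyword web search", docs))
```

Because scoring is pure term matching, a keyword-heavy agent query lands squarely on the document that shares its exact terms, which is the behavior the paper credits for BM25's strength on agent traffic.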
2. Formal Mathematical Solvers for Agent Planning

The article "TAPE: Tool-Guided Adaptive Planning and Constrained Execution" highlights that modern Language Model (LM) agents are highly vulnerable in environments where a single mistake leads to an irrecoverable failure. To solve this, the researchers propose the TAPE framework, which limits the stochastic nature of LLMs by relying on traditional external solvers, such as Integer Linear Programming (ILP). By mapping multiple LLM-generated ideas into a plan graph and using a formal solver to calculate an optimal, constraint-feasible path, the system significantly reduces planning errors and prevents the agent from reaching dead ends.
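The core idea, selecting one constraint-feasible optimal path through a graph of candidate steps, can be sketched without an ILP library. The graph below is hypothetical (node names and costs are invented), and brute-force path enumeration stands in for the formal ILP solver TAPE uses; for real problems a solver scales far better:

```python
# Hypothetical plan graph: nodes are candidate steps an LLM proposed,
# edges carry (next_node, cost, risk). The goal: find the minimum-risk
# path from "start" to "goal" whose total cost stays within a budget.
graph = {
    "start":   [("draft_a", 2, 0.1), ("draft_b", 1, 0.4)],
    "draft_a": [("verify", 3, 0.05)],
    "draft_b": [("verify", 1, 0.3)],
    "verify":  [("goal", 1, 0.0)],
    "goal":    [],
}

def best_path(node, budget, risk=0.0, path=None):
    """Exhaustively enumerate feasible paths; return (total_risk, path)."""
    path = (path or []) + [node]
    if node == "goal":
        return risk, path
    best = (float("inf"), None)  # infeasible sentinel
    for nxt, cost, r in graph[node]:
        if cost <= budget:  # constraint: respect the remaining budget
            best = min(best, best_path(nxt, budget - cost, risk + r, path))
    return best

# A generous budget admits the safer but costlier route via draft_a.
print(best_path("start", budget=7))
# A tight budget forces the cheaper, riskier route via draft_b.
print(best_path("start", budget=3))
```

The point of delegating this search to a formal solver, rather than asking the LLM to pick a path, is that the solution is provably optimal under the stated constraints, so the agent never wanders into an infeasible branch.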
3. Fundamental Physical Laws for Image Editing

The paper "From Statics to Dynamics: Physics-Aware Image Editing" addresses a major flaw in modern multi-modal generative models: they often generate visual edits that match text prompts but blatantly violate basic real-world physics, such as gravity, material deformation, or optical refraction. To fix this, the researchers propose treating image editing not as a static "black-box" mapping of pixels, but as a continuous physical state transition. By training the model on a specialized dataset of video transitions (PhysicTran38K), their PhysicEdit framework forces the AI to use structured, physically grounded reasoning, ensuring that generated images adhere to the causal rules of the physical world.