


This academic paper from September 30, 2025 introduces Regression Language Models (RLMs), a unified method for code-to-metric regression: the task of predicting numerical outcomes from source code or computation graphs. The approach simplifies traditional pipelines by operating directly on text input (high-level programming languages such as Haskell and Python, or low-level ONNX graph representations) to predict metrics like accuracy, memory consumption, and execution latency. The RLM, initialized from a pretrained T5Gemma encoder, performs competitively against specialized models such as Graph Neural Networks (GNNs) across a range of tasks, including performance prediction in Neural Architecture Search (NAS) and memory estimation for competitive programming. The findings highlight the RLM's versatility and its ability to model multiple objectives concurrently, suggesting a shift toward generic, text-based regression in computational graph analysis.
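To make the input/output shape of code-to-metric regression concrete, here is a minimal, hypothetical sketch. The paper's RLM encodes raw source text with a pretrained T5Gemma encoder; this toy stand-in replaces that encoder with a hashed bag of character trigrams and fits a simple linear regression head by gradient descent. The sample programs and "latency" targets below are made up for illustration only.

```python
import hashlib

DIM = 64  # fixed feature dimension for the hashed trigram vector

def featurize(code: str) -> list[float]:
    """Hash character trigrams of the source text into a fixed-length vector.

    A crude stand-in for a learned text encoder: any string maps to DIM floats.
    """
    vec = [0.0] * DIM
    for i in range(max(len(code) - 2, 0)):
        h = int(hashlib.md5(code[i:i + 3].encode()).hexdigest(), 16) % DIM
        vec[h] += 1.0
    return vec

def train(samples, targets, lr=0.02, epochs=300):
    """Fit a linear regression head (weights + bias) by plain SGD."""
    w, b = [0.0] * DIM, 0.0
    feats = [featurize(s) for s in samples]
    for _ in range(epochs):
        for x, y in zip(feats, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(model, code: str) -> float:
    """Map a program's text to a predicted numeric metric."""
    w, b = model
    x = featurize(code)
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy "code-to-latency" data: invented numbers, illustration only.
programs = ["for i in range(10): pass", "x = 1", "while True:\n    break"]
latencies = [3.0, 1.0, 2.0]
model = train(programs, latencies)
print(predict(model, "x = 1"))
```

The point of the sketch is the interface, not the model: the regressor consumes program text directly, with no hand-built graph features, which is the simplification the paper attributes to RLMs.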
Source:
https://arxiv.org/pdf/2509.26476
By mcgrof