


This research paper from Google DeepMind introduces a novel approach called Embed-then-Regress for Bayesian optimization. The method leverages language models to embed string representations of diverse input types, including synthetic functions, combinatorial structures, and hyperparameter configurations, into fixed-length vectors. These vectors then serve as features for a Transformer-based regressor trained via in-context learning. The paper demonstrates that this approach achieves results comparable to those of traditional Gaussian process algorithms across diverse optimization tasks, highlighting its versatility and potential for broader application in black-box optimization.
By Enoch H. Kang
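To make the Embed-then-Regress pipeline concrete, here is a minimal sketch of the idea, not the paper's actual implementation: a toy hash-based string embedder stands in for the paper's pretrained language-model encoder, a small PyTorch Transformer plays the role of the in-context regressor (its weights are untrained here, so outputs are illustrative only), and an upper-confidence-bound acquisition is an assumed choice for selecting the next point. All names, dimensions, and hyperparameters below are hypothetical.

```python
import torch
import torch.nn as nn

EMBED_DIM = 64  # fixed-length embedding size (hypothetical; not the paper's value)

def embed_string(s: str, dim: int = EMBED_DIM) -> torch.Tensor:
    """Toy stand-in for a language-model embedder: hash character trigrams
    into a fixed-length vector. The paper embeds strings with a pretrained LM."""
    v = torch.zeros(dim)
    for i in range(len(s) - 2):
        v[hash(s[i:i + 3]) % dim] += 1.0
    return v / (v.norm() + 1e-8)

class InContextRegressor(nn.Module):
    """Transformer that attends over (embedding, y) context pairs and predicts
    a mean and log-variance for each query embedding, in context."""
    def __init__(self, dim: int = EMBED_DIM, nhead: int = 4, layers: int = 2):
        super().__init__()
        self.proj_ctx = nn.Linear(dim + 1, dim)   # context token: [embedding; y]
        self.proj_qry = nn.Linear(dim, dim)       # query token: embedding only
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, 2)             # -> (mean, log-variance)

    def forward(self, ctx_emb, ctx_y, qry_emb):
        ctx = self.proj_ctx(torch.cat([ctx_emb, ctx_y.unsqueeze(-1)], dim=-1))
        qry = self.proj_qry(qry_emb)
        tokens = torch.cat([ctx, qry], dim=1)     # (batch, n_ctx + n_qry, dim)
        out = self.encoder(tokens)[:, ctx.size(1):]  # keep only query positions
        mean, log_var = self.head(out).unbind(-1)
        return mean, log_var

# One Bayesian-optimization step: score unseen candidates with a UCB acquisition.
model = InContextRegressor()  # in the paper this is pretrained on many trajectories
history = [("lr=0.1,depth=3", 0.71), ("lr=0.01,depth=5", 0.84)]  # made-up observations
candidates = ["lr=0.05,depth=4", "lr=0.001,depth=8"]

ctx_emb = torch.stack([embed_string(x) for x, _ in history]).unsqueeze(0)
ctx_y = torch.tensor([[y for _, y in history]])
qry_emb = torch.stack([embed_string(x) for x in candidates]).unsqueeze(0)

with torch.no_grad():
    mean, log_var = model(ctx_emb, ctx_y, qry_emb)
ucb = mean + 1.96 * log_var.exp().sqrt()          # upper confidence bound
print(candidates[ucb.argmax().item()])            # next configuration to evaluate
```

The key design point this sketch illustrates is that both steps operate on strings: any search space that can be serialized to text flows through the same embedder and regressor, which is what lets a single model cover synthetic, combinatorial, and hyperparameter-tuning tasks without per-task feature engineering.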