Today I'm speaking with Jan N. van Rijn about metalearning.
Jan is an assistant professor at Leiden University, where he also completed his PhD. He is one of the founders of the OpenML Foundation, previously did a postdoc in Frank Hutter's lab at the University of Freiburg, and is one of the authors of the metalearning book we'll be discussing.
We'll be primarily examining the contents of the book's chapter titled "Evaluating Recommendations of Metalearning/AutoML Systems".
We'll be covering such topics as OpenML and metalearning, how the space of AutoML has changed in the last few years, benchmarks in AutoML, how benchmarks measure a field's progress, some of the challenges benchmarks present, the need for better tooling around benchmarks, the rise of neural architecture search (NAS) under the AutoML umbrella, the scope of AutoML (algorithm selection, hyperparameter optimization, the CASH problem, pipeline optimization, etc.), how metalearning helps traverse the various AutoML problem types, metalearning in the context of hyperparameter optimization, the importance of properly designing meta-datasets, approaches to the inputs and outputs of meta-models and their advantages and disadvantages, surrogate models, how AutoML systems interact with meta-models, how to think about metalearning across dataset difficulties, diagnosing meta-models and meta-datasets, how to compare different metadata systems, loss-time curves, meta-features and the various approaches to creating them (including Dataset2Vec), contemporary metalearning in a deep-learning context, and other topics.
Link to book - https://link.springer.com/book/10.1007/978-3-030-67024-5
Link to OpenML - https://www.openml.org/