This is the second part of my interview with Hamid Eghbal-zadeh, a post-doc at the Institute of Machine Learning at Johannes Kepler University.
In the interview, we discuss his research on several aspects of representation learning with deep neural networks, aimed at making them more robust and at improving their out-of-distribution behavior.
In this second part, we talk about disentangled representations and the benefits they bring to agents trained on contextual reinforcement learning tasks, enabling them to operate in unseen contexts and environments.
Personal Homepage: https://eghbalz.github.io/
Hamid on LinkedIn: https://www.linkedin.com/in/hamid-eghbal-zadeh-8642b3a8/
H. Eghbal-zadeh, Representation Learning and Inference from Signals and Sequences, PhD Thesis, 2019.
H. Eghbal-zadeh, F. Henkel, G. Widmer, Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables, NeurIPS 2020 Workshop on Pre-registration in Machine Learning, Proceedings of Machine Learning Research, PMLR 148:236-254, 2021.