Machine learning in safety-critical systems can be... ~a very dumb idea~ a challenge. In this talk I want to introduce a recent framework, based on Model Predictive Control, that provides a robust safety layer for reinforcement learning in safety-critical systems.
In short: How to train a system without blowing up the building in the process
Model Predictive Control is a compute-heavy control strategy for autonomous systems, and I want to explore its applications to reinforcement learning. A mathematical model of both the environment and the system-in-training acts as a supervisor during training and provides mathematical guarantees that certain behavioural boundaries are respected. To provide some insight, I will present my own experiments from my bachelor thesis as well as the research done at the University of Lübeck on the topic. If time permits, I will try to demonstrate a simple version of the architecture.
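To give a flavour of what such a supervisor looks like, here is a minimal Python sketch of the general "safety filter" idea: the RL agent proposes an action, a model of the plant simulates it over a short horizon, and unsafe proposals are overridden by a conservative fallback controller. The toy 1D point-mass model, the braking fallback, and all parameter values are illustrative assumptions of mine, not the architecture from the thesis or the Lübeck research.

```python
import numpy as np

# Toy plant model (assumed for illustration): 1D point mass,
# state = (position, velocity), input = acceleration.
DT = 0.1
POSITION_LIMIT = 1.0   # safety constraint: |position| <= 1
HORIZON = 20           # prediction horizon of the supervisor

def model_step(state, action):
    """Mathematical model of the environment used by the supervisor."""
    pos, vel = state
    return np.array([pos + DT * vel, vel + DT * action])

def fallback_policy(state):
    """Conservative backup controller: brake towards zero velocity."""
    return -5.0 * state[1]

def is_safe(state):
    """Constraint check: predicted states must stay inside the safe set."""
    return abs(state[0]) <= POSITION_LIMIT

def safety_filter(state, proposed_action):
    """Simulate the proposed action followed by the braking fallback.
    If any predicted state leaves the safe set, override the agent."""
    predicted = model_step(state, proposed_action)
    for _ in range(HORIZON):
        if not is_safe(predicted):
            return fallback_policy(state)   # supervisor overrides the agent
        predicted = model_step(predicted, fallback_policy(predicted))
    return proposed_action                  # proposal certified as safe

# Training-loop sketch: an untrained agent explores, the filter supervises.
rng = np.random.default_rng(0)
state = np.array([0.0, 0.0])
for step in range(200):
    agent_action = rng.uniform(-2.0, 2.0)   # stand-in for the RL policy's output
    state = model_step(state, safety_filter(state, agent_action))
```

In a real MPC-based setup the filter would solve a constrained optimisation problem over the horizon instead of rolling out a fixed fallback, but the division of labour is the same: the agent explores, the model-based supervisor keeps the trajectories inside the guaranteed boundaries.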
https://creativecommons.org/licenses/by-sa/4.0/
about this event: https://talks.mrmcd.net/2025/talk/URX8PM/