Zwane, S; Hadjivelichkov, D; Luo, Y; Bekiroglu, Y; Kanoulas, D; Deisenroth, MP; (2023) Safe Trajectory Sampling in Model-Based Reinforcement Learning. In: IEEE International Conference on Automation Science and Engineering. IEEE: Auckland, New Zealand.
Abstract
Model-based reinforcement learning aims to learn a policy that solves a target task by leveraging a learned dynamics model. This approach, paired with principled handling of uncertainty, allows for data-efficient policy learning in robotics. However, the physical environment has feasibility and safety constraints that need to be incorporated into the policy before it is safe to execute on a real robot. In this work, we study how to enforce these constraints in the context of model-based reinforcement learning with probabilistic dynamics models. In particular, we investigate how trajectories sampled from the learned dynamics model can be used on a real robot while fulfilling user-specified safety requirements. We present a model-based reinforcement learning approach using Gaussian processes in which safety constraints are taken into account without simplifying Gaussian assumptions on the predictive state distributions. We evaluate the proposed approach on continuous control tasks of varying complexity and demonstrate how our safe trajectory-sampling approach can be used directly on a real robot without violating safety constraints.
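The abstract describes sampling full trajectories from a learned probabilistic (Gaussian process) dynamics model and enforcing user-specified safety constraints on those samples, rather than propagating a Gaussian approximation of the state distribution. Below is a minimal, hypothetical sketch of that general idea, not the authors' implementation: the dynamics model, policy, and box constraint are toy placeholders standing in for a trained GP posterior, a learned policy, and the paper's safety requirements.

```python
import numpy as np

def sample_next_state(state, action, rng):
    """Placeholder for a GP predictive sample: draws one sample from p(s' | s, a)."""
    mean = state + 0.1 * action            # toy mean prediction
    std = 0.05 * (1.0 + np.abs(state))     # toy state-dependent predictive uncertainty
    return rng.normal(mean, std)

def is_safe(state, lower=-1.0, upper=1.0):
    """Toy user-specified safety constraint: the state must stay inside a box."""
    return np.all(state >= lower) and np.all(state <= upper)

def sample_safe_trajectory(policy, s0, horizon, rng, max_tries=100):
    """Sample rollouts from the probabilistic model and keep the first one
    that satisfies the safety constraint at every step."""
    for _ in range(max_tries):
        state = np.array(s0, dtype=float)
        trajectory, safe = [state], True
        for _ in range(horizon):
            action = policy(state)
            state = sample_next_state(state, action, rng)
            trajectory.append(state)
            if not is_safe(state):
                safe = False
                break
        if safe:
            return trajectory
    return None  # no safe rollout found within the sampling budget

# Example usage with a toy proportional policy.
rng = np.random.default_rng(0)
traj = sample_safe_trajectory(lambda s: -0.5 * s, np.zeros(2), horizon=20, rng=rng)
print("found safe trajectory" if traj is not None else "no safe trajectory found")
```

Because constraints are checked on individual sampled rollouts, no Gaussian assumption on the predictive state distribution is needed in this sketch; how the retained samples feed back into policy learning is described in the paper itself.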
Type: Proceedings paper
Title: Safe Trajectory Sampling in Model-Based Reinforcement Learning
Event: 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)
Dates: 26 Aug 2023 - 30 Aug 2023
ISBN-13: 9798350320695
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/CASE56687.2023.10260496
Publisher version: https://doi.org/10.1109/CASE56687.2023.10260496
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Visualization, Uncertainty, Proprioception, Reinforcement learning, Probabilistic logic, Safety, Trajectory
UCL classification: UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10180162