Exploiting Linear Models for Model-Free Nonlinear Control: A Provably Convergent Policy Gradient Approach

Abstract

Model-free learning-based control methods have seen great success recently. However, such methods typically suffer from poor sample complexity and offer limited convergence guarantees. This is in sharp contrast to classical model-based control, which has a rich theory but typically requires strong modeling assumptions. In this paper, we combine the two approaches. We consider a dynamical system with both linear and nonlinear components and use the linear model to define a warm start for a model-free, policy gradient method. Through both numerical experiments and theoretical analysis, we show that this hybrid approach outperforms the model-based controller while avoiding the convergence issues associated with model-free approaches; in particular, we derive sufficient conditions on the nonlinear component under which our approach is guaranteed to converge to the (nearly) globally optimal controller.
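To make the warm-start idea concrete, here is a minimal sketch, not the paper's exact algorithm: it assumes dynamics of the form x_{t+1} = A x_t + B u_t + f(x_t) with a known linear part (A, B), a quadratic cost, and a small unknown nonlinear residual f. The LQR gain computed from (A, B) alone initializes a zeroth-order policy gradient refinement on the true nonlinear system. All specifics (rollout_cost, f, step sizes) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Known linear component and quadratic cost (assumed for illustration).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

def f(x):
    # Unknown nonlinear residual; here a small smooth perturbation.
    return 0.01 * np.array([np.sin(x[1]), 0.0])

def rollout_cost(K, x0=np.array([1.0, 0.0]), T=50):
    # Finite-horizon cost of the feedback u = -K x on the true
    # (nonlinear) dynamics; plays the role of the model-free oracle.
    x, cost = x0.copy(), 0.0
    for _ in range(T):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + f(x)
    return cost

# Step 1 (model-based warm start): LQR gain from the linear model alone.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Step 2 (model-free refinement): two-point zeroth-order policy gradient
# on the true system, initialized at the LQR gain instead of a random policy.
rng = np.random.default_rng(0)
eta, sigma = 1e-4, 0.05
for _ in range(200):
    U = rng.standard_normal(K.shape)
    delta = rollout_cost(K + sigma * U) - rollout_cost(K - sigma * U)
    K = K - eta * (delta / (2 * sigma)) * U

print("cost after refinement:", rollout_cost(K))
```

The two-point estimator above is the standard zeroth-order gradient used in model-free LQR analyses; the only point of the sketch is that the policy search starts from the linear model's LQR gain rather than from scratch.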

Publication
2021 60th IEEE Conference on Decision and Control (CDC)
Chenkai Yu
PhD Student in Decision, Risk, and Operations

