The Power of Predictions in Online Control

Abstract

We study the impact of predictions in online Linear Quadratic Regulator (LQR) control with both stochastic and adversarial disturbances in the dynamics. In both settings, we characterize the optimal policy and derive tight bounds on the minimum cost and dynamic regret. Perhaps surprisingly, our analysis shows that the conventional greedy Model Predictive Control (MPC) approach is a near-optimal policy in both the stochastic and adversarial settings. Specifically, for length-$T$ problems, MPC requires only $O(\log T)$ predictions to reach $O(1)$ dynamic regret, which matches (up to lower-order terms) our lower bound on the prediction horizon required for constant regret.
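The greedy MPC policy discussed above can be sketched as a receding-horizon quadratic solve: at each step, the controller plans over the next $k$ predicted disturbances and applies only the first planned control. The following is a minimal illustrative implementation, not the paper's code; the system matrices, horizon, and disturbance model are placeholder assumptions for a small LQR instance.

```python
import numpy as np

def mpc_step(A, B, Q, R, x, w_pred):
    """One step of greedy MPC for x_{t+1} = A x_t + B u_t + w_t.

    Plans over the k disturbances in w_pred (a list of length-n arrays),
    minimizing sum_j x_j' Q x_j + u_j' R u_j, and returns only the first
    control of the plan (receding horizon).
    """
    n, m = B.shape
    k = len(w_pred)
    # Express future states linearly in the stacked controls U:
    #   x_j = A^j x + sum_{i<j} A^{j-1-i} (B u_i + w_i),  j = 1..k
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, k + 1)])
    G = np.zeros((n * k, m * k))
    h = np.zeros(n * k)
    for j in range(1, k + 1):
        for i in range(j):
            blk = np.linalg.matrix_power(A, j - 1 - i)
            G[(j - 1) * n:j * n, i * m:(i + 1) * m] = blk @ B
            h[(j - 1) * n:j * n] += blk @ w_pred[i]
    Qbar = np.kron(np.eye(k), Q)
    Rbar = np.kron(np.eye(k), R)
    d = F @ x + h
    # Minimize (G U + d)' Qbar (G U + d) + U' Rbar U  =>  solve the normal equations
    U = np.linalg.solve(G.T @ Qbar @ G + Rbar, -G.T @ Qbar @ d)
    return U[:m]  # apply only the first control
```

With accurate predictions the planner pre-compensates for upcoming disturbances rather than merely reacting to them, which is the mechanism behind the regret bounds described in the abstract.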

Publication
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020)
Chenkai Yu
PhD Student in Decision, Risk, and Operations

