Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev

Abstract

Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution π under the sole assumption that π satisfies a Poincaré inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need to work with chi-squared or Rényi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that π satisfies either a Latała–Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincaré and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.
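For reference, here is a minimal sketch of the standard setting behind the abstract; the potential V, step size h, and noise ξ_k are conventional notation assumed here rather than taken from this page. Writing the target as π ∝ exp(−V), the Langevin diffusion and its Euler–Maruyama discretization, the LMC algorithm, are
\[
  \mathrm{d}X_t = -\nabla V(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,
  \qquad
  x_{k+1} = x_k - h\,\nabla V(x_k) + \sqrt{2h}\,\xi_k,
  \quad \xi_k \sim \mathcal{N}(0, I_d).
\]
A Poincaré inequality for π with constant \(C_{\mathrm{P}}\) asserts that, for all smooth f,
\[
  \operatorname{Var}_\pi(f) \le C_{\mathrm{P}}\,\mathbb{E}_\pi\!\left[\lVert \nabla f \rVert^2\right],
\]
while the log-Sobolev inequality is the analogous bound on the entropy \(\operatorname{Ent}_\pi(f^2)\); in the standard parametrization, the Latała–Oleszkiewicz inequalities interpolate between these two endpoints as a parameter ranges over [1, 2].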

Publication
Conference on Learning Theory, 2022
Matthew (Shunshi) Zhang
PhD Candidate in Computer Science

I am a PhD student and researcher studying the statistical complexity of sampling, optimization, and machine learning algorithms.