Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence

Abstract

We study sampling from a target distribution ν = exp(−f) using the unadjusted Langevin Monte Carlo (LMC) algorithm when the potential f satisfies a strong dissipativity condition and is first-order smooth with a Lipschitz gradient. We prove that, when initialized with a Gaussian random vector of sufficiently small variance, iterating the LMC algorithm for O(λ^2 d ϵ^{−1}) steps is sufficient to reach an ϵ-neighborhood of the target in both Chi-squared and Rényi divergence, where λ is the logarithmic Sobolev constant of ν. Our results do not require a warm start to deal with the exponential dimension dependency of the Chi-squared divergence at initialization. In particular, for strongly convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate O(d ϵ^{−1}), which improves the previously known rates in both of these metrics under the same assumptions. Translated to other metrics, our results also recover the state-of-the-art rate estimates in KL divergence, total variation, and 2-Wasserstein distance in the same setup. Finally, since we rely on the logarithmic Sobolev inequality, our framework covers a range of non-convex potentials that are first-order smooth and exhibit strong convexity outside of a compact region.
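For readers unfamiliar with the algorithm, the sketch below illustrates the standard unadjusted LMC update x_{k+1} = x_k − η∇f(x_k) + √(2η) ξ_k with ξ_k ~ N(0, I), started from a small-variance Gaussian as in the abstract. The step size, iteration count, and initialization scale used here are illustrative placeholders, not the constants prescribed by the paper's analysis.

```python
import numpy as np

def lmc_sample(grad_f, dim, step_size, n_steps, init_std=0.1, rng=None):
    """Unadjusted Langevin Monte Carlo targeting nu = exp(-f).

    Standard update: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    with xi_k ~ N(0, I). The small-variance Gaussian start mirrors the paper's
    initialization; init_std and step_size here are illustrative choices only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = init_std * rng.standard_normal(dim)  # small-variance Gaussian initialization
    for _ in range(n_steps):
        noise = rng.standard_normal(dim)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Example: strongly convex quadratic potential f(x) = ||x||^2 / 2, so grad f(x) = x
# and the target nu is the standard Gaussian.
samples = np.stack([
    lmc_sample(lambda x: x, dim=2, step_size=0.01, n_steps=2000)
    for _ in range(500)
])
print(samples.mean(axis=0), samples.var(axis=0))  # roughly zero mean, unit variance
```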

Publication
Artificial Intelligence and Statistics, 2022
Matthew (Shunshi) Zhang
PhD Candidate in Computer Science

I am a PhD student and researcher working on the statistical complexity of sampling, optimization, and machine learning algorithms.