Publications
Preprints and submissions
- Yunbum Kook, Matthew S. Zhang “Covariance estimation with Markov chain Monte Carlo”. [arXiv] [PDF]
- Sinho Chewi, Atsushi Nitanda, Matthew S. Zhang “Uniform-in-N log-Sobolev inequality for the mean-field Langevin dynamics with convex energy”. [arXiv] [PDF]
Journal articles
- Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li, Ruoqi Shen, Matthew S. Zhang “Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev”, in Foundations of Computational Mathematics (2024). [arXiv] [PDF]
Conference papers
- Yunbum Kook, Matthew S. Zhang “Rényi-infinity constrained sampling with d³ membership queries”, in the ACM-SIAM Symposium on Discrete Algorithms (SODA 2025). [arXiv] [PDF]
- Yunbum Kook, Santosh Vempala, Matthew S. Zhang “In-and-Out: Algorithmic Diffusion for Sampling Convex Bodies”, in the 38th Conference on Neural Information Processing Systems (NeurIPS 2024, Spotlight). [arXiv] [PDF]
- Yunbum Kook, Matthew S. Zhang, Sinho Chewi, Mufan (Bill) Li, Murat A. Erdogdu “Sampling from the mean-field stationary distribution”, in the 37th Conference on Learning Theory (COLT 2024). [arXiv] [PDF]
- Matthew S. Zhang, Sinho Chewi, Mufan (Bill) Li, Krishnakumar Balasubramanian, Murat A. Erdogdu “Improved discretization analysis for the underdamped Langevin Monte Carlo”, in the 36th Conference on Learning Theory (COLT 2023). [arXiv] [PDF]
- Tom Huix, Matthew S. Zhang, Alain Durmus “Tight regret and complexity bounds for Thompson sampling via Langevin Monte Carlo”, in the 26th International Conference on Artificial Intelligence and Statistics (AISTATS 2023). [PDF]
- Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li, Adil Salim, Matthew S. Zhang “Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo”, in the 35th Conference on Learning Theory (COLT 2022). [arXiv] [PDF]
- Matthew S. Zhang, Murat A. Erdogdu, Animesh Garg “Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings”, in the 36th AAAI Conference on Artificial Intelligence (AAAI 2022). [arXiv] [PDF]
- Murat A. Erdogdu, Rasa Hosseinzadeh, Matthew S. Zhang “Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence”, in the 25th International Conference on Artificial Intelligence and Statistics (AISTATS 2022). [arXiv] [PDF]
Pre-PhD conference, journal, and workshop papers
- Matthew S. Zhang, Bradly Stadie “One-shot pruning of recurrent neural networks by Jacobian spectrum evaluation”, in the 8th International Conference on Learning Representations (ICLR 2020). [arXiv] [PDF]
- Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Matthew S. Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba “Benchmarking Model-Based Reinforcement Learning”, preprint. [arXiv] [PDF]