Niladri S. Chatterji
Postdoctoral Researcher, Department of Computer Science, Stanford University
Verified email at cs.stanford.edu
Title · Cited by · Year
Underdamped Langevin MCMC: A non-asymptotic analysis
X Cheng, NS Chatterji, PL Bartlett, MI Jordan
arXiv preprint arXiv:1707.03663, 2017
146 · 2017
Sharp convergence rates for Langevin dynamics in the nonconvex setting
X Cheng, NS Chatterji, Y Abbasi-Yadkori, PL Bartlett, MI Jordan
arXiv preprint arXiv:1805.01648, 2018
95 · 2018
On the theory of variance reduction for stochastic gradient Monte Carlo
N Chatterji, N Flammarion, Y Ma, P Bartlett, M Jordan
International Conference on Machine Learning, 764-773, 2018
77 · 2018
Is there an analog of Nesterov acceleration for gradient-based MCMC?
YA Ma, NS Chatterji, X Cheng, N Flammarion, PL Bartlett, MI Jordan
Bernoulli 27 (3), 1942-1992, 2021
62* · 2021
Enhancement of Spin-transfer torque switching via resonant tunneling
N Chatterji, AA Tulapurkar, B Muralidharan
Applied Physics Letters 105 (23), 232410, 2014
28 · 2014
Finite-sample analysis of interpolating linear classifiers in the overparameterized regime
NS Chatterji, PM Long
arXiv preprint arXiv:2004.12019, 2020
25 · 2020
Alternating minimization for dictionary learning: Local convergence guarantees
NS Chatterji, PL Bartlett
arXiv preprint arXiv:1711.03634, 2017
25* · 2017
OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits
N Chatterji, V Muthukumar, P Bartlett
International Conference on Artificial Intelligence and Statistics, 1844-1854, 2020
21 · 2020
The intriguing role of module criticality in the generalization of deep networks
NS Chatterji, B Neyshabur, H Sedghi
arXiv preprint arXiv:1912.00528, 2019
18 · 2019
Langevin Monte Carlo without smoothness
N Chatterji, J Diakonikolas, MI Jordan, P Bartlett
International Conference on Artificial Intelligence and Statistics, 1716-1726, 2020
13 · 2020
Online learning with kernel losses
N Chatterji, A Pacchiano, P Bartlett
International Conference on Machine Learning, 971-980, 2019
10* · 2019
When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?
NS Chatterji, PM Long, PL Bartlett
arXiv preprint arXiv:2102.04998, 2021
2 · 2021
On the Opportunities and Risks of Foundation Models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
1 · 2021
The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks
NS Chatterji, PM Long, PL Bartlett
arXiv preprint arXiv:2108.11489, 2021
2021
On the Theory of Reinforcement Learning with Once-per-Episode Feedback
NS Chatterji, A Pacchiano, PL Bartlett, MI Jordan
arXiv preprint arXiv:2105.14363, 2021
2021
When Does Gradient Descent with Logistic Loss Find Interpolating Two-Layer Networks?
NS Chatterji, PM Long, PL Bartlett
Journal of Machine Learning Research 22 (159), 1-48, 2021
2021
Why do Gradient Methods Work in Optimization and Sampling?
NS Chatterji
University of California, Berkeley, 2021
2021
When does gradient descent with logistic loss find interpolating two-layer networks?
NS Chatterji, PM Long, PL Bartlett
arXiv preprint arXiv:2012.02409, 2020
2020
Oracle lower bounds for stochastic gradient sampling algorithms
NS Chatterji, PL Bartlett, PM Long
arXiv preprint arXiv:2002.00291, 2020
2020
Proposal for a spin-transfer torque device based on resonant tunneling
N Chatterji, A Tulapurkar, B Muralidharan
APS March Meeting Abstracts 2015, D28.008, 2015
2015