Chi Jin
Assistant Professor, Princeton University
Verified email at princeton.edu - Homepage
Title
Cited by
Year
Escaping from saddle points—online stochastic gradient for tensor decomposition
R Ge, F Huang, C Jin, Y Yuan
Conference on Learning Theory, 797-842, 2015
700 2015
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
arXiv preprint arXiv:1703.00887, 2017
403 2017
No spurious local minima in nonconvex low rank problems: A unified geometric analysis
R Ge, C Jin, Y Zheng
arXiv preprint arXiv:1704.00708, 2017
245 2017
Is Q-learning provably efficient?
C Jin, Z Allen-Zhu, S Bubeck, MI Jordan
Advances in Neural Information Processing Systems, 4863-4873, 2018
157 2018
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
119 2018
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in neural information processing systems, 1067-1077, 2017
110 2017
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja’s algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on learning theory, 1147-1164, 2016
99* 2016
Faster eigenvector computation via shift-and-invert preconditioning
D Garber, E Hazan, C Jin, C Musco, P Netrapalli, A Sidford
International Conference on Machine Learning, 2626-2634, 2016
89* 2016
Stochastic cubic regularization for fast nonconvex optimization
N Tripuraneni, M Stern, C Jin, J Regier, MI Jordan
Advances in neural information processing systems, 2899-2908, 2018
81 2018
Local maxima in the likelihood of Gaussian mixture models: Structural results and algorithmic consequences
C Jin, Y Zhang, S Balakrishnan, MJ Wainwright, MI Jordan
Advances in neural information processing systems, 4116-4124, 2016
78 2016
Provable efficient online matrix completion via non-convex stochastic gradient descent
C Jin, SM Kakade, P Netrapalli
Advances in Neural Information Processing Systems, 4520-4528, 2016
69 2016
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, MI Jordan
arXiv preprint arXiv:1902.00618, 2019
66* 2019
Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis
R Ge, C Jin, P Netrapalli, A Sidford
International Conference on Machine Learning, 2741-2750, 2016
49 2016
Provably efficient reinforcement learning with linear function approximation
C Jin, Z Yang, Z Wang, MI Jordan
Conference on Learning Theory, 2137-2143, 2020
44 2020
On gradient descent ascent for nonconvex-concave minimax problems
T Lin, C Jin, MI Jordan
arXiv preprint arXiv:1906.00331, 2019
44 2019
Global convergence of non-convex gradient descent for computing matrix squareroot
P Jain, C Jin, S Kakade, P Netrapalli
Artificial Intelligence and Statistics, 479-488, 2017
39* 2017
Sampling can be faster than optimization
YA Ma, Y Chen, C Jin, N Flammarion, MI Jordan
Proceedings of the National Academy of Sciences 116 (42), 20881-20885, 2019
33 2019
Stochastic gradient descent escapes saddle points efficiently
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
arXiv preprint arXiv:1902.04811, 2019
31 2019
Dimensionality dependent PAC-Bayes margin bound
C Jin, L Wang
Advances in neural information processing systems, 1034-1042, 2012
28 2012
On the local minima of the empirical risk
C Jin, LT Liu, R Ge, MI Jordan
Advances in neural information processing systems, 4896-4905, 2018
26* 2018
Articles 1–20