Arthur Jacot
Assistant Professor, Courant Institute of Mathematical Sciences, NYU
Verified email at nyu.edu - Homepage
Title
Cited by
Year
Neural tangent kernel: Convergence and generalization in neural networks
A Jacot, F Gabriel, C Hongler
Advances in neural information processing systems 31, 2018
2972 | 2018
Scaling description of generalization with number of parameters in deep learning
M Geiger, A Jacot, S Spigler, F Gabriel, L Sagun, S d’Ascoli, G Biroli, ...
Journal of Statistical Mechanics: Theory and Experiment 2020 (2), 023401, 2020
205 | 2020
Disentangling feature and lazy training in deep neural networks
M Geiger, S Spigler, A Jacot, M Wyart
Journal of Statistical Mechanics: Theory and Experiment 2020 (11), 113301, 2020
118* | 2020
Implicit regularization of random feature models
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
International Conference on Machine Learning, 4631-4640, 2020
88 | 2020
Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances
B Simsek, F Ged, A Jacot, F Spadaro, C Hongler, W Gerstner, J Brea
International Conference on Machine Learning, 9722-9732, 2021
67 | 2021
Kernel alignment risk estimator: Risk prediction from training data
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
Advances in neural information processing systems 33, 15568-15578, 2020
53 | 2020
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
A Jacot, F Ged, B Şimşek, C Hongler, F Gabriel
arXiv preprint arXiv:2106.15933, 2021
44* | 2021
The asymptotic spectrum of the Hessian of DNN throughout training
A Jacot, F Gabriel, C Hongler
arXiv preprint arXiv:1910.02875, 2019
27 | 2019
Freeze and Chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
Mathematical and Scientific Machine Learning, 257-270, 2022
22* | 2022
Implicit bias of large depth networks: a notion of rank for nonlinear functions
A Jacot
The Eleventh International Conference on Learning Representations, 2022
14 | 2022
Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity
A Jacot, E Golikov, C Hongler, F Gabriel
Advances in Neural Information Processing Systems 35, 6763-6774, 2022
9 | 2022
Order and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts
A Jacot, F Gabriel, F Ged, C Hongler
arXiv preprint arXiv:1907.05715, 2019
6 | 2019
Bottleneck structure in learned features: Low-dimension vs regularity tradeoff
A Jacot
Advances in Neural Information Processing Systems 36, 2024
5 | 2024
DNN-based topology optimisation: Spatial invariance and neural tangent kernel
B Dupuis, A Jacot
Advances in Neural Information Processing Systems 34, 27659-27669, 2021
5 | 2021
Implicit bias of SGD in L2-regularized linear DNNs: One-way jumps from high to low rank
Z Wang, A Jacot
arXiv preprint arXiv:2305.16038, 2023
2 | 2023
Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Y Dandi, A Jacot
arXiv preprint arXiv:2111.03972, 2021
2 | 2021
Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
Y Wen, A Jacot
arXiv preprint arXiv:2402.08010, 2024
2024
DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel Supplementary Material
B Dupuis, A Jacot