Exploiting higher order smoothness in derivative-free optimization and continuous bandits. A. Akhavan, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 33, 9017-9027, 2020. Cited by 56.
A gradient estimator via L1-randomization for online zero-order optimization with two point feedback. A. Akhavan, E. Chzhen, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 35, 7685-7696, 2022. Cited by 28.
Distributed zero-order optimization under adversarial noise. A. Akhavan, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 34, 10209-10220, 2021. Cited by 21.
Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm. A. Akhavan, E. Chzhen, M. Pontil, A. B. Tsybakov. Journal of Machine Learning Research 25 (370), 1-50, 2024. Cited by 13.
Group meritocratic fairness in linear contextual bandits. R. Grazzi, A. Akhavan, J. I. F. Falk, L. Cella, M. Pontil. Advances in Neural Information Processing Systems 35, 24392-24404, 2022. Cited by 13.
Contextual Continuum Bandits: Static Versus Dynamic Regret. A. Akhavan, K. Lounici, M. Pontil, A. B. Tsybakov. arXiv preprint arXiv:2406.05714, 2024. Cited by 1.
Estimating the Minimizer and the Minimum Value of a Regression Function under Passive Design. A. Akhavan, D. Gogolashvili, A. B. Tsybakov. Journal of Machine Learning Research 25 (11), 1-37, 2024.
Re-thinking High-dimensional Mathematical Statistics. F. Bunea, R. Nowak, A. B. Tsybakov. Oberwolfach Reports 19 (2), 1377-1430, 2023.
Derivative-free stochastic optimization, online learning and fairness. A. Akhavanfoomani. Institut Polytechnique de Paris, 2023.