Daniel Russo
Verified email at gsb.columbia.edu
Title · Cited by · Year
Learning to optimize via posterior sampling
D Russo, B Van Roy
Mathematics of Operations Research 39 (4), 1221-1243, 2014
Cited by 434 · 2014
A tutorial on Thompson sampling
D Russo, B Van Roy, A Kazerouni, I Osband, Z Wen
Foundations and Trends in Machine Learning 11 (1), 1–96, 2018
Cited by 429 · 2018
An information-theoretic analysis of Thompson sampling
D Russo, B Van Roy
The Journal of Machine Learning Research 17 (1), 2442-2471, 2016
Cited by 255 · 2016
Learning to optimize via information-directed sampling
D Russo, B Van Roy
Operations Research 66 (1), 230-252, 2018
Cited by 164* · 2018
Deep Exploration via Randomized Value Functions
I Osband, B Van Roy, DJ Russo, Z Wen
Journal of Machine Learning Research 20 (124), 1-62, 2019
Cited by 153 · 2019
A finite time analysis of temporal difference learning with linear function approximation
J Bhandari, D Russo, R Singal
Conference on Learning Theory, 1691-1692, 2018
Cited by 144 · 2018
How much does your data exploration overfit? Controlling bias via information usage
D Russo, J Zou
IEEE Transactions on Information Theory, 2019
Cited by 141* · 2019
Simple Bayesian algorithms for best arm identification
D Russo
Conference on Learning Theory, 1417-1418, 2016
Cited by 130 · 2016
Eluder Dimension and the Sample Complexity of Optimistic Exploration
D Russo, B Van Roy
Advances in Neural Information Processing Systems 26, 2256-2264, 2013
Cited by 80 · 2013
Global optimality guarantees for policy gradient methods
J Bhandari, D Russo
arXiv preprint arXiv:1906.01786, 2019
Cited by 75 · 2019
Improving the expected improvement algorithm
C Qin, D Klabjan, D Russo
Advances in Neural Information Processing Systems, 5382-5392, 2017
Cited by 49 · 2017
(More) efficient reinforcement learning via posterior sampling
I Osband, D Russo, B Van Roy
Advances in Neural Information Processing Systems 26, 2013
Cited by 45 · 2013
Worst-case regret bounds for exploration via randomized value functions
D Russo
Advances in Neural Information Processing Systems 32, 2019
Cited by 37 · 2019
Satisficing in time-sensitive bandit learning
D Russo, B Van Roy
arXiv preprint arXiv:1803.02855, 2018
Cited by 28* · 2018
A note on the linear convergence of policy gradient methods
J Bhandari, D Russo
arXiv preprint arXiv:2007.11120, 2020
Cited by 16 · 2020
A note on the equivalence of upper confidence bounds and Gittins indices for patient agents
D Russo
Operations Research 69 (1), 273-278, 2021
Cited by 8 · 2021
Policy gradient optimization of Thompson sampling policies
S Min, CC Moallemi, DJ Russo
arXiv preprint arXiv:2006.16507, 2020
Cited by 5 · 2020
Approximation benefits of policy gradient methods with aggregated states
D Russo
arXiv preprint arXiv:2007.11684, 2020
Cited by 4 · 2020
On the Futility of Dynamics in Robust Mechanism Design
S Balseiro, A Kim, DJ Russo
Columbia Business School Research Paper Forthcoming, 2019
Cited by 2 · 2019
On the Linear Convergence of Policy Gradient Methods for Finite MDPs
J Bhandari, D Russo
International Conference on Artificial Intelligence and Statistics, 2386-2394, 2021
Cited by 1 · 2021
Articles 1–20