Bilal Piot
DeepMind
Verified email at google.com
Title · Cited by · Year
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
859 · 2018
Deep Q-learning from demonstrations
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, D Horgan, ...
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
425 · 2018
Noisy networks for exploration
M Fortunato, MG Azar, B Piot, J Menick, I Osband, A Graves, V Mnih, ...
arXiv preprint arXiv:1706.10295, 2017
402 · 2017
Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards
M Vecerik, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ...
arXiv preprint arXiv:1707.08817, 2017
239 · 2017
Learning from demonstrations for real world reinforcement learning
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, A Sendonaris, ...
123 · 2017
Bootstrap your own latent: A new approach to self-supervised learning
JB Grill, F Strub, F Altché, C Tallec, PH Richemond, E Buchatskaya, ...
arXiv preprint arXiv:2006.07733, 2020
118 · 2020
End-to-end optimization of goal-driven and visually grounded dialogue systems
F Strub, H De Vries, J Mary, B Piot, A Courville, O Pietquin
arXiv preprint arXiv:1703.05423, 2017
85 · 2017
Inverse reinforcement learning through structured classification
E Klein, M Geist, B Piot, O Pietquin
NIPS 2012, 1-9, 2012
84 · 2012
Agent57: Outperforming the Atari human benchmark
AP Badia, B Piot, S Kapturowski, P Sprechmann, A Vitvitskyi, ZD Guo, ...
International Conference on Machine Learning, 507-517, 2020
70 · 2020
Laugh-aware virtual agent and its impact on user amusement
R Niewiadomski, J Hofmann, J Urbain, T Platt, J Wagner, B Piot, T Ito, ...
University of Zurich, 2013
63 · 2013
Boosted Bellman residual minimization handling expert demonstrations
B Piot, M Geist, O Pietquin
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2014
47 · 2014
Approximate dynamic programming for two-player zero-sum Markov games
J Perolat, B Scherrer, B Piot, O Pietquin
International Conference on Machine Learning, 1321-1329, 2015
45 · 2015
Observe and look further: Achieving consistent performance on Atari
T Pohlen, B Piot, T Hester, MG Azar, D Horgan, D Budden, G Barth-Maron, ...
arXiv preprint arXiv:1805.11593, 2018
44 · 2018
A cascaded supervised learning approach to inverse reinforcement learning
E Klein, B Piot, M Geist, O Pietquin
Joint European conference on machine learning and knowledge discovery in …, 2013
41 · 2013
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
39 · 2017
Bridging the gap between imitation learning and inverse reinforcement learning
B Piot, M Geist, O Pietquin
IEEE Transactions on Neural Networks and Learning Systems 28 (8), 1814-1826, 2016
39 · 2016
Never give up: Learning directed exploration strategies
AP Badia, P Sprechmann, A Vitvitskyi, D Guo, B Piot, S Kapturowski, ...
arXiv preprint arXiv:2002.06038, 2020
30* · 2020
Learning from demonstrations: Is it worth estimating a reward function?
B Piot, M Geist, O Pietquin
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2013
30 · 2013
Observational learning by reinforcement learning
D Borsa, B Piot, R Munos, O Pietquin
arXiv preprint arXiv:1706.06617, 2017
28 · 2017
Hybrid collaborative filtering with autoencoders
F Strub, J Mary, R Gaudel
arXiv preprint arXiv:1603.00806, 2016
27 · 2016