Bilal Piot
DeepMind
Verified email at google.com
Title
Cited by
Year
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
arXiv preprint arXiv:1710.02298, 2017
755 · 2017
Deep q-learning from demonstrations
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, D Horgan, ...
arXiv preprint arXiv:1704.03732, 2017
383 · 2017
Noisy networks for exploration
M Fortunato, MG Azar, B Piot, J Menick, I Osband, A Graves, V Mnih, ...
arXiv preprint arXiv:1706.10295, 2017
363 · 2017
Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards
M Vecerik, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ...
arXiv preprint arXiv:1707.08817, 2017
209 · 2017
Learning from demonstrations for real world reinforcement learning
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, A Sendonaris, ...
123 · 2017
End-to-end optimization of goal-driven and visually grounded dialogue systems
F Strub, H De Vries, J Mary, B Piot, A Courville, O Pietquin
arXiv preprint arXiv:1703.05423, 2017
83 · 2017
Inverse reinforcement learning through structured classification
E Klein, M Geist, B Piot, O Pietquin
Advances in neural information processing systems 25, 1007-1015, 2012
80 · 2012
Laugh-aware virtual agent and its impact on user amusement
R Niewiadomski, J Hofmann, J Urbain, T Platt, J Wagner, P Bilal, T Ito, ...
University of Zurich, 2013
60 · 2013
Bootstrap your own latent-a new approach to self-supervised learning
JB Grill, F Strub, F Altché, C Tallec, P Richemond, E Buchatskaya, ...
Advances in Neural Information Processing Systems 33, 2020
57 · 2020
Boosted bellman residual minimization handling expert demonstrations
B Piot, M Geist, O Pietquin
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2014
44 · 2014
Observe and look further: Achieving consistent performance on atari
T Pohlen, B Piot, T Hester, MG Azar, D Horgan, D Budden, G Barth-Maron, ...
arXiv preprint arXiv:1805.11593, 2018
42 · 2018
Approximate dynamic programming for two-player zero-sum markov games
J Perolat, B Scherrer, B Piot, O Pietquin
International Conference on Machine Learning, 1321-1329, 2015
42 · 2015
Agent57: Outperforming the atari human benchmark
AP Badia, B Piot, S Kapturowski, P Sprechmann, A Vitvitskyi, D Guo, ...
arXiv preprint arXiv:2003.13350, 2020
41 · 2020
A cascaded supervised learning approach to inverse reinforcement learning
E Klein, B Piot, M Geist, O Pietquin
Joint European conference on machine learning and knowledge discovery in …, 2013
39 · 2013
Bridging the gap between imitation learning and inverse reinforcement learning
B Piot, M Geist, O Pietquin
IEEE transactions on neural networks and learning systems 28 (8), 1814-1826, 2016
37 · 2016
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
34 · 2017
Learning from demonstrations: Is it worth estimating a reward function?
B Piot, M Geist, O Pietquin
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2013
28 · 2013
Score-based inverse reinforcement learning
L El Asri, B Piot, M Geist, R Laroche, O Pietquin
27 · 2016
Hybrid collaborative filtering with autoencoders
F Strub, J Mary, R Gaudel
arXiv preprint arXiv:1603.00806, 2016
26 · 2016
Actor-critic fictitious play in simultaneous move multistage games
J Perolat, B Piot, O Pietquin
24 · 2018