George Tucker
Google Brain
Verified email at google.com - Homepage
Title
Cited by
Year
Efficient Bayesian mixed-model analysis increases association power in large cohorts
PR Loh, G Tucker, BK Bulik-Sullivan, BJ Vilhjalmsson, HK Finucane, ...
Nature genetics 47 (3), 284-290, 2015
870 2015
Regularizing neural networks by penalizing confident output distributions
G Pereyra, G Tucker, J Chorowski, Ł Kaiser, G Hinton
arXiv preprint arXiv:1701.06548, 2017
582 2017
Soft actor-critic algorithms and applications
T Haarnoja, A Zhou, K Hartikainen, G Tucker, S Ha, J Tan, V Kumar, ...
arXiv preprint arXiv:1812.05905, 2018
473 2018
Widespread macromolecular interaction perturbations in human genetic disorders
N Sahni, S Yi, M Taipale, JIF Bass, J Coulombe-Huntington, F Yang, ...
Cell 161 (3), 647-660, 2015
373 2015
A quantitative chaperone interaction network reveals the architecture of cellular protein homeostasis pathways
M Taipale, G Tucker, J Peng, I Krykbaeva, ZY Lin, B Larsen, H Choi, ...
Cell 158 (2), 434-448, 2014
308 2014
Model-based reinforcement learning for Atari
L Kaiser, M Babaeizadeh, P Milos, B Osinski, RH Campbell, ...
arXiv preprint arXiv:1903.00374, 2019
284 2019
On variational bounds of mutual information
B Poole, S Ozair, A Van Den Oord, A Alemi, G Tucker
International Conference on Machine Learning, 5171-5180, 2019
232* 2019
Soft Co-Clustering of Data
FW Elliott, R Rohwer, SC Jones, GJ Tucker, CJ Kain, CN Weidert
US Patent App. 12/133,902, 2009
219 2009
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
G Tucker, A Mnih, CJ Maddison, D Lawson, J Sohl-Dickstein
arXiv preprint arXiv:1703.07370, 2017
206 2017
Offline reinforcement learning: Tutorial, review, and perspectives on open problems
S Levine, A Kumar, G Tucker, J Fu
arXiv preprint arXiv:2005.01643, 2020
165 2020
Stabilizing off-policy Q-learning via bootstrapping error reduction
A Kumar, J Fu, G Tucker, S Levine
arXiv preprint arXiv:1906.00949, 2019
155 2019
Deep Bayesian bandits showdown: An empirical comparison of Bayesian deep networks for Thompson sampling
C Riquelme, G Tucker, J Snoek
arXiv preprint arXiv:1802.09127, 2018
150 2018
Sample-efficient reinforcement learning with stochastic ensemble value expansion
J Buckman, D Hafner, G Tucker, E Brevdo, H Lee
arXiv preprint arXiv:1807.01675, 2018
149 2018
Learning to walk via deep reinforcement learning
T Haarnoja, S Ha, A Zhou, J Tan, G Tucker, S Levine
arXiv preprint arXiv:1812.11103, 2018
140 2018
Filtering variational objectives
CJ Maddison, D Lawson, G Tucker, N Heess, M Norouzi, A Mnih, ...
arXiv preprint arXiv:1705.09279, 2017
136 2017
Methods and devices for ignoring similar audio being received by a system
AD Rosen, MJ Rodehorst, GJ Tucker, ALM Challenner
US Patent 9,728,188, 2017
109 2017
Proteomic and functional genomic landscape of receptor tyrosine kinase and ras to extracellular signal–regulated kinase signaling
AA Friedman, G Tucker, R Singh, D Yan, A Vinayagam, Y Hu, R Binari, ...
Science signaling 4 (196), rs10-rs10, 2011
95 2011
Behavior regularized offline reinforcement learning
Y Wu, G Tucker, O Nachum
arXiv preprint arXiv:1911.11361, 2019
89 2019
Conservative Q-learning for offline reinforcement learning
A Kumar, A Zhou, G Tucker, S Levine
arXiv preprint arXiv:2006.04779, 2020
84 2020
D4RL: Datasets for deep data-driven reinforcement learning
J Fu, A Kumar, O Nachum, G Tucker, S Levine
arXiv preprint arXiv:2004.07219, 2020
82 2020
Articles 1–20