Aurick Zhou
Title
Cited by
Year
Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor
T Haarnoja, A Zhou, P Abbeel, S Levine
International Conference on Machine Learning, 1861-1870, 2018
Cited by 1789 · 2018
Soft actor-critic algorithms and applications
T Haarnoja, A Zhou, K Hartikainen, G Tucker, S Ha, J Tan, V Kumar, ...
arXiv preprint arXiv:1812.05905, 2018
Cited by 473 · 2018
Efficient off-policy meta-reinforcement learning via probabilistic context variables
K Rakelly, A Zhou, C Finn, S Levine, D Quillen
International Conference on Machine Learning, 5331-5340, 2019
Cited by 183 · 2019
Learning to walk via deep reinforcement learning
T Haarnoja, S Ha, A Zhou, J Tan, G Tucker, S Levine
arXiv preprint arXiv:1812.11103, 2018
Cited by 140 · 2018
Composable deep reinforcement learning for robotic manipulation
T Haarnoja, V Pong, A Zhou, M Dalal, P Abbeel, S Levine
2018 IEEE International Conference on Robotics and Automation (ICRA), 6244-6251, 2018
Cited by 123 · 2018
Conservative q-learning for offline reinforcement learning
A Kumar, A Zhou, G Tucker, S Levine
arXiv preprint arXiv:2006.04779, 2020
Cited by 84 · 2020
Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation
A Zhou, S Levine
International Conference on Machine Learning, 12803-12812, 2021
Cited by 2* · 2021
MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning
K Li, A Gupta, A Reddy, VH Pong, A Zhou, J Yu, S Levine
International Conference on Machine Learning, 6346-6356, 2021
2021