Publication | Cited by | Year |
Fast and scalable Bayesian deep learning by weight-perturbation in Adam. M Khan, D Nielsen, V Tangkaratt, W Lin, Y Gal, A Srivastava. International Conference on Machine Learning, 2611-2620, 2018. | 316 | 2018 |
Imitation learning from imperfect demonstration. YH Wu, N Charoenphakdee, H Bao, V Tangkaratt, M Sugiyama. International Conference on Machine Learning, 6818-6827, 2019. | 181 | 2019 |
Variational imitation learning with diverse-quality demonstrations. V Tangkaratt, B Han, ME Khan, M Sugiyama. International Conference on Machine Learning, 9407-9417, 2020. | 50 | 2020 |
TD-regularized actor-critic methods. S Parisi, V Tangkaratt, J Peters, ME Khan. Machine Learning 108, 1467-1501, 2019. | 49 | 2019 |
Efficient sample reuse in policy gradients with parameter-based exploration. T Zhao, H Hachiya, V Tangkaratt, J Morimoto, M Sugiyama. Neural Computation 25 (6), 1512-1547, 2013. | 46 | 2013 |
Active deep Q-learning with demonstration. SA Chen, V Tangkaratt, HT Lin, M Sugiyama. Machine Learning 109 (9), 1699-1725, 2020. | 41 | 2020 |
Hierarchical reinforcement learning via advantage-weighted information maximization. T Osa, V Tangkaratt, M Sugiyama. arXiv preprint arXiv:1901.01365, 2019. | 40 | 2019 |
Discovering diverse solutions in deep reinforcement learning by maximizing state–action-based mutual information. T Osa, V Tangkaratt, M Sugiyama. Neural Networks 152, 90-104, 2022. | 35* | 2022 |
Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation. V Tangkaratt, S Mori, T Zhao, J Morimoto, M Sugiyama. Neural Networks 57, 128-140, 2014. | 35 | 2014 |
Robust imitation learning from noisy demonstrations. V Tangkaratt, N Charoenphakdee, M Sugiyama. arXiv preprint arXiv:2010.10181, 2020. | 30 | 2020 |
Guide actor-critic for continuous control. V Tangkaratt, A Abdolmaleki, M Sugiyama. arXiv preprint arXiv:1705.07606, 2017. | 27 | 2017 |
Model-based reinforcement learning with dimension reduction. V Tangkaratt, J Morimoto, M Sugiyama. Neural Networks 84, 1-16, 2016. | 25 | 2016 |
Policy search with high-dimensional context variables. V Tangkaratt, H Van Hoof, S Parisi, G Neumann, J Peters, M Sugiyama. Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017. | 22 | 2017 |
Variational adaptive-Newton method for explorative learning. ME Khan, W Lin, V Tangkaratt, Z Liu, D Nielsen. arXiv preprint arXiv:1711.05560, 2017. | 20 | 2017 |
Vprop: Variational inference using RMSprop. ME Khan, Z Liu, V Tangkaratt, Y Gal. arXiv preprint arXiv:1712.01038, 2017. | 18 | 2017 |
Direct conditional probability density estimation with sparse feature selection. M Shiga, V Tangkaratt, M Sugiyama. Machine Learning 100, 161-182, 2015. | 16 | 2015 |
Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization. V Tangkaratt, N Xie, M Sugiyama. Neural Computation 27 (1), 228-254, 2014. | 15 | 2014 |
Simultaneous Planning for Item Picking and Placing by Deep Reinforcement Learning. T Tanaka, T Kaneko, M Sekine, V Tangkaratt, M Sugiyama. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020. | 13 | 2020 |
Direct estimation of the derivative of quadratic mutual information with application in supervised dimension reduction. V Tangkaratt, H Sasaki, M Sugiyama. Neural Computation 29 (8), 2076-2122, 2017. | 13 | 2017 |
Meta-model-based meta-policy optimization. T Hiraoka, T Imagawa, V Tangkaratt, T Osa, T Onishi, Y Tsuruoka. Asian Conference on Machine Learning, 129-144, 2021. | 12 | 2021 |