Jonathan N. Lee
Other names: Jonathan Lee
DART: Noise Injection for Robust Imitation Learning
M Laskey, J Lee, R Fox, A Dragan, K Goldberg
Conference on Robot Learning, 143-156, 2017
Robot grasping in clutter: Using a hierarchy of supervisors for learning from demonstrations
M Laskey, J Lee, C Chuck, D Gealy, W Hsieh, FT Pokorny, AD Dragan, ...
2016 IEEE International Conference on Automation Science and Engineering …, 2016
Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations
M Laskey, C Chuck, J Lee, J Mahler, S Krishnan, K Jamieson, A Dragan, ...
2017 IEEE International Conference on Robotics and Automation (ICRA), 358-365, 2017
Online Model Selection for Reinforcement Learning with Function Approximation
JN Lee, A Pacchiano, V Muthukumar, W Kong, E Brunskill
arXiv preprint arXiv:2011.09750, 2020
On-Policy Robot Imitation Learning from a Converging Supervisor
A Balakrishna, B Thananjeyan, J Lee, A Zahed, F Li, JE Gonzalez, ...
arXiv preprint arXiv:1907.03423, 2019
Online Learning with Continuous Variations: Dynamic Regret and Reductions
CA Cheng, J Lee, K Goldberg, B Boots
arXiv preprint arXiv:1902.07286, 2019
Generalizing Robot Imitation Learning with Invariant Hidden Semi-Markov Models
AK Tanwani, J Lee, B Thananjeyan, M Laskey, S Krishnan, R Fox, ...
arXiv preprint arXiv:1811.07489, 2018
A dynamic regret analysis and adaptive regularization algorithm for on-policy robot imitation learning
J Lee, M Laskey, A Kumar Tanwani, A Aswani, K Goldberg
Proceedings of the 13th Workshop on the Algorithmic Foundations of Robotics …, 2018
Design of Experiments for Stochastic Contextual Linear Bandits
A Zanette, K Dong, J Lee, E Brunskill
arXiv preprint arXiv:2107.09912, 2021
Model Selection in Batch Policy Optimization
JN Lee, G Tucker, O Nachum, B Dai
arXiv preprint arXiv:2112.12320, 2021
Dueling RL: Reinforcement Learning with Trajectory Preferences
A Pacchiano, A Saha, J Lee
arXiv preprint arXiv:2111.04850, 2021
Near Optimal Policy Optimization via REPS
A Pacchiano, J Lee, P Bartlett, O Nachum
arXiv preprint arXiv:2103.09756, 2021
Improved Estimator Selection for Off-Policy Evaluation
G Tucker, J Lee
Workshop on Reinforcement Learning Theory at the 38th International …, 2021
Convergence Rates of Smooth Message Passing with Rounding in Entropy-Regularized MAP Inference
J Lee, A Pacchiano, M Jordan
International Conference on Artificial Intelligence and Statistics, 3003-3014, 2020
Stability analysis of on-policy imitation learning algorithms using dynamic regret
J Lee, M Laskey, AK Tanwani, K Goldberg
RSS Workshop on Imitation and Causality, 2018
Model-Free Error Detection and Recovery for Robot Learning from Demonstration
J Lee, M Laskey, R Fox, K Goldberg
Oracle Inequalities for Model Selection in Offline Reinforcement Learning
JN Lee, G Tucker, O Nachum, B Dai, E Brunskill
arXiv preprint arXiv:2211.02016, 2022
Learning in POMDPs is Sample-Efficient with Hindsight Observability
JN Lee, A Agarwal, C Dann, T Zhang
arXiv preprint arXiv:2301.13857, 2023
Accelerated Message Passing for Entropy-Regularized MAP Inference
JN Lee, A Pacchiano, P Bartlett, MI Jordan
arXiv preprint arXiv:2007.00699, 2020
Approximate Sherali-Adams Relaxations for MAP Inference via Entropy Regularization
JN Lee, A Pacchiano, MI Jordan
arXiv preprint arXiv:1907.01127, 2019