Yao Liu
Amazon
Verified email at stanford.edu - Homepage
Title · Cited by · Year
Provably good batch reinforcement learning without great exploration
Y Liu, A Swaminathan, A Agarwal, E Brunskill
Advances in Neural Information Processing Systems 33, 1264–1274, 2020
Cited by 200 · 2020
Off-Policy Policy Gradient with Stationary Distribution Correction
Y Liu, A Swaminathan, A Agarwal, E Brunskill
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference …, 2019
Cited by 170* · 2019
Representation balancing MDPs for off-policy policy evaluation
Y Liu, O Gottesman, A Raghu, M Komorowski, A Faisal, F Doshi-Velez, ...
Advances in Neural Information Processing Systems 31, 2644-2653, 2018
Cited by 74 · 2018
Interpretable off-policy evaluation in reinforcement learning by highlighting influential transitions
O Gottesman, J Futoma, Y Liu, S Parbhoo, L Celi, E Brunskill, ...
International Conference on Machine Learning, 3658-3667, 2020
Cited by 51 · 2020
Understanding the curse of horizon in off-policy evaluation via conditional importance sampling
Y Liu, PL Bacon, E Brunskill
International Conference on Machine Learning, 6184-6193, 2020
Cited by 39 · 2020
Behaviour policy estimation in off-policy policy evaluation: Calibration matters
A Raghu, O Gottesman, Y Liu, M Komorowski, A Faisal, F Doshi-Velez, ...
arXiv preprint arXiv:1807.01066, 2018
Cited by 38 · 2018
Combining parametric and nonparametric models for off-policy evaluation
O Gottesman, Y Liu, S Sussex, E Brunskill, F Doshi-Velez
International Conference on Machine Learning, 2366-2375, 2019
Cited by 30 · 2019
When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms
Y Liu, E Brunskill
The 14th European Workshop on Reinforcement Learning, 2018
Cited by 23 · 2018
PAC continuous state online multitask reinforcement learning with identification
Y Liu, Z Guo, E Brunskill
Proceedings of the 2016 International Conference on Autonomous Agents …, 2016
Cited by 18 · 2016
Reinforcement learning tutor better supported lower performers in a math task
S Ruan, A Nie, W Steenbergen, J He, JQ Zhang, M Guo, Y Liu, ...
Machine Learning, 1-26, 2024
Cited by 8 · 2024
All-action policy gradient methods: A numerical integration approach
B Petit, L Amdahl-Culleton, Y Liu, J Smith, PL Bacon
arXiv preprint arXiv:1910.09093, 2019
Cited by 5 · 2019
Nonlinear Dimensionality Reduction by Local Orthogonality Preserving Alignment
T Lin, Y Liu, B Wang, LW Wang, HB Zha
Journal of Computer Science and Technology 31 (3), 512-524, 2016
Cited by 3* · 2016
Offline policy optimization with eligible actions
Y Liu, Y Flet-Berliac, E Brunskill
Uncertainty in Artificial Intelligence, 1253-1263, 2022
Cited by 2 · 2022
Provably sample-efficient RL with side information about latent dynamics
Y Liu, D Misra, M Dudík, RE Schapire
Advances in Neural Information Processing Systems 35, 33482-33493, 2022
Cited by 1 · 2022
Stitched trajectories for off-policy learning
S Sussex, O Gottesman, Y Liu, S Murphy, E Brunskill, F Doshi-Velez
ICML Workshop, 2018
Cited by 1 · 2018
Budgeting counterfactual for offline RL
Y Liu, P Chaudhari, R Fakoor
Advances in Neural Information Processing Systems 36, 2024
2024
TD Convergence: An Optimization Perspective
K Asadi, S Sabach, Y Liu, O Gottesman, R Fakoor
Advances in Neural Information Processing Systems 36, 2024
2024
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
Z Liu, J Zhang, K Asadi, Y Liu, D Zhao, S Sabach, R Fakoor
arXiv preprint arXiv:2310.05905, 2023
2023
Model Selection for Off-Policy Policy Evaluation
Y Liu, PS Thomas, E Brunskill
The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2017
2017