Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition). DJ Klionsky, AK Abdel-Aziz, S Abdelfatah, M Abdellatif, A Abdoli, S Abel, et al. Autophagy 17 (1), 1-382, 2021. Cited by 13214.
On the variance of the adaptive learning rate and beyond. L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han. arXiv preprint arXiv:1908.03265, 2019. Cited by 2023.
DeBERTa: Decoding-enhanced BERT with disentangled attention. P He, X Liu, J Gao, W Chen. arXiv preprint arXiv:2006.03654, 2020. Cited by 2011.
Unified language model pre-training for natural language understanding and generation. L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon. Advances in Neural Information Processing Systems 32, 2019. Cited by 1612.
Domain-specific language model pretraining for biomedical natural language processing. Y Gu, R Tinn, H Cheng, M Lucas, N Usuyama, X Liu, T Naumann, J Gao, et al. ACM Transactions on Computing for Healthcare (HEALTH) 3 (1), 1-23, 2021. Cited by 1474.
Multi-task deep neural networks for natural language understanding. X Liu, P He, W Chen, J Gao. arXiv preprint arXiv:1901.11504, 2019. Cited by 1337.
MS MARCO: A human generated machine reading comprehension dataset. P Bajaj, D Campos, N Craswell, L Deng, J Gao, X Liu, R Majumder, et al. arXiv preprint arXiv:1611.09268, 2016. Cited by 640.
Representation learning using multi-task deep neural networks for semantic classification and information retrieval. X Liu, J Gao, X He, L Deng, K Duh, YY Wang. 2015. Cited by 492.
RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. B Wang, R Shin, X Liu, O Polozov, M Richardson. arXiv preprint arXiv:1911.04942, 2019. Cited by 479.
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. H Jiang, P He, W Chen, X Liu, J Gao, T Zhao. arXiv preprint arXiv:1911.03437, 2019. Cited by 424.
Cyclical annealing schedule: A simple approach to mitigating KL vanishing. H Fu, C Li, X Liu, J Gao, A Celikyilmaz, L Carin. arXiv preprint arXiv:1903.10145, 2019. Cited by 382.
UniLMv2: Pseudo-masked language models for unified language model pre-training. H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, et al. International Conference on Machine Learning, 642-652, 2020. Cited by 379.
Specification and estimation of social interaction models with network structures. L Lee, X Liu, X Lin. The Econometrics Journal 13 (2), 145-176, 2010. Cited by 379.
The conduct of drug metabolism studies considered good practice (II): in vitro experiments. L Jia, X Liu. Current Drug Metabolism 8 (8), 822-829, 2007. Cited by 345.
The involvement of P-glycoprotein in berberine absorption. G Pan, GJ Wang, XD Liu, JP Fawcett, YY Xie. Pharmacology & Toxicology 91 (4), 193-197, 2002. Cited by 273.
Non-fullerene acceptor with low energy loss and high external quantum efficiency: towards high performance polymer solar cells. Y Li, X Liu, FP Wu, Y Zhou, ZQ Jiang, B Song, Y Xia, ZG Zhang, F Gao, et al. Journal of Materials Chemistry A 4 (16), 5890-5897, 2016. Cited by 247.
ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. S Zhang, X Liu, J Liu, J Gao, K Duh, B Van Durme. arXiv preprint arXiv:1810.12885, 2018. Cited by 246.
Understanding the difficulty of training transformers. L Liu, X Liu, J Gao, W Chen, J Han. arXiv preprint arXiv:2004.08249, 2020. Cited by 245.
Stochastic answer networks for machine reading comprehension. X Liu, Y Shen, K Duh, J Gao. arXiv preprint arXiv:1712.03556, 2017. Cited by 231.
ABC family transporters. X Liu. Drug Transporters in Drug Disposition, Effects and Toxicity, 13-100, 2019. Cited by 222.