1. R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, ... "On the Opportunities and Risks of Foundation Models." arXiv preprint arXiv:2108.07258, 2021. Cited by 3757.
2. P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y. Zhang, ... "Holistic Evaluation of Language Models." arXiv preprint arXiv:2211.09110, 2022. Cited by 934.
3. L. Zheng, N. Guha, B. R. Anderson, P. Henderson, D. E. Ho. "When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset." arXiv preprint arXiv:2104.08671, 2021. Cited by 191*.
4. P. Henderson, M. S. Krass, L. Zheng, N. Guha, C. D. Manning, D. Jurafsky, ... "Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset." 2022. Cited by 70.
5. J. Niklaus, L. Zheng, A. D. McCarthy, C. Hahn, B. M. Rosen, P. Henderson, ... "FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning." arXiv preprint arXiv:2404.02127, 2024. Cited by 2.
6. K. Gligoric, M. Cheng, L. Zheng, E. Durmus, D. Jurafsky. "NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps." arXiv preprint arXiv:2404.01651, 2024. Cited by 1.