| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Chain-of-thought prompting elicits reasoning in large language models | J Wei, X Wang, D Schuurmans, M Bosma, E Chi, Q Le, D Zhou | NeurIPS 2022 | 8181* | 2022 |
| PaLM: Scaling language modeling with Pathways | A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ... | JMLR 2023 | 4556 | 2022 |
| GPT-4 technical report | J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... | arXiv preprint arXiv:2303.08774 | 3852 | 2023 |
| Finetuned language models are zero-shot learners | J Wei, M Bosma, V Zhao, K Guu, A Yu, B Lester, N Du, A Dai, Q Le | ICLR 2022 | 2795 | 2022 |
| Emergent abilities of large language models | J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ... | TMLR 2022 | 2645* | 2022 |
| Scaling instruction-finetuned language models | HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ... | JMLR 2024 | 2593 | 2022 |
| Easy data augmentation techniques for boosting performance on text classification tasks | J Wei, K Zou | EMNLP 2019 | 2429 | 2019 |
| Self-consistency improves chain-of-thought reasoning in language models | X Wang, J Wei, D Schuurmans, Q Le, E Chi, D Zhou | ICLR 2023 | 1943* | 2023 |
| Large language models encode clinical knowledge | K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung, N Scales, ... | Nature 2023 | 1679 | 2023 |
| Beyond the imitation game: Quantifying and extrapolating the capabilities of language models | A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... | TMLR 2023 | 999 | 2022 |
| Least-to-most prompting enables complex reasoning in large language models | D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, ... | ICLR 2023 | 920 | 2023 |
| A survey of data augmentation approaches for NLP | S Feng, V Gangal, J Wei, S Chandar, S Vosoughi, T Mitamura, E Hovy | ACL Findings 2021 | 812 | 2021 |
| The Flan Collection: Designing data and methods for effective instruction tuning | S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ... | ICML 2023 | 510 | 2023 |
| Challenging BIG-bench tasks and whether chain-of-thought can solve them | M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ... | ACL Findings 2023 | 445 | 2022 |
| Unifying language learning paradigms | Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ... | ICLR 2023 | 372* | 2023 |
| Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks | J Wei, L Tafe, Y Linnik, L Vaickus, N Tomita, S Hassanpour | Scientific Reports | 316 | 2019 |
| Larger language models do in-context learning differently | J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ... | arXiv preprint arXiv:2303.03846 | 217 | 2023 |
| Language models are multilingual chain-of-thought reasoners | F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ... | ICLR 2023 | 201 | 2023 |
| Attention-based deep neural networks for detection of cancerous and precancerous esophagus tissue on histopathological slides | N Tomita, B Abdollahi, J Wei, B Ren, A Suriawinata, S Hassanpour | JAMA Network Open | 174 | 2019 |
| A recipe for arbitrary text style transfer with large language models | E Reif, D Ippolito, A Yuan, A Coenen, C Callison-Burch, J Wei | ACL 2022 | 148 | 2022 |