Language models are few-shot learners T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ... Advances in neural information processing systems 33, 1877-1901, 2020 | 8656 | 2020 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... 2020 | 2786 | 2020 |
Extracting training data from large language models N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ... USENIX Security Symposium, 2021 | 494 | 2021 |
Evaluating large language models trained on code M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ... arXiv preprint arXiv:2107.03374, 2021 | 458 | 2021 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 236 | 2020 |
Toward trustworthy AI development: mechanisms for supporting verifiable claims M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ... arXiv preprint arXiv:2004.07213, 2020 | 203 | 2020 |
Release strategies and the social impacts of language models I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ... arXiv preprint arXiv:1908.09203, 2019 | 172 | 2019 |
Computing minimal interpolants in C^{1,1}(R^d) A Herbert-Voss, MJ Hirn, F McCollum Rev. Mat. Iberoam. 33 (1), 29-66, 2017 | 16 | 2017 |
Computing minimal interpolants in C^{1,1}(R^d) A Herbert-Voss, MJ Hirn, F McCollum arXiv preprint arXiv:1411.5668, 2014 | 4 | 2014 |
Language models are few-shot learners B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ... CoRR abs/2005.14165, 2020 | 2 | 2020 |