Ariel Herbert-Voss
Verified email at g.harvard.edu
Title · Cited by · Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in neural information processing systems 33, 1877-1901, 2020
Cited by 8656 · 2020
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
2020
Cited by 2786 · 2020
Extracting Training Data from Large Language Models.
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security Symposium 6, 2021
Cited by 494 · 2021
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 458 · 2021
Language models are few-shot learners. arXiv 2020
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165 4, 2020
Cited by 236 · 2020
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 203 · 2020
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 172 · 2019
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
Rev. Mat. Iberoam. 33 (1), 29-66, 2017
Cited by 16 · 2017
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
arXiv preprint arXiv:1411.5668, 2014
Cited by 4 · 2014
Language models are few-shot learners
B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ...
CoRR abs/200, 2020
Cited by 2 · 2020