Xinyun Chen
Google DeepMind
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
Delving into transferable adversarial examples and black-box attacks
Y Liu, X Chen, C Liu, D Song
arXiv preprint arXiv:1611.02770, 2016
Cited by 1859 · 2016
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 1632 · 2022
Targeted backdoor attacks on deep learning systems using data poisoning
X Chen, C Liu, B Li, K Lu, D Song
arXiv preprint arXiv:1712.05526, 2017
Cited by 1586 · 2017
Competition-level code generation with AlphaCode
Y Li, D Choi, J Chung, N Kushman, J Schrittwieser, R Leblond, T Eccles, ...
Science 378 (6624), 1092-1097, 2022
Cited by 599 · 2022
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX workshop on offensive technologies (WOOT 17), 2017
Cited by 427 · 2017
Learning to perform local rewriting for combinatorial optimization
X Chen, Y Tian
Advances in neural information processing systems 32, 2019
Cited by 332 · 2019
Tree-to-tree neural networks for program translation
X Chen, C Liu, D Song
Advances in neural information processing systems 31, 2018
Cited by 268 · 2018
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
M Goldblum, D Tsipras, C Xie, X Chen, A Schwarzschild, D Song, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (2), 1563-1580, 2022
Cited by 252* · 2022
Teaching large language models to self-debug
X Chen, M Lin, N Schärli, D Zhou
arXiv preprint arXiv:2304.05128, 2023
Cited by 204 · 2023
Large language models can be easily distracted by irrelevant context
F Shi, X Chen, K Misra, N Scales, D Dohan, EH Chi, N Schärli, D Zhou
International Conference on Machine Learning, 31210-31227, 2023
Cited by 166* · 2023
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
arXiv preprint arXiv:2309.03409, 2023
Cited by 162 · 2023
Execution-guided neural program synthesis
X Chen, C Liu, D Song
International Conference on Learning Representations, 2018
Cited by 145 · 2018
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
Cited by 143 · 2023
REFIT: a unified watermark removal framework for deep learning systems with limited data
X Chen, W Wang, C Bender, Y Ding, R Jia, B Li, D Song
Proceedings of the 2021 ACM Asia Conference on Computer and Communications …, 2021
Cited by 117* · 2021
Fooling vision and language models despite localization and attention mechanism
X Xu, X Chen, C Liu, A Rohrbach, T Darrell, D Song
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 110* · 2018
Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension
X Chen, C Liang, AW Yu, D Zhou, D Song, QV Le
International Conference on Learning Representations, 2019
Cited by 109 · 2019
Latent attention for if-then program synthesis
C Liu, X Chen, EC Shin, M Chen, D Song
Advances in Neural Information Processing Systems 29, 2016
Cited by 98 · 2016
Compositional generalization via neural-symbolic stack machines
X Chen, C Liang, AW Yu, D Song, D Zhou
Advances in Neural Information Processing Systems 33, 1690-1701, 2020
Cited by 88 · 2020
RobustART: Benchmarking robustness on architecture design and training techniques
S Tang, R Gong, Y Wang, A Liu, J Wang, X Chen, F Yu, X Liu, D Song, ...
arXiv preprint arXiv:2109.05211, 2021
Cited by 84 · 2021
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
arXiv preprint arXiv:2305.17126, 2023
Cited by 83 · 2023