Document ranking with a pretrained sequence-to-sequence model. R Nogueira, Z Jiang, J Lin. arXiv preprint arXiv:2003.06713, 2020. Cited by 535.
What the DAAM: Interpreting Stable Diffusion using cross attention. R Tang, L Liu, A Pandey, Z Jiang, G Yang, K Kumar, P Stenetorp, J Lin, ... arXiv preprint arXiv:2210.04885, 2022. Cited by 120.
Investigating the limitations of transformers with simple arithmetic tasks. R Nogueira, Z Jiang, J Lin. arXiv preprint arXiv:2102.13019, 2021. Cited by 112.
"Low-resource" text classification: A parameter-free classification method with compressors. Z Jiang, M Yang, M Tsirlin, R Tang, Y Dai, J Lin. Findings of the Association for Computational Linguistics: ACL 2023, 6810-6828, 2023. Cited by 75.
PaperRobot: Incremental draft generation of scientific ideas. Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan. arXiv preprint arXiv:1905.07870, 2019. Cited by 62.
Describing a knowledge base. Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight. arXiv preprint arXiv:1809.01797, 2018. Cited by 58.
Navigation-based candidate expansion and pretrained language models for citation recommendation. R Nogueira, Z Jiang, K Cho, J Lin. Scientometrics 125 (3), 3001-3016, 2020. Cited by 19.
Chengyu cloze test. Z Jiang, B Zhang, L Huang, H Ji. Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building …, 2018. Cited by 15.
How does BERT rerank passages? An attribution analysis with information bottlenecks. Z Jiang, R Tang, J Xin, J Lin. Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021. Cited by 13.
Inserting information bottlenecks for attribution in transformers. Z Jiang, R Tang, J Xin, J Lin. arXiv preprint arXiv:2012.13838, 2020. Cited by 13.
Few-shot non-parametric learning with deep latent variable model. Z Jiang, Y Dai, J Xin, M Li, J Lin. Advances in Neural Information Processing Systems 35, 26448-26461, 2022. Cited by 11.
Evaluating pretrained transformer models for citation recommendation. R Nogueira, Z Jiang, K Cho, J Lin. CEUR Workshop Proceedings 2591, 89-100, 2020. Cited by 8.
Approximating Human-Like Few-shot Learning with GPT-based Compression. C Huang, Y Xie, Z Jiang, J Lin, M Li. arXiv preprint arXiv:2308.06942, 2023. Cited by 5.
Less is more: Parameter-free text classification with gzip. Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin. arXiv preprint arXiv:2212.09410, 2022. Cited by 5.
Building an efficiency pipeline: Commutativity and cumulativeness of efficiency operators for transformers. J Xin, R Tang, Z Jiang, Y Yu, J Lin. arXiv preprint arXiv:2208.00483, 2022. Cited by 2.
Less is More: Restricted Representations for Better Interpretability and Generalizability. Z Jiang. University of Waterloo, 2023.
Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers. J Xin, R Tang, Z Jiang, Y Yu, J Lin. Findings of the Association for Computational Linguistics: ACL 2023, 2870-2882, 2023.
With a Little Help from Gzip: Text Classification with No Training. Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin.
Narrating a Knowledge Base. Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight.