An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... arXiv preprint arXiv:2010.11929, 2020 | 49450 | 2020 |
Object-centric learning with slot attention F Locatello, D Weissenborn, T Unterthiner, A Mahendran, G Heigold, ... Advances in neural information processing systems 33, 11525-11538, 2020 | 830 | 2020 |
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition G Tsatsaronis, G Balikas, P Malakasiotis, I Partalas, M Zschunke, ... BMC bioinformatics 16, 1-28, 2015 | 680 | 2015 |
Axial attention in multidimensional transformers J Ho, N Kalchbrenner, D Weissenborn, T Salimans arXiv preprint arXiv:1912.12180, 2019 | 608 | 2019 |
Simple open-vocabulary object detection M Minderer, A Gritsenko, A Stone, M Neumann, D Weissenborn, ... European Conference on Computer Vision, 728-755, 2022 | 453 | 2022 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... arXiv preprint arXiv:2010.11929, 2020 | 443 | 2020 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... arXiv preprint arXiv:2010.11929, 2020 | 419 | 2020 |
Making Neural QA as Simple as Possible but not Simpler D Weissenborn, G Wiese, L Seiffe CoNLL, 2017 | 326* | 2017 |
Scaling autoregressive video models D Weissenborn, O Täckström, J Uszkoreit arXiv preprint arXiv:1906.02634, 2019 | 238 | 2019 |
Colorization transformer M Kumar, D Weissenborn, N Kalchbrenner arXiv preprint arXiv:2102.04432, 2021 | 202 | 2021 |
Neural Domain Adaptation for Biomedical Question Answering G Wiese, D Weissenborn, M Neves CoNLL, 2017 | 152* | 2017 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... International Conference on Learning Representations, 2021 | 123 | 2021 |
Differentiable patch selection for image recognition JB Cordonnier, A Mahendran, A Dosovitskiy, D Weissenborn, J Uszkoreit, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021 | 99 | 2021 |
Dynamic Integration of Background Knowledge in Neural NLU Systems D Weissenborn, T Kočiský, C Dyer arXiv preprint arXiv:1706.02596, 2017 | 77* | 2017 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... International Conference on Learning Representations, 2021 | 72 | 2021 |
Simple open-vocabulary object detection with vision transformers M Minderer, A Gritsenko, A Stone, M Neumann, D Weissenborn, ... arXiv preprint arXiv:2205.06230, 2022 | 47 | 2022 |
Multi-objective optimization for the joint disambiguation of nouns and named entities D Weissenborn, L Hennig, F Xu, H Uszkoreit Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2015 | 47 | 2015 |
Event linking with sentential features from convolutional neural networks S Krause, F Xu, H Uszkoreit, D Weissenborn Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, 2016 | 46 | 2016 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... CoRR abs/2010.11929, 2020 | 43 | 2020 |
An image is worth 16x16 words: Transformers for image recognition at scale A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ... International Conference on Learning Representations, 2021 | 40 | 2021 |